【Title】: Why does Spark fail with "java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V"?
【Posted】: 2021-08-01 21:28:21
【Description】:

I am using Spark Streaming 2.10, Kafka_2.11-0.10.0.0, and Spark-streaming-0-10-2.11-2.10.

spark-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.7.0_80
Branch 
Compiled by user jenkins on 2016-12-16T02:04:48Z

I use Maven to build the project. Here are the dependencies:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.11</artifactId>
  <version>2.1.0</version>
  <scope>provided</scope>
</dependency> 
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
  <version>2.1.0</version>
</dependency>

The application runs correctly when launched from Eclipse via:

right click -> run as -> maven build -> clean install -> run

However, when I spark-submit the application as follows:

spark-submit \
  --jars=/opt/ibudata/binlogSparkStreaming/kafka_2.11-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/kafka-clients-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/metrics-core-2.2.0.jar,/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar,/opt/ibudata/binlogSparkStreaming/zkclient-0.3.jar \
  --class com.br.sparkStreaming.wordcount \
  --master spark:m20p183:7077 \
  --executor-memory 2g \
  --num-executors 3 \
  /opt/ibudata/binlogSparkStreaming/jars/wordcounttest8-0.0.1-SNAPSHOT.jar

...it fails with the following error:

io.netty.handler.codec.EncoderException: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:651)
    at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:706)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:895)
    at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:240)
    at org.apache.spark.network.server.TransportRequestHandler.respond(TransportRequestHandler.java:194)
    at org.apache.spark.network.server.TransportRequestHandler.processStreamRequest(TransportRequestHandler.java:150)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:54)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:33)
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
    ... 36 more

Any suggestions would be greatly appreciated.

After various attempts with no good result, I tried the example provided by the Spark Streaming + Kafka Integration Guide (Kafka broker version 0.8.2.1 or higher), running it with: bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 test

But it reports the same error: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>

So I suspect some jars on the classpath cause this problem. My spark-env.sh:

export JAVA_HOME=//opt/jdk1.7
export SCALA_HOME=/opt/scala
export SPARK_HOME=/opt/spark
export HADOOP_HOME=/opt/hadoop2.7.3
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop  
#export SPARK_MASTER_IP=master1  
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=m20p180:2181,m20p181:2181,m20p182:2181 -Dspark.deploy.zookeeper.dir=/spark"  
export SPARK_WORKER_MEMORY=1g  
export SPARK_EXECUTOR_MEMORY=1g  
export SPARK_DRIVER_MEMORY=1g  
export SPARK_WORKER_CORES=4
export HIVE_CONF_DIR=/opt/hadoop2.7.3/hive/conf
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar

As you can see, I added netty-all-4.1.12.Final.jar to the classpath, but it doesn't work.
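A classpath built from this many overlapping directories makes it easy for two netty versions to shadow each other. Below is a minimal sketch of how one might scan a colon-separated classpath for artifacts present in more than one version; `conflicting_artifacts` is a hypothetical helper (not part of Spark), and it assumes jar files are named `artifact-version.jar`:

```python
# Sketch: find artifacts that appear in more than one version on a
# colon-separated Java classpath (assumes artifact-version.jar naming).
import re
from collections import defaultdict

def conflicting_artifacts(classpath: str) -> dict:
    """Map artifact name -> set of versions, for artifacts seen in >1 version."""
    versions = defaultdict(set)
    for entry in classpath.split(":"):
        basename = entry.rsplit("/", 1)[-1]
        m = re.match(r"(.+?)-(\d[\w.]*?)\.jar$", basename)
        if m:
            versions[m.group(1)].add(m.group(2))
    return {a: v for a, v in versions.items() if len(v) > 1}

# Illustrative classpath mixing Spark's bundled netty with a manually added one:
cp = ("/opt/spark/jars/netty-all-4.0.42.Final.jar:"
      "/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar:"
      "/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar")
print(conflicting_artifacts(cp))  # reports netty-all in two versions
```

If such a script reports two netty versions, whichever jar happens to come first on the classpath wins, which is exactly the situation that produces a NoSuchMethodError at runtime.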

I also started the example with: SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 test

The output:

Spark Command: //opt/jdk1.7/bin/java -cp /opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/:/opt/hadoop2.7.3/share/hadoop/yarn/lib/:/opt/hadoop2.7.3/share/hadoop/common/:/opt/hadoop2.7.3/share/hadoop/common/lib/:/opt/hadoop2.7.3/share/hadoop/hdfs/:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/:/opt/hadoop2.7.3/share/hadoop/mapreduce/:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/:/opt/hadoop2.7.3/share/hadoop/tools/lib/:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/:/opt/hadoop2.7.3/share/hadoop/yarn/lib/:/opt/hadoop2.7.3/share/hadoop/common/:/opt/hadoop2.7.3/share/hadoop/common/lib/:/opt/hadoop2.7.3/share/hadoop/hdfs/:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/:/opt/hadoop2.7.3/share/hadoop/mapreduce/:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/:/opt/hadoop2.7.3/share/hadoop/tools/lib/:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/ha
doop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar:/opt/spark/conf/:/opt/spark/jars/*:/opt/hadoop2.7.3/etc/hadoop/ -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --jars /opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar,/opt/spark/examples/jars/scopt_2.11-3.3.0.jar --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount spark-internal 172.18.30.22:9092 test
========================================
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2017-07-04 18:37:01,123 INFO  [main] spark.SparkContext (Logging.scala:logInfo(54)) - Running Spark version 2.1.0
2017-07-04 18:37:01,129 WARN  [main] spark.SparkContext (Logging.scala:logWarning(66)) - Support for Java 7 is deprecated as of Spark 2.0.0
2017-07-04 18:37:02,304 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - 
SPARK_CLASSPATH was detected (set to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/ha
doop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

2017-07-04 18:37:02,308 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - Setting 'spark.executor.extraClassPath' to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3
/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar' as a work-around.
2017-07-04 18:37:02,309 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - Setting 'spark.driver.extraClassPath' to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/s
hare/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar' as a work-around.
2017-07-04 18:37:02,524 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing view acls to: ibudata
2017-07-04 18:37:02,526 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing modify acls to: ibudata
2017-07-04 18:37:02,528 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing view acls groups to: 
2017-07-04 18:37:02,530 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing modify acls groups to: 
2017-07-04 18:37:02,532 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(ibudata); groups with view permissions: Set(); users  with modify permissions: Set(ibudata); groups with modify permissions: Set()
2017-07-04 18:37:03,091 INFO  [main] util.Utils (Logging.scala:logInfo(54)) - Successfully started service 'sparkDriver' on port 35480.
2017-07-04 18:37:03,127 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering MapOutputTracker
2017-07-04 18:37:03,162 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering BlockManagerMaster
2017-07-04 18:37:03,166 INFO  [main] storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(54)) - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2017-07-04 18:37:03,167 INFO  [main] storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(54)) - BlockManagerMasterEndpoint up
2017-07-04 18:37:03,185 INFO  [main] storage.DiskBlockManager (Logging.scala:logInfo(54)) - Created local directory at /tmp/blockmgr-20f80f78-0e27-462d-b5a4-1e0067308861
2017-07-04 18:37:03,214 INFO  [main] memory.MemoryStore (Logging.scala:logInfo(54)) - MemoryStore started with capacity 408.9 MB
2017-07-04 18:37:03,330 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering OutputCommitCoordinator
2017-07-04 18:37:03,458 INFO  [main] util.log (Log.java:initialized(186)) - Logging initialized @4162ms
2017-07-04 18:37:03,623 INFO  [main] server.Server (Server.java:doStart(327)) - jetty-9.2.z-SNAPSHOT
2017-07-04 18:37:03,652 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@ef93f0{/jobs,null,AVAILABLE}
2017-07-04 18:37:03,653 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@70d9720a{/jobs/json,null,AVAILABLE}
2017-07-04 18:37:03,653 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@53ce2867{/jobs/job,null,AVAILABLE}
2017-07-04 18:37:03,654 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@3bead2d{/jobs/job/json,null,AVAILABLE}
2017-07-04 18:37:03,654 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@5b5b6746{/stages,null,AVAILABLE}
.....................
client.TransportClientFactory (TransportClientFactory.java:createClient(250)) - Successfully created connection to /192.168.22.197:35480 after 64 ms (0 ms spent in bootstraps)
2017-07-04 18:37:07,261 INFO  [Executor task launch worker-0] util.Utils (Logging.scala:logInfo(54)) - Fetching spark://192.168.22.197:35480/jars/spark-examples_2.11-2.1.0.jar to /tmp/spark-5110e687-a732-4762-8d74-a7c13a035681/userFiles-69ba7cd2-0014-40d7-8ac8-6b73cd07ce41/fetchFileTemp1907532202323588721.tmp
2017-07-04 18:37:07,367 ERROR [shuffle-server-3-2] server.TransportRequestHandler (TransportRequestHandler.java:operationComplete(201)) - Error sending result StreamResponse{streamId=/jars/spark-examples_2.11-2.1.0.jar, byteCount=1950712, body=FileSegmentManagedBuffer{file=/opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar, offset=0, length=1950712}} to /192.168.22.197:41069; closing connection
io.netty.handler.codec.EncoderException: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:651)
    at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:706)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:895)
    at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:240)
    at org.apache.spark.network.server.TransportRequestHandler.respond(TransportRequestHandler.java:194)
    at org.apache.spark.network.server.TransportRequestHandler.processStreamRequest(TransportRequestHandler.java:150)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:54)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:33)
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
    ... 36 more
2017-07-04 18:37:07,379 ERROR [shuffle-client-6-1] client.TransportResponseHandler (TransportResponseHandler.java:channelInactive(126)) - Still have 1 requests outstanding when connection from /192.168.22.197:35480 is closed
2017-07-04 18:37:08,054 INFO  [JobGenerator] scheduler.JobScheduler (Logging.scala:logInfo(54)) - Added jobs for time 1499164628000 ms

【Comments】:

  • How did you start Spark Standalone? What version? Could you paste the master's web UI at m20p183:8080?
  • This problem is still unsolved. It's not only my own program: the example provided by the Spark Streaming + Kafka Integration Guide reports the same error. I added the latest netty-all-4.1.12.Final to spark/jars instead of the other netty jars, but it made no difference, which really confuses me!
  • How did you run the example? Could you edit your question and add the output of env? Could you run the example with SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/run-example and paste the output into the question? Thanks!
  • Can you make sure you have not set SPARK_CLASSPATH? Please re-run run-example with SPARK_CLASSPATH= prepended to the command, i.e. SPARK_CLASSPATH= run-example. I think SPARK_CLASSPATH is the culprit.
  • I tried the command with an empty SPARK_CLASSPATH: SPARK_CLASSPATH= ./bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 test. It reported another error: java.lang.AbstractMethodError

Tags: apache-spark spark-streaming


【Answer 1】:

After a lot of struggle, I finally solved it: add netty-4.0.42.Final to spark_classpath. Remember that your Spark is a cluster, so you must make the change not only on the master but also on the slaves; that is what blocked me for so long. Finally, many thanks to Jacek Laskowski, you are very kind.

【Comments】:

  • Glad I could help. I'm still puzzled, though, why you made so many changes to the environment that it ended up needing the extra netty-4.0.42.Final jar?!
  • I wondered too; it turned out to have been done by an ops engineer.
  • Talk to the engineer, because Spark may have been broken by the way it was installed. I would avoid operating such a Spark installation unless there is a good reason.
  • Thanks, I spent 3 days on this problem. This post helped!
【Answer 2】:

You definitely don't want to spark-submit with the following jars:

--jars=/opt/ibudata/binlogSparkStreaming/kafka_2.11-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/kafka-clients-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/metrics-core-2.2.0.jar,/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar,/opt/ibudata/binlogSparkStreaming/zkclient-0.3.jar

You only want to include spark-streaming-kafka-0-8_2.11-2.1.0.jar, which may itself also be too high for your deployment environment:

--jars=/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar

You should remove the other jars from --jars in spark-submit.

I would start with a local deployment environment first, and only spark-submit the Spark application to Hadoop YARN once it works there.

Try the following first and get it working:

spark-submit \
  --jars /opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar \
  --class com.br.sparkStreaming.wordcount \
  /opt/ibudata/binlogSparkStreaming/jars/wordcounttest8-0.0.1-SNAPSHOT.jar

Note that --jars does not use = to specify its argument (I did not know the = form would even be accepted).

My guess is that the environment you spark-submit in runs a different Spark version, lower than 2.1.0, which is incompatible with what you bundled in your uber jar (I suspect you assembled an uber jar that you eventually spark-submit).
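One way to test the uber-jar theory is to look inside the assembled jar for bundled netty classes, since a shaded copy can shadow the cluster's version. Here is a rough sketch of such a check (`bundled_netty_classes` is a hypothetical helper, and the jar path in the usage comment is only illustrative):

```python
# Sketch: list netty .class entries packaged inside a jar file.
# A jar is just a zip archive, so the stdlib zipfile module can read it.
import zipfile

def bundled_netty_classes(jar_path):
    """Return the io.netty class entries bundled in the jar (empty if none)."""
    with zipfile.ZipFile(jar_path) as jar:
        return [name for name in jar.namelist()
                if name.startswith("io/netty/") and name.endswith(".class")]

# e.g. bundled_netty_classes("wordcounttest8-0.0.1-SNAPSHOT.jar")
```

If this returns anything, the uber jar carries its own copy of netty, and classpath ordering decides which copy the JVM loads.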

As you can see in the stack trace, the error comes from:

java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
  at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)

That particular line 133 was changed recently in [SPARK-15178][CORE] Remove LazyFileRegion instead use netty's DefaultFileRegion and is only available in 2.1.0 and later, which you happen to use.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
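For readers unfamiliar with the notation: the descriptor in the error message, (Ljava/io/File;JJ)V, is the JVM's internal encoding for a constructor taking (java.io.File, long, long) and returning void. A small illustrative decoder (a sketch only, handling just the common type codes) makes this concrete:

```python
# Sketch: decode a JVM method descriptor such as "(Ljava/io/File;JJ)V"
# into human-readable parameter and return types.
def decode_descriptor(desc):
    primitives = {'B': 'byte', 'C': 'char', 'D': 'double', 'F': 'float',
                  'I': 'int', 'J': 'long', 'S': 'short', 'Z': 'boolean'}
    params, i = [], 1          # position 0 is the opening '('
    while desc[i] != ')':
        dims = 0
        while desc[i] == '[':  # leading '[' marks an array dimension
            dims += 1
            i += 1
        if desc[i] == 'L':     # object type: Lpkg/Class;
            end = desc.index(';', i)
            t = desc[i + 1:end].replace('/', '.')
            i = end + 1
        else:                  # single-letter primitive code
            t = primitives[desc[i]]
            i += 1
        params.append(t + '[]' * dims)
    ret = 'void' if desc[i + 1] == 'V' else primitives.get(desc[i + 1], desc[i + 1:])
    return params, ret

print(decode_descriptor("(Ljava/io/File;JJ)V"))
# -> (['java.io.File', 'long', 'long'], 'void')
```

So the JVM is looking for a DefaultFileRegion(File, long, long) constructor, which only exists in the netty version Spark 2.1.0 was built against; an older netty on the classpath lacks it, hence the NoSuchMethodError.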

【Comments】:

【Answer 3】:

Use this dependency; do not skip the version tag, keep it:

<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <version>4.0.42.Final</version>
</dependency>

【Comments】:
