【Question Title】: Getting error: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
【Posted】: 2020-05-02 16:45:20
【Question】:

I am working on Kafka Spark Streaming. The IDE shows no errors and the program builds successfully, but I get this error at runtime:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
at KafkaSparkStream1$.main(KafkaSparkStream1.scala:13)
at KafkaSparkStream1.main(KafkaSparkStream1.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more

I am using Maven. I have also set the environment variables correctly, since each component works on its own. My Spark version is 3.0.0-preview2 and my Scala version is 2.12. I have already exported a spark-streaming-kafka jar file.

Here is my pom file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.org.cpg.casestudy</groupId>
<artifactId>Kafka_casestudy</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <spark.version>3.0.0</spark.version>
    <scala.version>2.12</scala.version>
</properties>

<build>
    <plugins>
        <!-- Maven Compiler Plugin-->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>

<dependencies>
    <!-- Apache Kafka Clients-->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.5.0</version>
    </dependency>
    <!-- Apache Kafka Streams-->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>2.5.0</version>
    </dependency>
    <!-- Apache Log4J2 binding for SLF4J -->
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-slf4j-impl</artifactId>
        <version>2.11.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.0.0-preview2</version>
        <scope>provided</scope>
    </dependency>

    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.12</artifactId>
        <version>3.0.0-preview2</version>
        <scope>provided</scope>
    </dependency>

    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
        <version>3.0.0-preview2</version>
    </dependency>

</dependencies>
</project>

Here is my code (a word count over the messages sent by the producer):

import org.apache.kafka.clients.consumer.ConsumerConfig
// Fixed import: the original pulled StringDeserializer from
// org.codehaus.jackson.map.deser.std, which is a Jackson class,
// not the Kafka deserializer the consumer config expects.
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark._
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaSparkStream {
  def main(args: Array[String]): Unit = {

    val brokers = "localhost:9092"
    val groupId = "GRP1"
    val topics = "KafkaTesting"

    // Renamed from `SparkConf`: a val with the same name as the class
    // shadows it and is easy to misread.
    val sparkConf = new SparkConf().setMaster("local[*]").setAppName("KafkaSparkStreaming")
    val ssc = new StreamingContext(sparkConf, Seconds(10))
    val sc = ssc.sparkContext

    sc.setLogLevel("OFF")

    val topicSet = topics.split(",").toSet

    val kafkaParams = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> brokers,
      ConsumerConfig.GROUP_ID_CONFIG -> groupId,
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer]
    )

    val messages = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topicSet, kafkaParams)
    )

    val lines = messages.map(_.value)
    val words = lines.flatMap(_.split(" "))
    val wordCount = words.map(x => (x, 1)).reduceByKey(_ + _)

    wordCount.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

【Comments】:

Tags: apache-spark apache-kafka spark-streaming


【Solution 1】:

    Try cleaning your local Maven repository, or run the following command to force-update your dependency JARs from the remote repositories:

    mvn clean install -U
    

    When you execute your Spark JAR directly, your Spark dependencies — specifically spark-core_2.12-3.0.0-preview2.jar — are not on the classpath, because they are declared with `provided` scope and are therefore excluded from the packaged JAR.

    You can add the missing jar explicitly via:

    spark-submit --jars <path>/spark-core_2.12-3.0.0-preview2.jar
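
    Alternatively — a sketch based on the pom shown in the question, not a confirmed fix from the answer — if you launch the application from the IDE or with plain `java`, you can drop the `provided` scope so Maven keeps Spark on the runtime classpath:

    ```xml
    <!-- Same spark-core dependency as in the question's pom, with the
         provided scope removed so it stays on the runtime classpath.
         Keep provided if you always launch via spark-submit, which
         supplies the Spark jars itself. -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.0.0-preview2</version>
        <!-- <scope>provided</scope> removed for local runs -->
    </dependency>
    ```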
    

    【Comments】:
