[Question Title]: spark-submit class not found error: java.lang.ClassNotFoundException
[Posted]: 2021-07-25 17:18:22
[Question Description]:

I have installed Java JDK 8 and the path is set correctly, but I don't know why I am getting this error. What am I doing wrong? Please help me fix it. My Spark version is 2.4.7 and I am using the IntelliJ IDE. This error appears when I try to run the code:

C:\spark\spark-2.4.7-bin-hadoop2.7\bin>spark-submit --class TopViewedCategories --master local C:\Users\Piyush\IdeaProjects\BDA\target\BDA-1.0-SNAPSHOT.jar
21/05/03 16:25:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/05/03 16:25:37 WARN SparkSubmit$$anon$2: Failed to load TopViewedCategories.
java.lang.ClassNotFoundException: TopViewedCategories
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:806)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/05/03 16:25:37 INFO ShutdownHookManager: Shutdown hook called
21/05/03 16:25:37 INFO ShutdownHookManager: Deleting directory 

Here is the code:

package org.example;

import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import static org.apache.spark.SparkContext.getOrCreate;

public class TopViewedCategories {
    public static void main(String[] args) throws Exception {
        long timeElapsed = System.currentTimeMillis();
        System.out.println("Started Processing");
        SparkConf conf = new SparkConf()
                .setMaster("local")
                .setAppName("YouTubeDM");
        JavaSparkContext sc = new JavaSparkContext(conf);
        //Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
        sc.setLogLevel("ERROR");
        JavaRDD<String> mRDD = sc.textFile("C:\\Users\\Piyush\\Desktop\\bda\\INvideos.csv"); //directory where the files are
        JavaPairRDD<Double, String> sortedRDD = mRDD
                // .filter(line -> line.split("\t").length > 6)
                .mapToPair(line -> {
                    String[] lineArr = line.split("\t");
                    String category = lineArr[5];
                    Double views = Double.parseDouble(lineArr[1]);
                    Tuple2<Double, Integer> viewsTuple = new Tuple2<>(views, 1);
                    return new Tuple2<>(category, viewsTuple);
                })
                .reduceByKey((x, y) -> new Tuple2<>(x._1 + y._1, x._2 + y._2))
                .mapToPair(x -> new Tuple2<>(x._1, (x._2._1 / x._2._2)))
                .mapToPair(Tuple2::swap)
                .sortByKey(false);

// .take(10);
        long count = sortedRDD.count();
        List<Tuple2<Double, String>> topTenTuples = sortedRDD.take(10);
        JavaPairRDD<Double, String> topTenRdd = sc.parallelizePairs(topTenTuples);
        String output_dir = "C:output/spark/TopViewedCategories";
        // remove output directory if already there
        FileSystem fs = FileSystem.get(sc.hadoopConfiguration());
        fs.delete(new Path(output_dir), true); // delete dir, true for recursive
        topTenRdd.saveAsTextFile(output_dir);
        timeElapsed = System.currentTimeMillis() - timeElapsed;
        System.out.println("Done. Time taken (in seconds): " + timeElapsed / 1000f);
        System.out.println("Processed Records: " + count);
        sc.stop();
        sc.close();
    }
}

Please help me solve this.

[Question Discussion]:

  • Are you sure you are using the correct JAR file? You can open the JAR as a zip archive to check which classes it contains.

Tags: java apache-spark hadoop


[Solution 1]:

You have to set the class name including its package:

spark-submit --class org.example.TopViewedCategories ...
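Combined with the original command from the question, the full invocation would look like this (a sketch only; it assumes the JAR path from the question is the actual build output, and it requires a local Spark installation to run):

```shell
spark-submit --class org.example.TopViewedCategories --master local C:\Users\Piyush\IdeaProjects\BDA\target\BDA-1.0-SNAPSHOT.jar
```

The class is declared as `package org.example;` in the source, so the JVM knows it only by its fully-qualified name `org.example.TopViewedCategories`; the bare name `TopViewedCategories` does not exist on the classpath, hence the `ClassNotFoundException`.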

[Discussion]:
