[Title]: Error initializing SparkContext: A master URL must be set in your configuration
[Posted]: 2017-06-21 07:01:10
[Question]:

I am using this code

My error is:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties

17/02/03 20:39:24 INFO SparkContext: Running Spark version 2.1.0

17/02/03 20:39:25 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

17/02/03 20:39:25 WARN SparkConf: Detected deprecated memory fraction 
settings: [spark.storage.memoryFraction]. As of Spark 1.6, execution and  
storage memory management are unified. All memory fractions used in the old 
model are now deprecated and no longer read. If you wish to use the old 
memory management, you may explicitly enable `spark.memory.useLegacyMode` 
(not recommended).

17/02/03 20:39:25 ERROR SparkContext: Error initializing SparkContext.

org.apache.spark.SparkException: A master URL must be set in your 
configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:379)
at PCA$.main(PCA.scala:26)
at PCA.main(PCA.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

17/02/03 20:39:25 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:379)
at PCA$.main(PCA.scala:26)
at PCA.main(PCA.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

Process finished with exit code 1

[Comments]:

Tags: scala apache-spark k-means


[Solution 1]:

If you are running Spark standalone, then:

val conf = new SparkConf().setMaster("spark://master") // missing

Or you can pass the parameter when submitting the job:

spark-submit --master spark://master

If you are running Spark locally, then:

val conf = new SparkConf().setMaster("local[2]") // missing

Or pass the parameter when submitting the job:

spark-submit --master local

If you are running Spark on YARN, then:

spark-submit --master yarn
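
Putting the two approaches together: a common pattern is to leave the master out of the code entirely and supply it with spark-submit, falling back to local mode when none was given (e.g. when running from the IDE). A minimal sketch, assuming the object name PCA from the stack trace; the fallback value is illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PCA {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("PCA")
    // If no master was supplied (e.g. an IDE run), fall back to
    // local mode; spark-submit --master overrides this setting.
    if (!conf.contains("spark.master")) {
      conf.setMaster("local[2]")
    }
    val sc = new SparkContext(conf)
    // ... your k-means / PCA code here ...
    sc.stop()
  }
}
```

This way the same jar runs unchanged on a standalone cluster, on YARN, or locally, depending only on the --master flag passed at submit time.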

[Discussion]:

[Solution 2]:

The error message is clear: you have to provide the address of the Spark master node, either through SparkContext or through spark-submit:

    val conf = 
      new SparkConf()
        .setAppName("ClusterScore")
        .setMaster("spark://172.1.1.1:7077") // <--- This is what's missing
        .set("spark.storage.memoryFraction", "1")
    
    val sc = new SparkContext(conf)
    

[Discussion]:

• Now I have another question about this code: how do I supply my input text? Instead of "/data/kddcupdata/kddcup.trasfrom.nou2r" I want to use a text file saved at "C://kddcup.data_10_percent_corrected.txt". Please help me, how can I do this?
• @fakherzad You can read a file on your local machine using file:///kddcup.data_10_percent_corrected.txt.
• Thanks for your guidance, but I still get an error: "Input path does not exist: file:/kddcup.data_10_percent_corrected.txt". I don't know how to fix it. Please help me.
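
A likely cause of the "Input path does not exist" error in the comments above is that the drive letter is missing from the file URI. On Windows, the local file mentioned in the comments would be referenced roughly as follows (the path is taken from the comments and is an assumption about the asker's machine):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: read a local text file into an RDD in local mode.
val conf = new SparkConf()
  .setAppName("ReadLocalFile")
  .setMaster("local[*]") // run locally, using all available cores

val sc = new SparkContext(conf)

// On Windows, keep the drive letter after file:///
// Adjust the path to wherever the file actually lives.
val data = sc.textFile("file:///C:/kddcup.data_10_percent_corrected.txt")
println(s"Line count: ${data.count()}")

sc.stop()
```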
[Solution 3]:
    val conf = new SparkConf()
      .setAppName("Your Application Name")
      .setMaster("local")
    val sc = new SparkContext(conf)
    

It will work...

[Discussion]:

[Solution 4]:

You are most likely using the Spark 2.x API from Java. Use a code snippet like the one below to avoid this error. This situation arises when you run Spark standalone on your machine using the Shade plugin, which bundles all the runtime libraries on your machine.

      SparkSession spark = SparkSession.builder()
                      .appName("Spark-Demo")//assign a name to the spark application
                      .master("local[*]") //utilize all the available cores on local
                      .getOrCreate();
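
Alternatively, rather than hardcoding .master(...) in the code, the master can be supplied when the shaded jar is submitted. A sketch of the command; the jar path and class name are placeholders:

```shell
# Submit the shaded/assembled jar, supplying the master at launch time.
# --class and the jar path below are illustrative, not from the question.
spark-submit \
  --class PCA \
  --master "local[*]" \
  target/my-spark-app-shaded.jar
```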
      

[Discussion]:
