【Question Title】: Spark - Error "A master URL must be set in your configuration" using IntelliJ IDEA
【Posted】: 2018-09-13 14:42:31
【Question】:

I get the following error when I try to run a Spark Streaming application from IntelliJ IDEA.

Environment:

- Spark Core 2.2.0
- IntelliJ IDEA 2017.3.5

Additional information: Spark runs in YARN mode.

The error:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" java.lang.ExceptionInInitializerError
    at kafka_stream.kafka_stream.main(kafka_stream.scala)
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
    at kafka_stream.InitSpark$class.$init$(InitSpark.scala:15)
    at kafka_stream.kafka_stream$.<init>(kafka_stream.scala:6)
    at kafka_stream.kafka_stream$.<clinit>(kafka_stream.scala)
    ... 1 more

Process finished with exit code 1

What I tried:

  val spark: SparkSession = SparkSession.builder()
    .appName("SparkStructStream")
    .master("spark://127.0.0.1:7077")
    //.master("local[*]")
    .getOrCreate()

but I still get the same master URL error.

Contents of the build.sbt file:

name := "KafkaSpark"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.apache.spark" % "spark-streaming_2.11" % "2.2.0",
  "org.apache.spark" % "spark-streaming-kafka_2.11" % "1.6.3"
)

// https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11
libraryDependencies += "org.apache.kafka" % "kafka_2.11" % "0.11.0.0"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams
libraryDependencies += "org.apache.kafka" % "kafka-streams" % "0.11.0.0"

// https://mvnrepository.com/artifact/org.apache.kafka/connect-api
libraryDependencies += "org.apache.kafka" % "connect-api" % "0.11.0.0"

libraryDependencies += "com.databricks" %% "spark-avro" % "4.0.0"

resolvers += Resolver.mavenLocal
resolvers += "central maven" at "https://repo1.maven.org/maven2/"

Any help would be appreciated.

【Comments】:

    Tags: scala maven apache-spark apache-kafka


    【Solution 1】:

    Download winutils.exe and place it at C:\hadoop\bin\winutils.exe. Then add the following line at the start of your def main method:

    System.setProperty("hadoop.home.dir", "C:\\hadoop")
    

    After that it works fine.
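    The order matters here: the property must be set before the first Spark/Hadoop class touches the local filesystem. A minimal sketch of the idea (the object and method names are illustrative, not from the answer):

```scala
object WinutilsSetup {
  // Hypothetical helper: point Hadoop at C:\hadoop so that
  // winutils.exe is found at C:\hadoop\bin\winutils.exe on Windows.
  // Call this first thing inside main, before creating a SparkSession.
  def configureHadoopHome(path: String = "C:\\hadoop"): Unit =
    System.setProperty("hadoop.home.dir", path)
}
```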

    【Discussion】:

      【Solution 2】:

      It looks like the parameter is not being passed somehow, e.g. Spark is initialized somewhere earlier. You can, however, pass it with the VM option -Dspark.master=local[*]; that value applies everywhere the master is not already defined, so it should solve your problem. In IntelliJ it is under the list of run configs -> Edit Configurations... -> VM Options.
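      A defensive variant of the same idea is to resolve the master in code: prefer the spark.master system property when it is set, fall back to local[*] otherwise, and pass the result to .master(). A sketch (MasterResolver is a hypothetical helper, not part of Spark's API):

```scala
object MasterResolver {
  // Prefer an explicitly supplied spark.master (e.g. set via the
  // -Dspark.master=local[*] VM option or spark-submit --master),
  // otherwise default to running locally with all cores.
  def resolve(props: Map[String, String]): String =
    props.getOrElse("spark.master", "local[*]")
}

// Usage in the builder:
//   SparkSession.builder()
//     .appName("SparkStructStream")
//     .master(MasterResolver.resolve(sys.props.toMap))
//     .getOrCreate()
```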

      【Discussion】:
