【Question Title】: Hadoop, Apache Spark
【Posted】: 2015-12-18 14:09:21
【Question Description】:

I have installed Spark on Windows and am trying to load a text file from the D: drive. The RDD is created without error, but any action I run on it fails. I have tried every slash combination with no success.

scala> val file = sc.textFile("D:\\file\\file1.txt")
15/12/16 07:53:51 INFO MemoryStore: ensureFreeSpace(175321) called with curMem=401474, maxMem=280248975
15/12/16 07:53:51 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 171.2 KB, free 266.7 MB)
15/12/16 07:53:51 INFO MemoryStore: ensureFreeSpace(25432) called with curMem=576795, maxMem=280248975
15/12/16 07:53:51 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 24.8 KB, free 266.7 MB)
15/12/16 07:53:51 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:51963 (size: 24.8 KB, free: 267.2 MB)
15/12/16 07:53:51 INFO BlockManagerMaster: Updated info of block broadcast_2_piece0
15/12/16 07:53:51 INFO SparkContext: Created broadcast 2 from textFile at <console>:21
file: org.apache.spark.rdd.RDD[String] = D:\file\file1.txt MapPartitionsRDD[5] at textFile at <console>:21

The RDD is created normally, but as soon as I run any action on it I get the following error:

scala> file.count()
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/D:/file/file1.txt
        at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1512)
        at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:24)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
        at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
        at $iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
        at $iwC$$iwC$$iwC.<init>(<console>:37)
        at $iwC$$iwC.<init>(<console>:39)
        at $iwC.<init>(<console>:41)
        at <init>(<console>:43)
        at .<init>(<console>:47)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
        at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
        at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


scala>

【Question Discussion】:

    Tags: hadoop apache-spark


    【Solution 1】:

    You have to use sc.textFile("file:///...")

    By default it will probably look in HDFS. By using the file scheme, you make it read from the local filesystem instead.

    On Windows, this command worked for me:

    sc.textFile("file:\\C:\\Users\\data.txt").count()
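    Before fighting with path syntax, it can help to confirm that the JVM can see the file at all, using plain Java I/O rather than Spark. A minimal sketch (the helper name `readable` is illustrative, not a Spark API; the commented path is the asker's):

```scala
import java.io.File

// Returns true only if the path exists and the JVM is allowed to read it.
def readable(path: String): Boolean = {
  val f = new File(path)
  f.exists && f.canRead
}

// Example: check the file before calling sc.textFile on it.
// readable("D:\\file\\file1.txt")
```

    If this returns false, the problem is the path or its permissions, not Spark.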
    

    Try sc.textFile("file:\\D:\\file\\file1.txt"). Also check whether you have permission to read D:/file/file1.txt. You can open File Explorer and look at what permissions you have on the file directory and on file1.txt itself.
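    To sidestep backslash problems entirely, one can build the file: URI from the Windows path programmatically, since Hadoop accepts forward slashes in URIs. A small sketch, assuming a drive-letter path like D:\file\file1.txt (the helper `toFileUri` is illustrative, not part of Spark):

```scala
// Convert a Windows path into a file: URI, e.g.
// D:\file\file1.txt -> file:///D:/file/file1.txt
def toFileUri(winPath: String): String =
  "file:///" + winPath.replace('\\', '/')

// Usage: sc.textFile(toFileUri("D:\\file\\file1.txt")).count()
```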

    【Discussion】:

    • It seems to loop forever: scala> scala> val file = sc.textFile("file:\\\D:\file\file1.txt") // Detected repl transcript paste: ctrl-D to finish.
    • scala> val file = sc.textFile("file:\\\D:\file\file1.txt") :1: error: invalid escape character val file = sc.textFile("file:\\\D:\file\file1.txt")
    • You should do "file:\\\D:\\file\\file1.txt"
    • scala> val file1 = sc.textFile("file\\\D:\\file\\file1.txt") :1: error: invalid escape character val file1 = sc.textFile("file\\\D:\\file\\file1.txt")
    • scala> val file1 = sc.textFile("file:\\\D:\\file\\file1.txt") :1: error: invalid escape character val file1 = sc.textFile("file:\\\D:\\file\\file1.txt")
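    The "invalid escape character" errors above come from Scala string-literal rules, not from Spark: in a double-quoted literal, \D is not a valid escape (and \f silently becomes a form feed). Escaping every backslash, or using a triple-quoted literal where backslashes are taken literally, avoids both problems. A sketch:

```scala
// val bad = "D:\file\file1.txt"      // \f is a form feed, \D does not compile
val escaped = "D:\\file\\file1.txt"   // escape each backslash
val raw     = """D:\file\file1.txt""" // triple-quoted: no escape processing
// Both literals spell the same path, so either works with sc.textFile.
```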