【Question Title】: Issues with the Google Cloud Storage connector on Spark
【Posted】: 2014-10-02 10:07:53
【Question Description】:

I am trying to set up the Google Cloud Storage connector on Spark on Mac OS, to test my Spark app locally. I have read the documentation (https://cloud.google.com/hadoop/google-cloud-storage-connector). I added "gcs-connector-latest-hadoop2.jar" to the spark/lib folder. I also added the core-site.xml file to the spark/conf directory.
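For context, the connector is registered for the gs:// scheme through Hadoop configuration; a minimal core-site.xml sketch might look like the following (the project id is a placeholder, and authentication settings are omitted):

    <?xml version="1.0"?>
    <configuration>
      <!-- Map the gs:// scheme to the connector's FileSystem implementation -->
      <property>
        <name>fs.gs.impl</name>
        <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
      </property>
      <!-- GCP project that owns the buckets; placeholder value -->
      <property>
        <name>fs.gs.project.id</name>
        <value>my-project-id</value>
      </property>
    </configuration>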

When I run my pyspark shell, I get an error:

>>> sc.textFile("gs://mybucket/test.csv").count()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 847, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 838, in sum
    return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 759, in reduce
    vals = self.mapPartitions(func).collect()
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 723, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o26.collect.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1895)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2379)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2392)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:56)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:774)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:305)
    at org.apache.spark.api.java.JavaRDD.collect(JavaRDD.scala:32)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1893)
    ... 40 more

I am not sure where to go from here.

【Question Comments】:

    Tags: apache-spark google-hadoop


    【Solution 1】:

    It may vary between versions of Spark, but if you look inside bdutil-0.35.2/extensions/spark/install_spark.sh you can see how the "Spark + Hadoop on GCE" setup we provide through bdutil works; it includes the items you mentioned, adding the connector to the spark/lib folder and the core-site.xml file to the spark/conf directory, but it additionally adds this line to spark/conf/spark-env.sh:

    export SPARK_CLASSPATH=\$SPARK_CLASSPATH:${LOCAL_GCS_JAR}
    

    Here, ${LOCAL_GCS_JAR} is the absolute path of the jarfile you added to spark/lib. Try adding that line to your spark/conf/spark-env.sh, and the ClassNotFoundException should go away.
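    Concretely, the line in a hand-edited spark-env.sh might look like this (the jar path below is illustrative; point it at the jar you actually copied into spark/lib):

    # spark/conf/spark-env.sh
    # Append the GCS connector jar to Spark's classpath.
    # The absolute path is a placeholder, not from the original post.
    export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/spark/lib/gcs-connector-latest-hadoop2.jar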

    【Discussion】:

    • I get: "This is deprecated in Spark 1.0+. Please instead use: ./spark-submit with --driver-class-path to augment the driver classpath, and spark.executor.extraClassPath to augment the executor classpath." But when I try to access my storage I get another error; I will create a new SO question. (See the spark-submit sketch after these comments.)
    • I ran into a metadata-server error, which I solved using your answer to this question: stackoverflow.com/questions/25291397/…
    • Adding $HADOOP_CLASSPATH to $SPARK_CLASSPATH in spark-env.sh will fix this. (At least it worked for me on Spark 1.2.1.)
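    Since SPARK_CLASSPATH is deprecated in Spark 1.0+, the same jar can be supplied the way the warning suggests; a minimal sketch, again with a placeholder jar path and a hypothetical application script name:

    # Hand the connector jar to both the driver and the executors.
    # The jar path and my_app.py are placeholders, not from the original post.
    ./bin/spark-submit \
      --driver-class-path /usr/local/spark/lib/gcs-connector-latest-hadoop2.jar \
      --conf spark.executor.extraClassPath=/usr/local/spark/lib/gcs-connector-latest-hadoop2.jar \
      my_app.py

    The same two options also work with ./bin/pyspark for interactive sessions.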