【Question Title】: Reading a GCS file using a standalone on-premise Spark Java program
【Posted】: 2017-10-10 06:16:35
【Question】:

I am trying to read a file stored in a GCS bucket using a local standalone Spark job written in Java. I have configured all the necessary Spark settings on the SparkContext. I get the following error:

java.io.IOException: Error getting access token from metadata server at: http://metadata/computeMetadata/v1/instance/service-accounts/default/token
    at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:208)
    at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:70)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1825)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:1012)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:975)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:265)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:236)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:322)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:918)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:916)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.foreach(RDD.scala:916)
    at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:351)
    at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:45)
    at com.vr.HadoopSample.main(HadoopSample.java:78)
Caused by: java.net.UnknownHostException: metadata
    at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
    at java.net.PlainSocketImpl.connect(Unknown Source)
    at java.net.SocksSocketImpl.connect(Unknown Source)
    at java.net.Socket.connect(Unknown Source)
    at sun.net.NetworkClient.doConnect(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.<init>(Unknown Source)
    at sun.net.www.http.HttpClient.New(Unknown Source)
    at sun.net.www.http.HttpClient.New(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
    at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
    at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
    at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:158)
    at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
    at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:206)
    ... 33 more
17/10/10 11:34:24 INFO SparkContext: Invoking stop() from shutdown hook
17/10/10 11:34:24 INFO SparkUI: Stopped Spark web UI at http://169.254.189.252:4040
17/10/10 11:34:24 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/10/10 11:34:24 INFO MemoryStore: MemoryStore cleared

Any help would be appreciated.

I have configured SparkConf with the following properties:

<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.gs.project.id</name>
  <value>your-ascii-google-project-id</value>
</property>
<property>
  <name>fs.gs.system.bucket</name>
  <value>some-bucket-your-project-owns</value>
</property>
<property>
  <name>fs.gs.working.dir</name>
  <value>/</value>
</property>
<property>
  <name>fs.gs.auth.service.account.enable</name>
  <value>true</value>
</property>
<property>
  <name>fs.gs.auth.service.account.email</name>
  <value>your-service-account-email@developer.gserviceaccount.com</value>
</property>
<property>
  <name>fs.gs.auth.service.account.keyfile</name>
  <value>/path/to/hadoop/conf/gcskey.p12</value>
</property>

I followed the instructions for using the Google Cloud Storage connector. I have not installed Spark on my machine; instead I use all of its libraries from within Eclipse. I cannot make any further progress, so any help would be appreciated. Thanks.
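For context, here is a minimal sketch of the kind of standalone program described above. The class name, bucket, and file path are placeholders, not the actual values from my project, and the GCS connector jar is assumed to be on the classpath:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class HadoopSample {
        public static void main(String[] args) {
            // Local standalone mode; no cluster installation required.
            SparkConf conf = new SparkConf()
                    .setAppName("gcs-read-sample")
                    .setMaster("local[*]");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Reading from a gs:// path is what triggers FileSystem
            // initialization and, with it, the credential lookup that
            // fails in the stack trace above.
            sc.textFile("gs://my-bucket/input.txt")
              .foreach(line -> System.out.println(line));

            sc.stop();
        }
    }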

【Comments】:

Tags: java apache-spark hadoop google-cloud-storage google-cloud-dataproc


【Solution 1】:

    As described in the Cloud Storage Connector installation documentation, these properties should be specified in the conf/core-site.xml file.

    If instead you are configuring them directly through Spark (programmatically, or by adding them to spark-defaults.conf), you need to add the spark.hadoop. prefix to all of them.
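    For example, a minimal sketch of the programmatic route; the app name, master URL, and input path are hypothetical, and the property values are the same placeholders used in the question:

        import org.apache.spark.SparkConf;
        import org.apache.spark.api.java.JavaSparkContext;

        public class GcsPrefixedConfSample {
            public static void main(String[] args) {
                // "spark.hadoop."-prefixed properties are copied by Spark into
                // the Hadoop Configuration that the GCS connector reads when
                // it initializes the gs:// FileSystem.
                SparkConf conf = new SparkConf()
                        .setAppName("gcs-prefixed-conf-sample")
                        .setMaster("local[*]")
                        .set("spark.hadoop.fs.gs.impl",
                             "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
                        .set("spark.hadoop.fs.gs.project.id", "your-ascii-google-project-id")
                        .set("spark.hadoop.fs.gs.auth.service.account.enable", "true")
                        .set("spark.hadoop.fs.gs.auth.service.account.email",
                             "your-service-account-email@developer.gserviceaccount.com")
                        .set("spark.hadoop.fs.gs.auth.service.account.keyfile",
                             "/path/to/hadoop/conf/gcskey.p12");
                JavaSparkContext sc = new JavaSparkContext(conf);

                sc.textFile("gs://some-bucket-your-project-owns/input.txt").count();
                sc.stop();
            }
        }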

    【Discussion】:

    • I ran into a similar issue even after setting the following: spark.sparkContext.hadoopConfiguration.set("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "path_to_file.json"); spark.sparkContext.hadoopConfiguration.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem"); spark.sparkContext.hadoopConfiguration.set("spark.hadoop.google.cloud.auth.service.account.enable", "true"); spark.sparkContext.hadoopConfiguration.set("fs.gs.project.id", "my-project-id")
    • If you are setting the properties via spark.sparkContext.hadoopConfiguration, then you do not need the spark.hadoop. prefix, because Spark already knows these are Hadoop configuration properties. A corrected sketch in Java follows this list.
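    A rough Java version of what that reply describes: set the unprefixed Hadoop keys directly on the context's Hadoop Configuration. The project ID, keyfile path, and bucket path are placeholders analogous to the comment's:

        import org.apache.spark.SparkConf;
        import org.apache.spark.api.java.JavaSparkContext;

        public class GcsHadoopConfSample {
            public static void main(String[] args) {
                JavaSparkContext sc = new JavaSparkContext(
                        new SparkConf()
                                .setAppName("gcs-hadoop-conf-sample")
                                .setMaster("local[*]"));

                // When writing to the Hadoop Configuration directly, use the
                // plain keys -- no "spark.hadoop." prefix.
                sc.hadoopConfiguration().set("fs.gs.impl",
                        "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem");
                sc.hadoopConfiguration().set("fs.gs.project.id", "my-project-id");
                sc.hadoopConfiguration().set("google.cloud.auth.service.account.enable", "true");
                sc.hadoopConfiguration().set("google.cloud.auth.service.account.json.keyfile",
                        "path_to_file.json");

                sc.textFile("gs://my-bucket/input.txt").count();
                sc.stop();
            }
        }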