【Question Title】: Error while using DataFrame show method in pyspark
【发布时间】:2019-11-20 20:06:30
【Question Description】:

I am trying to read data from BigQuery using pandas and pyspark. I am able to fetch the data, but somehow I get the error below when converting it to a Spark DataFrame.

py4j.protocol.Py4JJavaError: An error occurred while calling o28.showString.
: java.lang.IllegalStateException: Could not find TLS ALPN provider; no working netty-tcnative, Conscrypt, or Jetty NPN/ALPN available
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:258)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:401)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:444)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:223)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:169)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:156)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157) 

Below are the environment details:

Python version : 3.7
Spark version : 2.4.3
Java version : 1.8

The code is as follows:

import google.auth
import pyspark
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession, SQLContext
from google.cloud import bigquery


# Currently this only supports queries which have at least 10 MB of results
QUERY = """ SELECT * FROM test limit 1 """

#spark = SparkSession.builder.appName('Query Results').getOrCreate()
sc = pyspark.SparkContext()
bq = bigquery.Client()

print('Querying BigQuery')
project_id = ''
query_job = bq.query(QUERY, project=project_id)

# Wait for query execution
query_job.result()

df = SQLContext(sc).read.format('bigquery') \
    .option('dataset', query_job.destination.dataset_id) \
    .option('table', query_job.destination.table_id) \
    .option('type', 'direct') \
    .load()

df.show()

I am looking for help in resolving this issue.

【Question Discussion】:

  • Is df a DataFrame?
  • @NikolasRieble Yes, by default it is a DataFrame.
  • This is not a pyspark problem. You get this error message because of a version mismatch between netty and tcnative. Take a look at link link
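A hedged sketch of what that last comment points at: the `Could not find TLS ALPN provider` error usually means the JVM loaded a netty build without a matching netty-tcnative (or Conscrypt) native library on the classpath. One common way to avoid hand-assembling jars is to let Spark resolve the shaded, self-contained connector artifact itself; the exact version coordinate below is an assumption, so check the spark-bigquery-connector releases for the build matching your Spark/Scala installation.

```shell
# Launch pyspark with the shaded BigQuery connector so the gRPC/netty classes
# all come from one consistent artifact. The version shown is an assumption --
# verify it against your Spark 2.4 / Scala 2.11 installation before using it.
pyspark --packages com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.17.1
```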

Tags: python-3.x apache-spark pyspark google-bigquery


【Solution 1】:

I managed to find a better solution by referring to this link; below is my working code:

Before running the code below, install the pandas_gbq package in your Python environment.

import pandas_gbq
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

project_id = "<your-project-id>"
query = """ SELECT * from testSchema.testTable"""
athletes = pandas_gbq.read_gbq(query=query, project_id=project_id, dialect='standard')


# Get a reference to the Spark Session
sc = SparkContext()
spark = SparkSession(sc)

# convert from Pandas to Spark
sparkDF = spark.createDataFrame(athletes)

# perform an operation on the DataFrame
print(sparkDF.count())

sparkDF.show()
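One pitfall with this pandas-to-Spark hand-off: `createDataFrame` infers column types from the pandas frame, and NULLs in nullable BigQuery columns come back as NaN/NaT, which can trip that inference. A minimal pre-cleaning sketch (the sample frame here is hypothetical, standing in for the result of `read_gbq`):

```python
import pandas as pd

# Hypothetical stand-in for the frame returned by pandas_gbq.read_gbq;
# nullable BigQuery columns surface as NaN/None in pandas.
athletes = pd.DataFrame({
    "name": ["alice", None],
    "score": [1.0, float("nan")],
})

# Cast to object and replace every NaN/None with a plain Python None,
# which spark.createDataFrame maps cleanly to a SQL NULL.
cleaned = athletes.astype(object)
for col in cleaned.columns:
    cleaned[col] = pd.Series(
        [None if pd.isna(v) else v for v in cleaned[col]],
        index=cleaned.index, dtype=object,
    )

print(cleaned)
```

The cleaned frame can then be passed to `spark.createDataFrame(cleaned)` as in the code above; an explicit schema argument is another option if the inferred types are still off.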

Hope this helps someone! Keep on pysparking :)

【Discussion】:
