【Posted】: 2020-01-25 03:52:06
【Problem Description】:
I am trying to build a real-time big data pipeline based on the Lambda architecture. So far I have built the data ingestion module with Kafka and the batch layer with S3 and Redshift, but I cannot seem to connect to my Kafka server through PySpark. I am new to Spark, and none of the solutions I have found online seem to cover a Python environment.
Here is my code:
from pyspark.sql import SparkSession

# Create a local SparkSession
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("Learning_Spark") \
    .getOrCreate()

# Subscribe to the Kafka topic as a streaming source
data_stream = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "tweets-lambda1") \
    .option("startingOffsets", "latest") \
    .load()
The error I get is the following:
Py4JJavaError Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
C:\ProgramData\Anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
Py4JJavaError: An error occurred while calling o36.load.
: org.apache.spark.sql.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".;
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:652)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
During handling of the above exception, another exception occurred:
AnalysisException Traceback (most recent call last)
<ipython-input-2-b77746ac3efc> in <module>
4 .option("kafka.bootstrap.servers", "localhost:9092") \
5 .option("subscribe", "tweets-lambda1") \
----> 6 .option("startingOffsets","latest") \
7 .load()
C:\ProgramData\Anaconda3\lib\site-packages\pyspark\sql\streaming.py in load(self, path, format, schema, **options)
398 return self._df(self._jreader.load(path))
399 else:
--> 400 return self._df(self._jreader.load())
401
402 @since(2.0)
C:\ProgramData\Anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
C:\ProgramData\Anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
67 e.java_exception.getStackTrace()))
68 if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
70 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
71 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
AnalysisException: 'Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".;
What could be going wrong? I have not been able to resolve it by reading the "Structured Streaming + Kafka Integration Guide", since the information there is mostly geared towards a Java development environment. I am sure the configuration details are correct, because I use the same ones for the batch layer. Any help would be greatly appreciated!
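For reference, my reading of the guide is that the Kafka source lives in a separate package (spark-sql-kafka) that must be on the classpath when the session starts. Below is a minimal sketch of how I think that dependency could be supplied from the Python side through the spark.jars.packages config; the coordinate is my assumption for a Spark 2.4.4 / Scala 2.11 build and would need to match the Spark that is actually running:

from pyspark.sql import SparkSession

# Ask Spark to resolve the Kafka connector from Maven when the session starts.
# Assumed coordinate: Spark 2.4.4 built against Scala 2.11; adjust the versions
# to whatever the running Spark actually reports.
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("Learning_Spark") \
    .config("spark.jars.packages", "org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4") \
    .getOrCreate()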
Edit: Following the guide above, I tried running the command below. It ran, but it still cannot connect:
spark-submit --class "Learning_Spark" --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 ...
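Since spark-submit targets a standalone run and my code above runs in a notebook, my understanding (possibly wrong) is that the notebook equivalent is to set PYSPARK_SUBMIT_ARGS before the SparkSession is created; a minimal sketch of what I mean, with the package version again an assumption on my part:

import os

# Sketch: pass --packages to the PySpark instance launched from this notebook.
# Must run before SparkSession.builder...getOrCreate(), and the value has to
# end with "pyspark-shell". The connector version is assumed, not verified,
# and should match the installed Spark/Scala build.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4 pyspark-shell"
)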
【Question Discussion】:
- Please also provide the spark, kafka, and spark-sql-kafka jar versions.
- The most likely problem is that your versions do not match.
- Here are the names of the packages I downloaded: spark-2.4.4-bin-hadoop2.7, kafka_2.12-2.3.0, spark-sql-kafka-0-10_2.11:2.3.0
- You are using 0-10_2.11:2.3.0, which is the 2.3.0 version. Check which one is actually being used by looking at the loaded packages (see the sketch after these comments).
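A minimal sketch of how the running versions could be checked from PySpark before choosing the connector coordinate (the Scala check goes through the internal _jvm gateway, so treat it as a convenience rather than a supported API):

# Spark version reported by the active session, e.g. "2.4.4"
print(spark.version)

# Scala version of the underlying JVM build, e.g. "version 2.11.12" (internal accessor)
print(spark.sparkContext._jvm.scala.util.Properties.versionString())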
Tags: python apache-spark pyspark apache-kafka