【Posted】: 2016-11-04 20:57:15
【Problem description】:
I want to run pySpark from a Jupyter notebook. I downloaded and installed Anaconda, which includes Jupyter. In a notebook cell I wrote the following lines:
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster("local").setAppName("My App")
sc = SparkContext(conf = conf)
I receive the following error:
ImportError Traceback (most recent call last)
<ipython-input-3-98c83f0bd5ff> in <module>()
----> 1 from pyspark import SparkConf, SparkContext
2 conf = SparkConf().setMaster("local").setAppName("My App")
3 sc = SparkContext(conf = conf)
C:\software\spark\spark-1.6.2-bin-hadoop2.6\python\pyspark\__init__.py in <module>()
39
40 from pyspark.conf import SparkConf
---> 41 from pyspark.context import SparkContext
42 from pyspark.rdd import RDD
43 from pyspark.files import SparkFiles
C:\software\spark\spark-1.6.2-bin-hadoop2.6\python\pyspark\context.py in <module>()
26 from tempfile import NamedTemporaryFile
27
---> 28 from pyspark import accumulators
29 from pyspark.accumulators import Accumulator
30 from pyspark.broadcast import Broadcast
ImportError: cannot import name accumulators
Following the answer to the Stack Overflow question "importing pyspark in python shell", I tried setting the PYTHONPATH environment variable to point at the spark/python directory, but that did not help.
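One hedged guess, in case it helps a later reader: on Spark 1.6 this exact `ImportError: cannot import name accumulators` often means that `spark/python` is on the path but the bundled py4j zip under `spark/python/lib` is not, since `pyspark.context` pulls in py4j indirectly. A minimal sketch of fixing both entries from inside the notebook, assuming the install path shown in the traceback (`pyspark_paths` is a hypothetical helper, not part of Spark):

```python
import glob
import os
import sys

def pyspark_paths(spark_home):
    """Return the sys.path entries needed for `import pyspark`:
    Spark's python/ directory plus the bundled py4j zip."""
    python_dir = os.path.join(spark_home, "python")
    # The py4j version differs between Spark releases, so glob for it
    # instead of hard-coding a filename.
    py4j_zips = glob.glob(os.path.join(python_dir, "lib", "py4j-*.zip"))
    return [python_dir] + py4j_zips

# Install location taken from the traceback in the question (an assumption
# that it is correct for this machine).
spark_home = r"C:\software\spark\spark-1.6.2-bin-hadoop2.6"
os.environ["SPARK_HOME"] = spark_home
sys.path[:0] = pyspark_paths(spark_home)
```

After this, `from pyspark import SparkConf, SparkContext` should resolve both pyspark and py4j without touching system-wide environment variables.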
【Discussion】: