[Question Title]: Can't use PySpark DF cached in Google Colab
[Posted]: 2023-02-25 05:44:40
[Question Description]:

I've noticed that after caching a PySpark DataFrame in the Google Colab environment, calling any method on it (show() or anything else) gives me this error.

For example:

df.show(5)

---------------------------------------------------------------------------
ConnectionRefusedError                    Traceback (most recent call last)
/tmp/ipykernel_26/1842469281.py in <module>
----> 1 df.show(5)

/opt/conda/lib/python3.7/site-packages/pyspark/sql/dataframe.py in show(self, n, truncate, vertical)
    604 
    605         if isinstance(truncate, bool) and truncate:
--> 606             print(self._jdf.showString(n, 20, vertical))
    607         else:
    608             try:

/opt/conda/lib/python3.7/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1318             proto.END_COMMAND_PART
   1319 
-> 1320         answer = self.gateway_client.send_command(command)
   1321         return_value = get_return_value(
   1322             answer, self.gateway_client, self.target_id, self.name)

/opt/conda/lib/python3.7/site-packages/py4j/java_gateway.py in send_command(self, command, retry, binary)
   1034          if `binary` is `True`.
   1035         """
-> 1036         connection = self._get_connection()
   1037         try:
   1038             response = connection.send_command(command)

/opt/conda/lib/python3.7/site-packages/py4j/clientserver.py in _get_connection(self)
    282 
    283         if connection is None or connection.socket is None:
--> 284             connection = self._create_new_connection()
    285         return connection
    286 

/opt/conda/lib/python3.7/site-packages/py4j/clientserver.py in _create_new_connection(self)
    289             self.java_parameters, self.python_parameters,
    290             self.gateway_property, self)
--> 291         connection.connect_to_java_server()
    292         self.set_thread_connection(connection)
    293         return connection

/opt/conda/lib/python3.7/site-packages/py4j/clientserver.py in connect_to_java_server(self)
    436                 self.socket = self.ssl_context.wrap_socket(
    437                     self.socket, server_hostname=self.java_address)
--> 438             self.socket.connect((self.java_address, self.java_port))
    439             self.stream = self.socket.makefile("rb")
    440             self.is_connected = True

ConnectionRefusedError: [Errno 111] Connection refused

I'm new to Spark/PySpark and don't understand why this is happening. Is it because I'm not using a proper cluster?

[Question Discussion]:

  • Can you add the code that reproduces this error?
  • I'm just reading some CSV files into one DF, like this: spark = SparkSession.builder.master("local[*]").appName("trips_data").getOrCreate() followed by df = spark.read.parquet(f"path/to/file.parquet").cache() — then if I try show(5) it raises the error (a formatted version of this snippet is shown below).

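For reference, here is the commenter's snippet laid out as a runnable sketch; the parquet path is a placeholder for whatever file is actually being read in the notebook:

from pyspark.sql import SparkSession

# Local-mode session, as in the comment above.
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("trips_data")
    .getOrCreate()
)

df = spark.read.parquet("path/to/file.parquet").cache()  # placeholder path
df.show(5)  # surfaces the ConnectionRefusedError once the backing JVM has died
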
Tags: apache-spark pyspark google-colaboratory kaggle


[Solution 1]:

It seems there isn't enough memory to cache the DataFrame. The cache (and a long RDD lineage) overflows the JVM's heap, the driver process dies, and Py4J can no longer reach it — which is why the call surfaces as a ConnectionRefusedError.

I'm not sure whether you can increase the memory available in Google Colab, so either use smaller files in Colab, or test locally if your machine has enough memory.
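
If you still want to try it in Colab, here is a minimal sketch of the two usual mitigations, assuming a local-mode session like the one in the question (the "4g" value and the file path are just examples, not a verified fix for this notebook):

from pyspark.sql import SparkSession
from pyspark import StorageLevel

# In local[*] mode the driver JVM also hosts the executors, so driver memory
# is the relevant knob. It must be set before the first session/JVM is created.
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("trips_data")
    .config("spark.driver.memory", "4g")  # example value; size it to the VM's RAM
    .getOrCreate()
)

df = spark.read.parquet("path/to/file.parquet")  # placeholder path

# MEMORY_AND_DISK spills partitions that don't fit in memory to local disk
# instead of letting the cache exhaust the JVM heap, as plain cache() can.
df.persist(StorageLevel.MEMORY_AND_DISK)
df.show(5)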