[Posted]: 2020-04-22 20:25:57
[Problem Description]:
A simple piece of code takes about 130 seconds to write to s3-minio, while writing the same data to local disk takes only 1 second. What is going wrong?
I followed this article, but it did not help: https://docs.min.io/docs/disaggregated-spark-and-hadoop-hive-with-minio.html
Running with 3 executors is faster -- 52 seconds -- but still not fast enough.
master('local[32]') gets it down to 21 seconds.
master('local[1]') --> 130 seconds.
Environment:
A single-node Kubernetes cluster on a local machine (16 cores / 32 GB), running one s3-minio pod (local disk as storage), a Spark driver pod, and a few Spark executor pods.
iotop shows the traffic between minio and spark is only about 100 Kb ~ 1 Mb; CPU usage is low as well.
The number of goroutines in minio is about 150~450 (max).
Looking at the log below, I see a large number of API calls checking S3 object status. Is that the cause?
2020-01-05 03:00:42,674 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - object_delete_requests += 1 -> 24456
2020-01-05 03:00:42,676 DEBUG org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol - Committing files staged for absolute locations Map()
2020-01-05 03:00:42,676 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - op_get_file_status += 1 -> 61698
2020-01-05 03:00:42,676 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - Getting path status for s3a://dataplatform/tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c (tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c)
2020-01-05 03:00:42,676 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - object_metadata_requests += 1 -> 141711
2020-01-05 03:00:42,677 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - object_metadata_requests += 1 -> 141712
2020-01-05 03:00:42,677 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - object_list_requests += 1 -> 55793
2020-01-05 03:00:42,678 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - Not Found: s3a://dataplatform/tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c
2020-01-05 03:00:42,678 DEBUG org.apache.hadoop.fs.s3a.S3AFileSystem - Couldn't delete s3a://dataplatform/tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c - does not exist
2020-01-05 03:00:42,678 INFO org.apache.spark.sql.execution.datasources.FileFormatWriter - Write Job 1a68dddd-fd88-49cd-957d-36e050d31de3 committed.
2020-01-05 03:00:42,679 INFO org.apache.spark.sql.execution.datasources.FileFormatWriter - Finished processing stats for write job 1a68dddd-fd88-49cd-957d-36e050d31de3.
2020-01-05 03:08:59,183 DEBUG org.apache.spark.broadcast.TorrentBroadcast - Unpersisting TorrentBroadcast 1
2020-01-05 03:08:59,184 DEBUG org.apache.spark.storage.BlockManagerSlaveEndpoint - removing broadcast 1
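Each object_metadata_requests / object_list_requests line above is a separate HTTP round trip, and the counters (141k metadata requests, 55k list requests, 24k deletes) suggest the time is going into per-object bookkeeping rather than into moving data. One thing worth trying is S3A fast upload plus, on Hadoop 3.1+, one of the S3A committers, which commit work without the rename-and-probe pattern that generates these calls. A minimal sketch of the session settings, assuming Hadoop 3.1+ and the spark-hadoop-cloud module on the classpath (minio endpoint and credential settings omitted):

from pyspark.sql import SparkSession

# Tuning sketch -- the committer settings require the spark-hadoop-cloud
# module; whether they help against minio is exactly what is in question here.
spark = SparkSession.builder \
    .master("local[1]") \
    .config("spark.hadoop.fs.s3a.fast.upload", "true") \
    .config("spark.hadoop.fs.s3a.connection.maximum", "100") \
    .config("spark.hadoop.fs.s3a.committer.name", "directory") \
    .config("spark.sql.sources.commitProtocolClass",
            "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol") \
    .config("spark.sql.parquet.output.committer.class",
            "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter") \
    .getOrCreate()

Here is the code that reproduces the problem: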
from pyspark.sql import Row
from pyspark.sql import SparkSession
import random, time

spark = SparkSession.builder \
    .enableHiveSupport() \
    .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", 2) \
    .master("local[1]") \
    .getOrCreate()

fixed_date = ['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04']
refs = ['0', '1', '2']
data = bytearray(random.getrandbits(8) for _ in range(100))

start = int(time.time())
print("start=%s" % start)

# 3 refs x 4 dates x 1 camera x 1000 rows = 12,000 rows of ~100-byte payloads
rows = []
for ref_id in refs:
    for d in fixed_date:
        for camera_id in range(1):
            for c in range(1000):
                rows.append(Row(ref_id=ref_id,
                                camera_id="c_" + str(camera_id),
                                date=d,
                                data=data))

df = spark.sparkContext.parallelize(rows).toDF()
print("partition number=%s, row size=%s" % (df.rdd.getNumPartitions(), len(rows)))

# Writing the 12 (ref_id, date, camera_id) partitions takes ~130 s on s3a, ~1 s locally
df.write.mode("overwrite") \
    .partitionBy('ref_id', 'date', 'camera_id') \
    .parquet('s3a://mybucket/tmp/test_data')
print("elapsed=%ss" % (int(time.time()) - start))
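For comparison, the ~1 second local-disk baseline is the same write pointed at a local path (the path below is illustrative; df is the DataFrame from the script above):

# Same write, local filesystem instead of s3a:// -- completes in ~1 s
df.write.mode("overwrite") \
    .partitionBy('ref_id', 'date', 'camera_id') \
    .parquet('file:///tmp/test_data')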
Update:
I think the Hadoop S3 layer is the slow part (whether I use fast upload or the plain S3 TransferManager), especially when writing many files to S3: each file costs roughly 80-100 API calls. Perhaps some S3 caching layer would help (Amazon EMR or Alluxio?).
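Since the per-file cost dominates, cutting the number of objects written should also help. A sketch, assuming one file per partition directory is acceptable: repartition by the partition columns before the write, so each (ref_id, date, camera_id) combination is produced by a single task and becomes a single parquet file:

# Reuses df from the script above; one file per partition directory
# instead of one file per task per directory, so far fewer S3 round trips.
df.repartition('ref_id', 'date', 'camera_id') \
    .write.mode("overwrite") \
    .partitionBy('ref_id', 'date', 'camera_id') \
    .parquet('s3a://mybucket/tmp/test_data')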
[Comments]:
- Is Minio running on the same machine?
Tags: apache-spark pyspark parquet minio