[Posted]: 2018-08-29 10:48:51
[Question]:
I am trying to run a job on Google Dataflow with the following flow:
essentially, take a single data source, filter it on certain values from each dictionary, and create a separate output for each filter condition.
I wrote the following code:
# List of values to filter by
x_list = [1, 2, 3]

with beam.Pipeline(options=PipelineOptions().from_dictionary(pipeline_params)) as p:
    # Read in newline-delimited JSON data - each line is a dictionary
    log_data = (
        p
        | "Create " + input_file >> beam.io.textio.ReadFromText(input_file)
        | "Load " + input_file >> beam.Map(json.loads)
    )

    # For each value in x_list, filter log_data for dictionaries containing
    # that value & write out to a separate file
    for i in x_list:
        # Keep dictionaries whose given key matches the filter value
        filtered_log = log_data | "Filter_" + str(i) >> beam.Filter(lambda x: x['key'] == i)

        # Do additional processing
        processed_log = process_pcoll(filtered_log, event)

        # Write final file
        output = (
            processed_log
            | 'Dump_json_' + filename >> beam.Map(json.dumps)
            | "Save_" + filename >> beam.io.WriteToText(output_fp + filename, num_shards=0, shard_name_template="")
        )
At the moment it only processes one value from the list. I know I probably need to use ParDo, but I'm not sure how to fit it into my flow.
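One likely culprit, independent of ParDo, is Python's late binding of loop variables: every `lambda x: x['key'] == i` created in the loop closes over the same variable `i`, so by the time the deferred pipeline graph actually executes, all filter branches may see the same value. A minimal pure-Python sketch of the pitfall and the usual fix (binding the current value as a default argument):

```python
# Late binding: all three lambdas share the same loop variable `i`,
# which holds 3 once the loop has finished.
late = [lambda x: x == i for i in [1, 2, 3]]
print([f(3) for f in late])   # → [True, True, True]

# Fix: capture the current value of `i` as a default argument,
# so each lambda gets its own bound copy.
bound = [lambda x, i=i: x == i for i in [1, 2, 3]]
print([f(1) for f in bound])  # → [True, False, False]
```

Applied to the pipeline above, that would be `beam.Filter(lambda x, i=i: x['key'] == i)`; `beam.Filter` also forwards extra positional arguments to the callable, so `beam.Filter(lambda x, val: x['key'] == val, i)` should work as well.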
[Discussion]:
Tags: python apache-beam