【Posted】: 2021-01-05 21:25:30
【Problem description】:
I have a pipeline that fetches data from BigQuery and writes it to GCS; however, if I find any rejects, I want to write them to a separate BigQuery table. I collect the rejects into a global list variable and then load that list into a BigQuery table. This works fine when I run it locally, because the pipelines run in the right order. When I run it with DataflowRunner, the order is not guaranteed (I need pipeline1 to run before pipeline2). Is there a way to have dependent pipelines in Dataflow using Python? Or, if this can be solved in a better way, please suggest it. Thanks in advance.
with beam.Pipeline(options=PipelineOptions(pipeline_args)) as pipeline1:
    data = (pipeline1
            | 'get data' >> beam.io.Read(beam.io.BigQuerySource(query=..., use_standard_sql=True))
            | 'combine output to list' >> beam.combiners.ToList()
            | 'transform' >> beam.Map(lambda x: somefunction)  # rejects are collected into a global list variable in this function's except block
            ....etc
            | 'to gcs' >> beam.io.WriteToText(output)
            )

# Load the rejects gathered in the pipeline above into BigQuery
with beam.Pipeline(options=PipelineOptions(pipeline_args)) as pipeline2:
    rejects = (pipeline2
               | 'create pipeline' >> beam.Create(reject_list)
               | 'to json format' >> beam.Map(lambda data: {.....})
               | 'to bq' >> beam.io.WriteToBigQuery(....)
               )
【Discussion】:
-
@R.Esteves Thanks for the reply. I did try pipeline1.run().wait_until_finish(), but it does not work in Dataflow with Python.
-
Have you tried using your first PCollection as the input to the second pipeline?
-
Are you suggesting something like this? I get assert isinstance(pbegin, pvalue.PBegin) AssertionError:

with beam.Pipeline(options=PipelineOptions(pipeline_args)) as pipeline1:
    data = (pipeline1 | 'get data' >> ....)
# Load the rejects gathered in the pipeline above into BigQuery
with beam.Pipeline(options=PipelineOptions(pipeline_args)) as pipeline2:
    rejects = (data | 'create pipeline' >> beam.Create(reject_list) | .....)
-
Try putting both PCollections in the same pipeline, like this:

with beam.Pipeline(options=PipelineOptions(pipeline_args)) as pipeline1:
    data = (pipeline1 | 'get data' >> ....)
    # Load the rejects gathered above into BigQuery
    rejects = (data | 'create pipeline' >> beam.Create(reject_list) | .....)
Tags: python google-cloud-platform apache-beam dataflow google-dataflow