【Posted】: 2020-02-05 09:36:11
【Problem Description】:
I am trying to stream JSON Pub/Sub messages into a Spanner database, and insert_update works well. The Spanner table has a composite primary key, so the existing rows need to be deleted before the new data from Pub/Sub is inserted (so that only the latest data remains). Spanner's replace or insert/update mutations do not work in this case. I added this pipeline:
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.io.gcp.spanner.MutationGroup;
import org.apache.beam.sdk.io.gcp.spanner.SpannerConfig;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.StreamingOptions;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;
import com.google.cloud.spanner.Key;
import com.google.cloud.spanner.KeySet;
import com.google.cloud.spanner.Mutation;
import com.google.cloud.spanner.Struct;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
public class PubSubToSpannerPipeline {

  // JSON to TableData object
  public static class PubSubToTableDataFn extends DoFn<String, TableData> {
    @ProcessElement
    public void processElement(ProcessContext c) {
      .
      .
      .
    }
  }

  public interface PubSubToSpannerOptions extends PipelineOptions, StreamingOptions {
    .
    .
    .
  }
  public static void main(String[] args) {
    PubSubToSpannerOptions options = PipelineOptionsFactory
        .fromArgs(args)
        .withValidation()
        .as(PubSubToSpannerOptions.class);
    options.setStreaming(true);

    SpannerConfig spannerConfig = SpannerConfig.create()
        .withProjectId(options.getProjectId())
        .withInstanceId(options.getInstanceId())
        .withDatabaseId(options.getDatabaseId());

    Pipeline pipeLine = Pipeline.create(options);

    PCollection<TableData> tableDataMsgs = pipeLine
        .apply(PubsubIO.readStrings().fromSubscription(options.getInputSubscription()))
        .apply("ParsePubSubMessage", ParDo.of(new PubSubToTableDataFn()));

    // Window function
    PCollection<TableData> tableDataJson = tableDataMsgs
        .apply(Window.into(FixedWindows.of(Duration.standardMinutes(1))));
    PCollection<MutationGroup> upsertMutationGroup = tableDataJson.apply("TableDataMutation",
        MapElements.via(new SimpleFunction<TableData, MutationGroup>() {
          public MutationGroup apply(TableData input) {
            String objectId = input.objectId;

            // Read the existing rows for this object_id and delete them
            pipeLine.apply("ReadExistingData", SpannerIO.read()
                .withSpannerConfig(spannerConfig)
                .withQuery("SELECT object_id, mapped_object_id, mapped_object_name "
                    + "FROM TableName WHERE object_id = '" + objectId + "'"))
                .apply("MutationForExistingTableData",
                    ParDo.of(new DoFn<Struct, Mutation>() {
                      @ProcessElement
                      public void processElement(ProcessContext c) {
                        Struct str = c.element();
                        c.output(Mutation.delete("TableName", KeySet.newBuilder()
                            .addKey(Key.newBuilder()
                                .append(str.getString("object_id"))
                                .append(str.getString("mapped_object_id"))
                                .append(str.getString("mapped_object_name"))
                                .build())
                            .build()));
                      }
                    }))
                .apply("DeleteExistingTableData", SpannerIO.write().withSpannerConfig(spannerConfig));

            Mutation dataMutation = Mutation.newReplaceBuilder("TableName")
                .
                .
                .
                .build();

            List<Mutation> list = new ArrayList<Mutation>();
            List<Map<String, String>> mappingList = input.listOfObjectRows;
            for (Map<String, String> objectMap : mappingList) {
              list.add(Mutation.newReplaceBuilder("TableName")
                  .
                  .
                  .
                  .build());
            }

            return MutationGroup.create(dataMutation, list);
          }
        }));
    upsertMutationGroup.apply("WriteDataToSpanner", SpannerIO.write()
        .withSpannerConfig(spannerConfig)
        .grouped());

    // Run the pipeline.
    pipeLine.run().waitUntilFinish();
  }
}
class TableData implements Serializable {
  String objectId;
  List<Map<String, String>> listOfObjectRows;
}
The expectation is that the existing mapped data must be deleted from the table before the new data is inserted or updated.
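For reference, Cloud Spanner can delete every row sharing a key prefix in a single mutation, with no read step: `Mutation.delete` accepts `KeySet.prefixRange(Key.of(...))`, which matches all rows whose composite key begins with that prefix. A minimal, untested sketch of the `SimpleFunction` body built this way, assuming the composite primary key is `(object_id, mapped_object_id, mapped_object_name)` and reusing the table/column names from the question (the `newInsertOrUpdateBuilder` calls are illustrative):

```java
MapElements.via(new SimpleFunction<TableData, MutationGroup>() {
  public MutationGroup apply(TableData input) {
    // One mutation removes all existing rows for this object_id,
    // since Key.of(objectId) is a prefix of the composite key.
    Mutation delete = Mutation.delete(
        "TableName", KeySet.prefixRange(Key.of(input.objectId)));

    List<Mutation> inserts = new ArrayList<>();
    for (Map<String, String> row : input.listOfObjectRows) {
      inserts.add(Mutation.newInsertOrUpdateBuilder("TableName")
          .set("object_id").to(input.objectId)
          .set("mapped_object_id").to(row.get("mapped_object_id"))
          .set("mapped_object_name").to(row.get("mapped_object_name"))
          .build());
    }

    // The delete is the primary mutation; SpannerIO commits the
    // whole MutationGroup atomically in one transaction.
    return MutationGroup.create(delete, inserts);
  }
})
```

This avoids the nested `pipeLine.apply(...)` inside the `SimpleFunction`, which cannot work: a Beam pipeline's shape is fixed at construction time, so transforms cannot be added per element at execution time.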
【Discussion】:
Tags: google-cloud-dataflow google-cloud-pubsub google-cloud-spanner