【Posted】:2020-09-10 11:23:02
【Question】:
I have some data arriving on my Kafka topic "datasource" with the following schema (simplified here for demonstration):
{ "deal" : -1, "location": "", "value": -1, "type": "init" }
{ "deal": 123456, "location": "Mars", "value": 100.0, "type": "batch" },
{ "deal" 123457, "location": "Earth", "value", 200.0, "type": "batch" },
{ "deal": -1, "location": "", "value", -1, "type": "commit" }
This data comes from a batch run in which we take all deals and recalculate their value. Think of it as a start-of-day process: at that point there is a fresh set of data for all locations. Currently the init and commit messages are never sent to the real topic; the producer filters them out.
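(A minimal sketch of that producer-side filtering, using hypothetical names since the producer code is not shown here:)

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical sketch: init/commit markers never reach the real topic.
void maybeSend(KafkaProducer<Integer, PositionValue> producer, PositionValue msg) {
    if ("init".equals(msg.type) || "commit".equals(msg.type)) {
        return; // marker messages are dropped by the producer
    }
    producer.send(new ProducerRecord<>("datasource", msg));
}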
During the day, updates arrive as things change. These provide new data (in this example we can ignore overwriting existing deals, since that is handled by re-running the batch):
{ "deal": 123458, "location": "Mars", "value": 150.0, "type": "update" }
This data comes into the application as the KStream "positions".
Another topic, "locations", holds the list of possible locations. These are pulled into the Java kafka-streams application as a GlobalKTable:
{ "id": 1, "name": "Mars" },
{ "id": 2, "name": "Earth"}
The plan is to use a Java 9 kafka-streams application to aggregate these values, grouped by location. The output should look something like:
{ "id": 1, "location": "Mars", "sum": 250.0 }
{ "id": 2, "location": "Earth", "sum": 200.0 }
(Mars is the 100.0 from the batch plus the 150.0 update; Earth only has the 200.0 batch value.)
This is what I have so far:
StreamsBuilder builder = new StreamsBuilder();

/* snip: creating serdes, setting up stores, boilerplate */

final GlobalKTable<Integer, Location> locations = builder.globalTable(
        LOCATIONS_TOPIC,
        /* serdes, materialized, etc. */
);

final KStream<Integer, PositionValue> positions = builder.stream(
        POSITIONS_TOPIC,
        /* serdes, materialized, etc. */
);

/* The real thing is more than just a name, so a transformer is used to match
   locations to position values, and to filter out the ones we don't care about */
KStream<Location, PositionValue> joined = positions
        .transform(() -> new LocationTransformer(), POSITION_STORE)
        .peek((location, positionValue) ->
                LOG.debugv("Processed position {0} against location {1}", positionValue, location));

/* This is where it is grouped and aggregated */
joined.groupByKey(Grouped.with(locationSerde, positionValueSerde))
        .aggregate(Aggregation::new, /* initializer */
                (location, positionValue, aggregation) ->
                        aggregation.updateFrom(location, positionValue), /* adder */
                Materialized.<Location, Aggregation>as(aggrStoreSupplier)
                        .withKeySerde(locationSerde)
                        .withValueSerde(aggregationSerde));

Topology topo = builder.build();
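(LocationTransformer is referenced above but not shown; a minimal sketch of what such a transformer might look like, assuming it simply re-keys each record by its resolved Location and drops unmatched ones, could be:)

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

// Hypothetical sketch only: the real transformer matches locations to position
// values and filters out the ones we don't care about.
public class LocationTransformer
        implements Transformer<Integer, PositionValue, KeyValue<Location, PositionValue>> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        // the POSITION_STORE named in transform(...) would be fetched here
        // via context.getStateStore(...)
    }

    @Override
    public KeyValue<Location, PositionValue> transform(Integer key, PositionValue pv) {
        Location loc = lookupLocation(pv.location); // hypothetical lookup
        if (loc == null) {
            return null; // returning null drops the record
        }
        return KeyValue.pair(loc, pv); // re-key by location
    }

    @Override
    public void close() { }

    // Placeholder: the real app would resolve the name against the locations data
    private Location lookupLocation(String name) {
        Location loc = new Location();
        loc.name = name;
        return loc;
    }
}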
The problem I have is that this aggregates everything: the daily batch, plus the updates, plus the next daily batch, all get added together. Basically, I need a way to say "here comes the next set of batch data, reset against this", and I have no idea how to do that - please help!
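To make the issue concrete, for a single location the aggregation currently sees something like this (values are hypothetical):
Day 1 batch:  Mars 100.0           -> sum 100.0
Intraday:     Mars 150.0 (update)  -> sum 250.0
Day 2 batch:  Mars 120.0           -> sum 370.0 (wrong - it should restart from 120.0)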
Thanks
【Discussion】:
Tags: java apache-kafka apache-kafka-streams