[Posted]: 2021-11-10 09:51:26
[Question]:
I am running a program that starts from a message on a topic, consumes it, processes it, commits the next offset, and publishes a new message to the same topic, all transactionally. I have the following (simplified) trace:
Fetch READ_COMMITTED at offset 20 for partition test-topic-0
processing message at offset 20
Committed offset 21 for partition test-topic-0
Sending PRODUCE
COMMITTING_TRANSACTION
Fetch READ_COMMITTED at offset 22 for partition test-topic-0
processing message at offset 22 <==== first time
...rebalance...
Setting offset for partition test-topic-0 to the committed offset FetchPosition{offset=21
Committed offset 23 for partition test-topic-0
Sending PRODUCE
COMMITTING_TRANSACTION
Fetch READ_COMMITTED at offset 24 for partition test-topic-0
stale fetch response for partition test-topic-0 since its offset 24 does not match the expected offset FetchPosition{offset=21
Fetch READ_COMMITTED at offset 21 for partition test-topic-0
processing message at offset 22 <==== second time
So I processed message 22 twice. Is it expected that Kafka simply rewinds the consumer offset to a point before the committed offset? Does the ordering in the log look correct? I can update the question with the full log if necessary, but I don't think there is anything useful in it.
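For context, the transactional consume-process-produce loop described above is normally built with `KafkaProducer.sendOffsetsToTransaction`, so that the offset commit and the produced record commit atomically together. The sketch below illustrates that pattern; the topic name, group id, transactional id, and `process` helper are placeholders, and error/abort handling is omitted:

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalLoop {
    public static void main(String[] args) {
        Properties cprops = new Properties();
        cprops.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cprops.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");            // placeholder
        cprops.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed"); // matches the READ_COMMITTED fetches in the trace
        cprops.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");       // offsets are committed inside the transaction instead
        cprops.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cprops.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties pprops = new Properties();
        pprops.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pprops.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-tx-id");      // placeholder
        pprops.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pprops.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cprops);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pprops)) {
            producer.initTransactions();
            consumer.subscribe(List.of("test-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> rec : records) {
                    producer.beginTransaction();
                    // Process the message and publish the result back to the same topic.
                    producer.send(new ProducerRecord<>("test-topic", rec.key(), process(rec.value())));
                    // Commit the *next* offset as part of the same transaction, so the
                    // consumed position and the produced record commit (or abort) together.
                    producer.sendOffsetsToTransaction(
                        Map.of(new TopicPartition(rec.topic(), rec.partition()),
                               new OffsetAndMetadata(rec.offset() + 1)),
                        consumer.groupMetadata());
                    producer.commitTransaction();
                }
            }
        }
    }

    // Stand-in for the real processing step.
    static String process(String value) {
        return value;
    }
}
```

Note that even with this pattern, exactly-once applies to what is written to Kafka, not to side effects inside `process`: after a rebalance the consumer resumes from the last *committed* offset, so a record whose transaction had not yet committed at that point is redelivered and processed again, as in the trace above.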
[Discussion]:
Tags: apache-kafka kafka-consumer-api message kafka-producer-api kafka-transactions-api