[Posted]: 2019-03-08 14:32:55
[Problem description]:
I want to perform some action when my code hits org.apache.kafka.clients.consumer.OffsetOutOfRangeException. I tried this check:
if(e.getCause().getCause() instanceof OffsetOutOfRangeException)
but I still catch a SparkException, not an OffsetOutOfRangeException.
ERROR Driver:86 - Error in executing stream
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 11, localhost, executor 0): org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {dns_data-0=23245772}
at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:588)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:354)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1000)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:136)
at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:68)
at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:271)
at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:231)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {dns_data-0=23245772}
at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:588)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:354)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1000)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:136)
at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:68)
at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:271)
at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:231)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
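A check pinned to a fixed depth, such as e.getCause().getCause(), breaks as soon as the nesting depth differs from what you assumed. One common alternative is to walk the entire cause chain. The sketch below is illustrative only: the class and method names are made up, and plain JDK exceptions stand in for the Kafka/Spark ones so the example is self-contained.

```java
// Illustrative sketch: search the whole Throwable cause chain for a given
// exception type instead of assuming it sits at a fixed nesting depth.
// In the real code you would pass OffsetOutOfRangeException.class.
public class CauseChain {

    /** Returns true if t, or any cause in its chain, is an instance of type. */
    static boolean hasCause(Throwable t, Class<? extends Throwable> type) {
        // Throwable chains are normally short; bound the walk anyway in case
        // a misbehaving library builds a cyclic cause chain.
        int depth = 0;
        for (Throwable cur = t; cur != null && depth < 100; cur = cur.getCause(), depth++) {
            if (type.isInstance(cur)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Two levels of nesting, analogous to SparkException -> ... -> OffsetOutOfRangeException.
        Throwable nested = new RuntimeException("outer",
                new IllegalStateException("inner"));
        System.out.println(hasCause(nested, IllegalStateException.class)); // true
        System.out.println(hasCause(nested, ArithmeticException.class));   // false
    }
}
```

Whether this helps here depends on how Spark surfaces the executor failure to the driver: the trace above does show a "Caused by" entry, but Spark sometimes reports the remote exception only in the SparkException message text, in which case no instanceof check on the cause chain will match and you would have to inspect getMessage() instead.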
[Discussion]:
- @anandbabu - 1) That is not the complete stack trace. There are no stack frames! Show us the full stack trace... including all "Caused by" secondary traces. 2) Put the extra information into your question, not into comments. (Use the "edit" button!)
- There is a reason we ask to see the real stack trace. Namely... to check whether what you have actually has an OffsetOutOfRangeException nested inside it. Yes, we can see the name in the message, but that proves nothing.
- For the people guessing at answers: the instanceof and e.getClass() versions test the same thing. The real question is whether OffsetOutOfRangeException is actually a nested exception of any e... and how deeply it is nested. That is why we need the stack trace.
- How did this question get rated so negatively? It is not a dumb question at all, and more importantly: everyone has their own answer, and none of them agree!
- At the end of the day we have an OP whose code (apparently) works but who does not understand why. Future readers will come away entirely unenlightened. The real purpose of downvotes is to filter out questions that do not help the general reader, i.e. to guide the curation of the StackOverflow knowledge base. From that perspective, the downvotes are justified.
Tags: java apache-spark exception apache-kafka nested