【Question Title】: Flink s3 read error: Data read has a different length than the expected
【Posted】: 2019-10-16 23:42:12
【Question】:

Using Flink 1.7.0 (also seen on Flink 1.8.0). When reading gzip-compressed objects from S3 through Flink's .readFile source, we hit frequent but somewhat random errors:

org.apache.flink.fs.s3base.shaded.com.amazonaws.SdkClientException: Data read has a different length than the expected: dataLength=9713156; expectedLength=9770429; includeSkipped=true; in.getClass()=class org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:151)
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:93)
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:76)
    at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.closeStream(S3AInputStream.java:529)
    at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.close(S3AInputStream.java:490)
    at java.io.FilterInputStream.close(FilterInputStream.java:181)
    at org.apache.flink.fs.s3.common.hadoop.HadoopDataInputStream.close(HadoopDataInputStream.java:89)
    at java.util.zip.InflaterInputStream.close(InflaterInputStream.java:227)
    at java.util.zip.GZIPInputStream.close(GZIPInputStream.java:136)
    at org.apache.flink.api.common.io.InputStreamFSInputWrapper.close(InputStreamFSInputWrapper.java:46)
    at org.apache.flink.api.common.io.FileInputFormat.close(FileInputFormat.java:861)
    at org.apache.flink.api.common.io.DelimitedInputFormat.close(DelimitedInputFormat.java:536)
    at org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator$SplitReader.run(ContinuousFileReaderOperator.java:336)

In a given job, we usually see many/most of the file reads succeed, but almost always at least one fails (out of, say, 50 files).

It appears the error actually comes from the AWS client, so perhaps Flink has nothing to do with it, but I'm hoping someone can shed light on how to make this work reliably.

When the error occurs, it ends up killing the source and cancelling all connected operators. I'm still new to Flink, but I would think this is something that could be recovered from a previous snapshot? Should I expect Flink to retry reading the file when this kind of exception happens?
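Whether Flink retries after such an exception depends on the job's restart strategy: with no restart strategy configured, a single failed read fails the whole job. A minimal sketch of a flink-conf.yaml fragment that makes the job restart on failure (the attempt count and delay are illustrative values, not from the original post; in Flink 1.7/1.8, checkpointing itself is typically enabled in code via env.enableCheckpointing(...) so that the restart resumes from the last completed snapshot rather than from scratch):

```yaml
# flink-conf.yaml (sketch): retry the job a few times instead of
# failing permanently on the first SdkClientException.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```

With checkpointing enabled, the file-reading source tracks which splits have been processed in its checkpointed state, so a restart should not re-read files that were already completed.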

【Comments】:

Tags: amazon-s3 apache-flink


【Solution 1】:

Perhaps you can try adding more connections for s3a:

    flink:
    ...
        config: |
          fs.s3a.connection.maximum: 320
    
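The snippet above looks like a Helm-style deployment config; if you are editing flink-conf.yaml directly instead, the same Hadoop s3a key can be set there (a sketch — the value 320 is the answer's suggestion, not a verified tuning, and the second key is a standard Hadoop s3a retry option added here as an assumption, not part of the original answer):

```yaml
# flink-conf.yaml (sketch): widen the S3A connection pool so parallel
# file splits don't starve each other of HTTP connections.
fs.s3a.connection.maximum: 320
# Optionally let the AWS client retry failed requests more times.
fs.s3a.attempts.maximum: 20
```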

【Discussion】:
