【Posted】: 2021-08-01 02:19:18
【Problem Description】:
I am running Flink on Amazon EMR and want to stream my pipeline results to an S3 bucket.
I am using Flink version 1.11.2.
Here is a snippet of what the code currently looks like:
import java.util.concurrent.TimeUnit

import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy
import org.apache.flink.streaming.api.scala.AsyncDataStream

// Row-format sink that writes each record as a UTF-8 string to the S3 path below.
val outputPath = new Path("s3://test/flinkStreamTest/failureLogs/dt=2021-04-15/")
val sink: StreamingFileSink[String] = StreamingFileSink
  .forRowFormat(outputPath, new SimpleStringEncoder[String]("UTF-8"))
  .withRollingPolicy(
    DefaultRollingPolicy.builder()
      .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
      .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
      .withMaxPartSize(1024 * 1024 * 1024)
      .build()
  )
  .build()

val enrichedStream = AsyncDataStream
  .unorderedWait(
    resConsumer,
    new AsyncElasticRequest(elasticIndexName, elasticHost, elasticPort),
    asyncTimeOut.toInt, TimeUnit.MILLISECONDS,
    asyncCapacity.toInt
  ) // this is my pipeline result; it returns a String

enrichedStream.addSink(sink)
env.execute("run pipeline") // this just runs the pipeline
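In case it matters: as I understand it, StreamingFileSink only finalizes in-progress part files when a checkpoint completes, so checkpointing has to be enabled for the sink to commit anything. A minimal sketch of that (the 1-minute interval is just a placeholder, not my actual setting):

// Sketch only: StreamingFileSink commits part files on checkpoint completion.
// The interval below is a placeholder value, not taken from the actual job.
env.enableCheckpointing(TimeUnit.MINUTES.toMillis(1))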
This is the error I am currently getting:
java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS
at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:61)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:260)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:396)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:185)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:167)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:106)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:290)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
at java.lang.Thread.run(Thread.java:748)
I have placed the s3-fs-hadoop jar in the plugins/s3-fs-hadoop folder. I also put the same s3-fs-hadoop jar in /usr/lib/flink/lib, in case Flink looks for it in that folder as well. Can someone please help me? I have searched and searched but can't seem to solve it.
Thanks
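For reference, this is the layout I believe the plugin mechanism expects (assuming /usr/lib/flink is the Flink home on EMR, and with the jar version matching the Flink distribution, 1.11.2 here):

/usr/lib/flink/plugins/
    s3-fs-hadoop/
        flink-s3-fs-hadoop-1.11.2.jar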
【Question Discussion】:
Tags: scala hadoop amazon-s3 apache-flink amazon-emr