【Question Title】: Unable to load AWS credentials from any provider in the chain - kinesis-kafka-connector
【Posted】: 2019-06-20 01:09:07
【Question Description】:

I am trying to use the Kafka-Kinesis-Connector, a connector that works with Kafka Connect to publish messages from Kafka to Amazon Kinesis Firehose, as described at https://github.com/awslabs/kinesis-kafka-connector, and I am getting the error below. I am using Cloudera version CDH-6.1.0-1.cdh6.1.0.p0.770702, which ships with Kafka 2.1.2 (0.10.0.1+kafka2.1.2+6).

I have exported the AWS credentials in the current session, but this does not work:

export AWS_ACCESS_KEY_ID="XXX"
export AWS_SECRET_ACCESS_KEY="YYYYY"
export AWS_DEFAULT_REGION="sssss"
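For these exports to take effect, they must be visible to the JVM that runs the Connect worker, i.e. set in the same shell session (or service environment) that launches `connect-standalone`. A minimal sketch, assuming placeholder key values and the region from the connector config below (`sudo` and systemd strip the environment by default, so a worker started that way will not see these variables):

```shell
# Export the credentials in the SAME shell that will start the worker.
# Values here are placeholders, not real credentials.
export AWS_ACCESS_KEY_ID="XXX"
export AWS_SECRET_ACCESS_KEY="YYYYY"
export AWS_DEFAULT_REGION="eu-central-1"

# Then launch the standalone worker from this same session, e.g.:
#   connect-standalone worker.properties kinesis-firehose-kafka-connector.properties
```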

My worker.properties is shown below:

bootstrap.servers=kafkanode:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
#internal.value.converter=org.apache.kafka.connect.storage.StringConverter
#internal.key.converter=org.apache.kafka.connect.storage.StringConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
internal.key.converter.schemas.enable=true
internal.value.converter.schemas.enable=true
offset.storage.file.filename=offset.log
schemas.enable=false
#Rest API
rest.port=8096
plugin.path=/home/opc/kinesis-kafka-connector-master/target/
#rest.host.name=

My kinesis-firehose-kafka-connector.properties is shown below:

name=kafka_kinesis_sink_connector
connector.class=com.amazon.kinesis.kafka.FirehoseSinkConnector
tasks.max=1
topics=OGGTest
region=eu-central-1
batch=true
batchSize=500
batchSizeInBytes=1024
deliveryStream=kafka-s3-stream

The error output is shown below:

  [2019-01-26 11:32:24,446] INFO Kafka version : 2.0.0-cdh6.1.0 (org.apache.kafka.common.utils.AppInfoParser:109)
  [2019-01-26 11:32:24,446] INFO Kafka commitId : unknown (org.apache.kafka.common.utils.AppInfoParser:110)
  [2019-01-26 11:32:24,449] INFO Created connector kafka_kinesis_sink_connector (org.apache.kafka.connect.cli.ConnectStandalone:104)
  [2019-01-26 11:32:25,296] ERROR WorkerSinkTask{id=kafka_kinesis_sink_connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
  com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain
    at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:131)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1164)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:762)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:724)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClient.doInvoke(AmazonKinesisFirehoseClient.java:826)
    at com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClient.invoke(AmazonKinesisFirehoseClient.java:802)
    at com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClient.describeDeliveryStream(AmazonKinesisFirehoseClient.java:451)
    at com.amazon.kinesis.kafka.FirehoseSinkTask.validateDeliveryStream(FirehoseSinkTask.java:95)
    at com.amazon.kinesis.kafka.FirehoseSinkTask.start(FirehoseSinkTask.java:77)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:301)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:190)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
 [2019-01-26 11:32:25,299] ERROR WorkerSinkTask{id=kafka_kinesis_sink_connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
 [2019-01-26 11:32:33,375] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
 [2019-01-26 11:32:33,375] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:223)

Please advise. Thanks in advance!

【Comments】:

  • I assume you ran those exports before connect-standalone? If so, that works fine for me
  • @cricket_007 Thanks for your reply. I did try that, but it does not work. Just wondering: did you change or customize the Kinesis Kafka connector files before building the jar file (mvn install), as described on the source GitHub site? Please advise.
  • I have only done this with the S3 connector, not the Kinesis connector. But the AWSCredentialsProviderChain should be the same regardless. The exports do not need to be set for mvn package (you don't need to actually install this connector, just package it)
  • @cricket_007, thanks for your input.

Tags: java apache-kafka apache-kafka-connect amazon-kinesis-firehose


【Solution 1】:

Place a ~/.aws/credentials file in the home directory of the OS user that runs the Connect worker process. Most AWS SDKs and the AWS CLI recognize these credentials. Create the credentials file with the following AWS CLI command:

aws configure

You can also create the credentials file manually with a text editor. The file should contain lines in the following format:

[default]
aws_access_key_id = 
aws_secret_access_key = 

Note: When creating the credentials file, make sure it is created by the same user that runs the Connect worker process, and that it is placed in that user's home directory. Otherwise, the connector will not be able to find the credentials.
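The manual steps above can be sketched as a short shell script (assumptions: `$HOME` is the home directory of the user that launches the worker, and the key values are placeholders to be replaced with real credentials):

```shell
# Create ~/.aws/credentials by hand -- equivalent to running `aws configure`.
# Run this as the SAME OS user that starts the Connect worker.
mkdir -p "$HOME/.aws"
cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = XXX
aws_secret_access_key = YYYYY
EOF
chmod 600 "$HOME/.aws/credentials"   # keep the credentials file private
```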

【Discussion】:
