[Posted]: 2018-12-14 08:06:24
[Question]:
I am using the following docker-compose snippet:
connect:
  image: confluentinc/cp-kafka-connect:latest
  hostname: connect
  container_name: connect
  depends_on:
    - zookeeper
    - kafka
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: 'kafka:9092'
    CONNECT_REST_ADVERTISED_HOST_NAME: connect
    CONNECT_GROUP_ID: compose-connect-group
    CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
    CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_PLUGIN_PATH: /usr/share/java
    CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
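For reference, a quick way to confirm that this environment block actually reaches the worker process (a sketch that assumes the container_name connect from the snippet above and standard shell tools inside the image):

# Show the CONNECT_* variables inside the running container; the Confluent
# image translates these into the worker's properties at startup
docker exec connect env | grep '^CONNECT_'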
The container seems to start up fine, but when I try to add an HDFS sink connector through the connect container's REST API:
curl -s -X POST -H 'Content-Type: application/json' --data \
@confluent_hdfs.json http://localhost:8083/connectors
where the confluent_hdfs.json file contains:
{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "1000",
    "name": "hdfs-sink"
  }
}
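As a sanity check, the same configuration can be validated against the installed plugin without creating the connector, using Kafka Connect's standard REST validate endpoint. Note that this endpoint expects just the flat config map, not the name/config wrapper shown above; the host, port, and connector class below are taken from the snippets in this question.

# Validate the HDFS sink config against the installed plugin
curl -s -X PUT -H 'Content-Type: application/json' \
  http://localhost:8083/connector-plugins/io.confluent.connect.hdfs.HdfsSinkConnector/config/validate \
  --data '{
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "1000",
    "name": "hdfs-sink"
  }'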
I get a 500 HTTP response. Inspecting the connect container's logs shows:
WARN /connectors (org.eclipse.jetty.server.HttpChannel)
javax.servlet.ServletException: javax.servlet.ServletException:
org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError:
io/confluent/connect/hdfs/HdfsSinkConnectorConfig
While investigating this issue I came across the following post:
https://github.com/confluentinc/kafka-connect-hdfs/issues/273
which suggests the plugin path is wrong. However, as far as I can tell I have set it correctly to /usr/share/java, and I can also see the correctly configured symlinks that the post refers to.
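One way to check this from outside the container is a sketch like the following; it assumes the standard layout of the Confluent image (the kafka-connect-hdfs directory name is an assumption) and common shell tools being available in the image:

# List the HDFS connector jars the worker should be loading
docker exec connect ls -l /usr/share/java/kafka-connect-hdfs/

# The linked issue talks about symlinks under /usr/share/java, so list those too
docker exec connect find /usr/share/java -maxdepth 1 -type l -ls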
Furthermore, when I run the request:
curl http://localhost:8083/connector-plugins
I see the following response:
[
{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"4.1.1"},
{"class":"io.confluent.connect.hdfs.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"1.1.1-cp1"}
]
So I am not sure whether I am missing something in the compose file, or something else entirely?
[Comments]:
-
Take a look at the example there: github.com/confluentinc/cp-demo/blob/4.1.1-post/… But even so, it looks like something is missing from your image. Maybe you haven't pulled 4.1.1 yet? Try pulling it explicitly and verifying what exists in /usr/share/java inside the container (see the sketch after these comments).
-
"hdfs.url": "hdfs://localhost:9000" assumes HDFS is available in the same container as Kafka Connect, which is not the case with these Docker images.
-
Hi, very sorry about that - I picked a poor example for the location of my filesystem. You are completely right - my actual config uses a remote URL that is not in the same container. Again, sorry about that.
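Regarding the suggestion above to pull 4.1.1 explicitly, a sketch of that check (the image tag and paths are assumptions based on the Confluent images and the versions reported by /connector-plugins):

# Pin and pull the image version explicitly instead of relying on :latest
docker pull confluentinc/cp-kafka-connect:4.1.1

# After recreating the container, verify what the image actually ships
docker exec connect ls /usr/share/java | grep -i hdfs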
Tags: docker apache-kafka apache-kafka-connect confluent-platform