【Title】: Confluent Kafka Connect Docker Container Issue
【Posted】: 2018-12-14 08:06:24
【Question】:

I am using the following docker-compose snippet:

connect:
    image: confluentinc/cp-kafka-connect:latest
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'kafka:9092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_PLUGIN_PATH: /usr/share/java
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'

The container appears to start up fine, but when I try to add an HDFS sink connector via the Connect container's REST API:

curl -s -X POST -H 'Content-Type: application/json' --data \
@confluent_hdfs.json http://localhost:8083/connectors

where the confluent_hdfs.json file contains:

{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "1000",
    "name": "hdfs-sink"
  }
}
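
As a sanity check, the same configuration can also be validated against the worker before creating the connector, using Kafka Connect's config validation endpoint. A rough example (the payload is just the contents of the "config" object; depending on the Connect version the class name in the URL may need to be the fully qualified connector class):

curl -s -X PUT -H 'Content-Type: application/json' --data \
'{"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector", "tasks.max": "1", "topics": "test", "hdfs.url": "hdfs://localhost:9000", "flush.size": "1000", "name": "hdfs-sink"}' \
http://localhost:8083/connector-plugins/io.confluent.connect.hdfs.HdfsSinkConnector/config/validate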

I receive a 500 HTTP response. Checking the Connect container's logs shows:

WARN /connectors (org.eclipse.jetty.server.HttpChannel)
javax.servlet.ServletException: javax.servlet.ServletException:
org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: 
io/confluent/connect/hdfs/HdfsSinkConnectorConfig

While researching this issue, I came across the following post:

https://github.com/confluentinc/kafka-connect-hdfs/issues/273

which suggests the plugin path is wrong. However, as far as I can tell I have set it correctly to /usr/share/java, and I can also see the correctly configured symlinks that the post refers to.
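
One quick way to verify this from the host is to list the plugin path inside the running container, for example (the kafka-connect-hdfs directory name is where the Confluent image normally places this connector, so adjust if your layout differs):

# Show the HDFS-related entries (including symlinks) under the plugin path
docker exec connect ls -la /usr/share/java | grep -i hdfs

# List the JARs shipped with the HDFS connector itself
docker exec connect ls /usr/share/java/kafka-connect-hdfs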

Furthermore, when executing the request:

curl http://localhost:8083/connector-plugins

I see the following response:

[
{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"4.1.1"},
{"class":"io.confluent.connect.hdfs.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"1.1.1-cp1"}
]

So I'm not sure whether I'm missing something in the compose file, or something else entirely?

【Comments】:

  • Take a look at the example at github.com/confluentinc/cp-demo/blob/4.1.1-post/… but even then, it looks like something is missing from your image. Perhaps you haven't pulled 4.1.1 yet? Try pulling it explicitly and verifying what exists in /usr/share/java inside the container
  • "hdfs.url": "hdfs://localhost:9000" assumes HDFS is available on the same container as Kafka Connect, which is not the case for these Docker images
  • Hi, apologies for that - I picked a poor example for the filesystem location. You are completely right; in my actual code the URL is a remote one, not in the same container. Again, sorry about that.

Tags: docker apache-kafka apache-kafka-connect confluent-platform


【Solution 1】:

Thanks to dawsaw. Working through the example you suggested, I realized the problem was with a connector plugin I was installing by mounting its folder as a volume. Unfortunately I had mounted the connector into the wrong part of the Connect container, which apparently broke the container's ability to run correctly.

What I ended up with that works is:

connect:
    image: confluentinc/cp-kafka-connect:4.1.1
    container_name: connect
    restart: always
    ports:
      - "8083:8083"
    depends_on:
      - zookeeper
      - kafka
    volumes:
      - $PWD/confluentinc-kafka-connect-rabbitmq-1.0.0-preview:/usr/share/java/confluentinc-kafka-connect-rabbitmq-1.0.0-preview
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_REPLICATION_FACTOR: 1
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_PLUGIN_PATH: "/usr/share/java"

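After recreating the container with this compose file, a quick way to confirm that both the bundled connectors and the mounted plugin were picked up is to query the plugin listing again, for example (the exact class name returned depends on the connector you mounted):

docker-compose up -d connect
curl -s http://localhost:8083/connector-plugins | grep -i rabbitmq
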
Thanks again for your help with this, and apologies for the poorly chosen example in the snippet I originally posted.

【Discussion】:
