[Question title]: Unable to make requests to Kafka container from another container using kafka-python
[Posted]: 2019-11-05 13:23:04
[Question]:

Environment:

services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - 2181
  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
      #- 8004:8004
    links:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_CREATE_TOPICS: "foo:10:1"
      # JMX_PORT: 8004
  clickhouse-01:
    image: yandex/clickhouse-server
    hostname: clickhouse-01
    container_name: clickhouse-01
    ports:
      - 9001:9000
    volumes:
      - ./config/config.xml:/etc/clickhouse-server/config.xml
      - ./config/metrika.xml:/etc/clickhouse-server/metrika.xml
      - ./config/macros/macros-01.xml:/etc/clickhouse-server/config.d/macros.xml
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    depends_on:
      - "zookeeper"
  clickhouse-02:
    image: yandex/clickhouse-server
    hostname: clickhouse-02
    container_name: clickhouse-02
    ports:
      - 9002:9000
    volumes:
      - ./config/config.xml:/etc/clickhouse-server/config.xml
      - ./config/metrika.xml:/etc/clickhouse-server/metrika.xml
      - ./config/macros/macros-02.xml:/etc/clickhouse-server/config.d/macros.xml
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    depends_on:
      - "zookeeper"
  clickhouse-03:
    image: yandex/clickhouse-server
    hostname: clickhouse-03
    container_name: clickhouse-03
    ports:
      - 9003:9000
    volumes:
      - ./config/config.xml:/etc/clickhouse-server/config.xml
      - ./config/metrika.xml:/etc/clickhouse-server/metrika.xml
      - ./config/macros/macros-03.xml:/etc/clickhouse-server/config.d/macros.xml
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    depends_on:
      - "zookeeper"

Querying Kafka via the Zookeeper container:

bash-4.4# /opt/kafka/bin/kafka-topics.sh --list --zookeeper zookeeper:2181
__consumer_offsets
foo
raw_trap

Netstat results from inside the Zookeeper container:

root@0a5f9a441da3:/opt/zookeeper-3.4.13# netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0a5f9a441da3:2181       kafka_1:58622 ESTABLISHED
tcp        0      0 0a5f9a441da3:2181       clickhouse-02.cli:60728 ESTABLISHED
tcp        0      0 0a5f9a441da3:2181       clickhouse-01.cli:56448 ESTABLISHED
tcp        0      0 0a5f9a441da3:2181       clickhouse-03.cli:39656 ESTABLISHED

Telnet to the broker from the container running kafka-python:

root@f10fe1b58fa9:~# telnet kafka 9092
Trying 172.18.0.8...
Connected to kafka.
Escape character is '^]'.

Kafka error resulting from the telnet connection:

kafka_1           | [2019-06-23 13:38:05,350] WARN [SocketServer brokerId=1019] Unexpected error from /172.18.0.5; closing connection (org.apache.kafka.common.network.Selector)
kafka_1           | org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1903520116 larger than 104857600)
kafka_1           |     at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
kafka_1           |     at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
kafka_1           |     at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
kafka_1           |     at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
kafka_1           |     at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
kafka_1           |     at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
kafka_1           |     at kafka.network.Processor.poll(SocketServer.scala:830)
kafka_1           |     at kafka.network.Processor.run(SocketServer.scala:730)
kafka_1           |     at java.lang.Thread.run(Thread.java:748)
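As an aside, the "Invalid receive" size in that log is not random. Kafka's wire protocol frames every request with a 4-byte big-endian length prefix (rejected here because it exceeds the broker's 104857600-byte default limit), so whatever text is typed into a raw telnet session gets parsed as a length. A minimal sketch, needing no broker, that decodes the reported size back into the bytes the broker actually read:

```python
import struct

# Kafka frames each request as a 4-byte big-endian length followed by the payload.
# Text typed into a raw telnet session is therefore interpreted as a length prefix.
reported_size = 1903520116  # the "Invalid receive" size from the broker log above

# Re-encode the reported length to recover the four bytes the broker read.
first_four_bytes = struct.pack(">I", reported_size)
print(first_four_bytes.decode("ascii"))  # -> quit
```

In other words, typing `quit` into the telnet session is what produced that warning; it does not by itself indicate a networking problem.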

Error when attempting to send data to a Kafka topic with python:

>>> from kafka import KafkaProducer
>>> producer = KafkaProducer(bootstrap_servers=['kafka:9092'])
>>> producer
<kafka.producer.kafka.KafkaProducer object at 0x7ff84417b320>
>>> producer.send('foo', b'raw_bytes')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/kafka/producer/kafka.py", line 564, in send
    self._wait_on_metadata(topic, self.config['max_block_ms'] / 1000.0)
  File "/usr/local/lib/python3.7/site-packages/kafka/producer/kafka.py", line 691, in _wait_on_metadata
    "Failed to update metadata after %.1f secs." % (max_wait,))
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs.

I have scoured the web several times trying to find a solution. I first made sure that KAFKA_ADVERTISED_HOST_NAME in the container was correct, and tried changing it, without success. When I change the endpoint in the bootstrap_servers=['kafka:9092'] entry, I get this error:

>>> consumer = KafkaConsumer('foo', 
...                          group_id='test-group',
...                          bootstrap_servers=['localhost:9092'])
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
  File "/usr/local/lib/python3.7/site-packages/kafka/consumer/group.py", line 353, in __init__
    self._client = KafkaClient(metrics=self._metrics, **self.config)
  File "/usr/local/lib/python3.7/site-packages/kafka/client_async.py", line 239, in __init__
    self.config['api_version'] = self.check_version(timeout=check_timeout)
  File "/usr/local/lib/python3.7/site-packages/kafka/client_async.py", line 865, in check_version
    raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable

So it looks like I may be establishing a connection, but perhaps fundamentally misunderstand the request I am trying to make with the producer.

Here are the docs and examples for the python library I am currently testing: https://kafka-python.readthedocs.io/en/master/usage.html#kafkaconsumer

Edit: Using the consumer, I have successfully retrieved messages from our production Kafka environment running on bare metal.

[Discussion]:

    Tags: python docker apache-kafka kafka-python


    [Solution 1]:

    Quoting Robin Moffatt's excellent blog post on Kafka listeners and docker:

        If you are using docker, you need to set KAFKA_ADVERTISED_LISTENERS to the external address (host/IP) so that clients can correctly connect to it. Otherwise they will try to connect to the internal host address, and if that is not reachable, problems ensue.

    https://rmoff.net/2018/08/02/kafka-listeners-explained/

    A Kafka client connection is actually a two-step process: first connect to a bootstrap server to request metadata about the whole cluster, then connect to one or more cluster nodes using the advertised listener name and port.
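For the wurstmeister image, the listener setup that post describes could look roughly like this. A minimal sketch, not taken from the question: the INSIDE/OUTSIDE listener names and port 9094 are illustrative choices, assuming clients both inside the docker network and on the host need access.

```yaml
  kafka:
    image: wurstmeister/kafka
    ports:
      - 9094:9094          # only the external listener is published to the host
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

Containers on the docker network would then pass bootstrap_servers=['kafka:9092'], while clients on the host would use localhost:9094; each set of clients is handed back an advertised address it can actually reach.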

    [Discussion]:

    • Thanks for the info. I looked into it, but I may not have been clear: I am accessing the kafka container from inside the docker network, not from my localhost.
    [Solution 2]:

    I seem to have solved this by using a different container version:

      kafka:
        image: wurstmeister/kafka:2.11-0.11.0.3
    

    My consumer object can now retrieve the list of topics from the broker, which I was previously unable to do.

    [Discussion]:

    • As of today (June 2019) the latest version of Kafka is 2.2.1, and this image is 0.11.0.3 (the 2.11 part is the scala version), so consider using a newer image if one is available.
    • @HansJespersen I got the same error when I pulled the latest version. I am going to pull each one and see what happens.
    • Alternatively, try the confluentinc docker images, though some of the parameters differ slightly.
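The confluentinc alternative mentioned in the last comment could look roughly like this in the same compose file. A minimal single-broker sketch, not from the thread: the 5.2.1 tag and the replication-factor override are illustrative assumptions.

```yaml
  kafka:
    image: confluentinc/cp-kafka:5.2.1
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # cp-kafka uses KAFKA_ADVERTISED_LISTENERS rather than the
      # KAFKA_ADVERTISED_HOST_NAME/KAFKA_ADVERTISED_PORT pair above.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      # Required for a single broker; the image defaults assume a larger cluster.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

As with the wurstmeister image, the advertised listener must be an address the client can reach, so containers on the docker network would keep using kafka:9092.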