【Title】Cannot connect to kafka docker container from logstash docker container
【Posted】2020-02-09 06:15:42
【Description】

I am trying to connect from a logstash Docker container to a kafka Docker container, but I always get the following message:

 Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

My docker-compose.yml file is:

version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - elk
    depends_on:
      - kafka

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    links:
      - kafka
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  zookeeper:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    container_name: zookeeper
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    networks:
      - elk
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

My logstash.conf file is:

input {
    kafka{
        bootstrap_servers => "kafka:9092"
        topics => ["logs"]
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
    }
}

All my containers run without problems, and I can send messages to the Kafka topic from outside the containers.
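
To narrow down where the connection fails, plain TCP reachability of each candidate address can be probed from inside the logstash container with a short script (a sketch; `broker_reachable` is our own helper, not part of any Kafka client):

```python
import socket

def broker_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refused connections, and timeouts
        return False

# From inside the logstash container, "localhost:9092" probes the logstash
# container itself, while "kafka:9092" probes the broker container:
for target in ("localhost", "kafka"):
    print(target, broker_reachable(target, 9092))
```

If `kafka:9092` is reachable but the client still reports `localhost/127.0.0.1:9092`, the problem is the advertised listener, not the network.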

【Comments】

    Tags: docker apache-kafka docker-compose logstash


    【Solution 1】

    Kafka's advertised listeners should be defined like this:

    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
    KAFKA_LISTENERS: PLAINTEXT://kafka:9092
    

    【Discussion】

    • Setting the Kafka advertised listeners this way prevents a non-dockerized application of mine from producing messages to Kafka via localhost. Is there a way to make it reachable both from inside and outside Docker?
    • In that case, replace kafka with the FQDN of the host you want to deploy it on.
    【Solution 2】

    You can use the host's IP address for Kafka's advertised listeners, so that both your Docker services and services running outside the Docker network can reach it.

    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://$HOST_IP:9092
    KAFKA_LISTENERS: PLAINTEXT://$HOST_IP:9092

    For reference, see this article: https://rmoff.net/2018/08/02/kafka-listeners-explained/

    【Discussion】

      【Solution 3】

      You need to define your listeners in terms of hostnames that can be resolved from the client. If the listener is localhost, then the client (logstash) will try to resolve it as localhost from within its own container, hence the error.

      I have written about this in detail here, but in essence you need this:

      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT

      (Each listener needs a unique name, so the host-facing one gets its own name and is mapped back to the PLAINTEXT protocol. With the compose file above, the two extra properties would also have to be passed as --override flags.)

      Then any container on the Docker network uses kafka:29092 to reach it, so the logstash config becomes:

      bootstrap_servers => "kafka:29092"
      

      Any client on the host itself keeps using localhost:9092.

      You can see this in action with Docker Compose here: https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/docker-compose.yml#L40
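
The failure mode in the question reduces to a toy model of the two-step bootstrap: the client first reaches the broker via the bootstrap address, then reconnects to whatever address the broker *advertises* (the function below is purely illustrative, not part of any Kafka client API):

```python
def address_after_bootstrap(advertised_host, client_in_container):
    """Address a client dials after the broker's metadata response.

    The bootstrap connection may succeed, but every subsequent connection
    goes to the advertised listener, resolved on the client's side."""
    if advertised_host == "localhost" and client_in_container:
        # Inside the logstash container, localhost is the container itself,
        # where no broker is listening -> "Broker may not be available."
        return "127.0.0.1 (the client's own container)"
    return advertised_host

print(address_after_bootstrap("localhost", client_in_container=True))
print(address_after_bootstrap("kafka", client_in_container=True))
```

This is why advertising `kafka:29092` to the Docker network and `localhost:9092` to the host serves both kinds of client.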

      【Discussion】

      • I kept running into this problem while trying to work out a solution; in the end, it seems that using a - in the container name breaks something inside Kafka. I removed the - (and subsequently removed the - from my hostnames), and now everything works.