【Question Title】: Expose Kafka broker to internal and external IPs
【Posted】: 2021-04-15 18:12:55
【Description】:

I have a Kafka cluster spread across 3 VMs, each running one Kafka broker and one ZooKeeper service. I tried to configure the broker listeners as shown below, but after a restart I get the errors that follow. Can you tell me why each broker's log shows a different error, and how I can fix this?

The log on one VM shows this error:

[2021-01-10 17:05:18,051] WARN [Controller id=2, targetBrokerId=0] Connection to node 0 (/172.29.141.57:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-01-10 17:05:18,107] WARN [Controller id=2, targetBrokerId=2] Connection to node 2 (/172.29.141.135:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

Another VM shows this error:

[2021-01-10 17:05:33,579] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
[2021-01-10 17:05:38,579] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)

/opt/kafka/config/server.properties -

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://10.2.4.18:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://172.29.141.96:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.minutes=20

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.20.4.180:2181,10.20.4.181:2181,10.20.4.207:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
message.max.bytes=104857600
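
For reference, a common way to expose a broker on both an internal and an external network is to declare two named listeners and advertise each one on the address reachable from that network. The sketch below covers only the broker whose config is shown above; the listener names INTERNAL/EXTERNAL and the external port 9093 are assumptions, and the addresses are simply the ones that already appear in the posted file:

# Bind each named listener on all interfaces (sketch, not a tested config)
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
# Advertise the address that clients on each network can actually reach;
# 10.2.4.18 and 172.29.141.96 are taken from the posted config for this broker
advertised.listeners=INTERNAL://10.2.4.18:9092,EXTERNAL://172.29.141.96:9093
# Both listeners stay PLAINTEXT here; adjust if TLS/SASL is in use
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
# Brokers replicate with each other over the internal network
inter.broker.listener.name=INTERNAL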

【Question Comments】:

  • I suggest you consider using Ansible or a similar tool to create the cluster
  • broker.id should start from 0: 0 for broker 1, 1 for broker 2, 2 for broker 3. Try setting those.
  • Can you share the server.properties of all the brokers?

Tags: apache-kafka listener broker


【Answer 1】:

In your server.properties file you have configured advertised.listeners with only that one broker's ip-host:port. Can you change it and specify a comma-separated list of all the brokers? For example: advertised.listeners=PLAINTEXT://ip-broker1:port1,PLAINTEXT://ip-broker2:port2,PLAINTEXT://ip-broker3:port3

【Comments】:

  • And give each listener a different port? This is an example of the configuration of each broker
  • Each broker can only advertise itself. The client bootstrap protocol gathers all of the addresses
  • @OneCricketeer you are wrong. See the Kafka documentation (kafka.apache.org/documentation) or this thread on StackOverflow: stackoverflow.com/questions/42998859/…. Thanks for removing your downvote on my answer.
  • @Noap The port you configure for each broker can be the same; it depends on your configuration. If you specify only one and the corresponding broker is unavailable for some reason, that can be a problem.
  • I know the documentation very well. You cannot advertise the addresses of other machines from one server. You can advertise several network interfaces of the same machine, which is not what this answer is doing
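
To make the disagreement above concrete, here is a sketch of the approach where each broker advertises only its own address and clients list every broker for bootstrapping. The broker-to-IP mapping below is an assumption pieced together from the question's logs and config, not a verified layout:

# server.properties on the VM running broker 0 (assumed to be 172.29.141.57)
advertised.listeners=PLAINTEXT://172.29.141.57:9092
# server.properties on the VM running broker 1 (assumed to be 172.29.141.96)
advertised.listeners=PLAINTEXT://172.29.141.96:9092
# server.properties on the VM running broker 2 (assumed to be 172.29.141.135)
advertised.listeners=PLAINTEXT://172.29.141.135:9092

# Client side: list all brokers so the bootstrap step can discover the cluster
# bootstrap.servers=172.29.141.57:9092,172.29.141.96:9092,172.29.141.135:9092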