Kafka installation and deployment:
Prerequisites: JDK and ZooKeeper are installed and start normally.
1. Download the package and unpack it
tar zxvf kafka_2.11-1.0.0.tgz -C …/servers/
2. Edit the Kafka configuration file
/export/servers/kafka_2.11-1.0.0/config/
vim server.properties
broker.id=0 (must be unique on each node)
log.dirs=/export/servers/kafka_2.11-1.0.0/logs/
zookeeper.connect=node01:2181,node02:2181,node03:2181
delete.topic.enable=true
host.name=node01 (must differ on each node)
3. Copy to the other nodes (modify the configuration after copying)
scp -r kafka_2.11-1.0.0 hadoop02:$PWD
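After the copy, broker.id and host.name still have to be changed on each node. A minimal sketch of patching them with sed, assuming the config keys shown above (the file path and the per-node values are illustrative):

```shell
# Patch the per-node settings after copying (values are illustrative).
CONF=server.properties   # path to this node's copied config
ID=1                     # unique broker.id per node: 0, 1, 2, ...
HOST=node02              # this node's hostname
sed -i "s/^broker.id=.*/broker.id=${ID}/" "$CONF"
sed -i "s/^host.name=.*/host.name=${HOST}/" "$CONF"
grep -E '^(broker.id|host.name)=' "$CONF"
```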
4. Start Kafka on all nodes
Start ZooKeeper first:
zkstart.sh (a self-written script)
Start Kafka (run on every node):
hadoop01: nohup ./bin/kafka-server-start.sh config/server.properties &
hadoop02: nohup ./bin/kafka-server-start.sh config/server.properties &
hadoop03: nohup ./bin/kafka-server-start.sh config/server.properties &
Operating the Kafka cluster
Create a topic
bin/kafka-topics.sh --create --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --replication-factor 2 --partitions 3 --topic 18BD3401
List topics
bin/kafka-topics.sh --list --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181
Simulate a producer and produce data
bin/kafka-console-producer.sh --broker-list hadoop01:9092,hadoop02:9092,hadoop03:9092 --topic 18BD3401
--broker-list specifies the servers that store the data
Simulate a consumer and consume data
bin/kafka-console-consumer.sh --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --topic 18BD3401 --from-beginning
--from-beginning means consume from the beginning of the topic
ZooKeeper's role here is to record how far consumption has progressed (the consumer offset: which message has been consumed)
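The --broker-list and --zookeeper arguments above are just comma-joined host:port pairs. A small sketch of building those strings from the host list used in this guide, so they stay consistent across commands:

```shell
# Build the connection strings used by the commands above.
hosts="hadoop01 hadoop02 hadoop03"
brokers=$(printf '%s:9092,' $hosts); brokers=${brokers%,}   # for --broker-list
zk=$(printf '%s:2181,' $hosts); zk=${zk%,}                  # for --zookeeper
echo "$brokers"   # hadoop01:9092,hadoop02:9092,hadoop03:9092
echo "$zk"        # hadoop01:2181,hadoop02:2181,hadoop03:2181
```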
Configure environment variables: cd /etc/profile.d/
Create a Kafka env script: vim kafka.sh
export KAFKA_HOME=/export/servers/kafka_2.11-1.0.0
export PATH=$PATH:$KAFKA_HOME/bin
source /etc/profile (makes the variables take effect)
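A quick way to check that the two exports took effect (same paths as above; note that PATH is appended to, never replaced):

```shell
# Verify the Kafka environment variables.
export KAFKA_HOME=/export/servers/kafka_2.11-1.0.0
export PATH=$PATH:$KAFKA_HOME/bin
case ":$PATH:" in
  *":$KAFKA_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH is missing KAFKA_HOME/bin" ;;
esac
```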
Create a one-key start script: vim kafka-start.sh
#!/bin/sh
for host in hadoop01 hadoop02 hadoop03
do
ssh $host "source /etc/profile; nohup /export/servers/kafka_2.11-1.0.0/bin/kafka-server-start.sh /export/servers/kafka_2.11-1.0.0/config/server.properties >/dev/null 2>&1 &"
echo "$host kafka is running"
done
Save the script, then make it executable:
chmod 777 kafka-start.sh
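Before pointing the script at the cluster, the loop can be sanity-checked by printing the remote command instead of executing it (a dry-run sketch; hostnames and paths as above):

```shell
#!/bin/sh
# Dry run: print what would be executed on each node instead of ssh-ing.
for host in hadoop01 hadoop02 hadoop03
do
  echo "would run on $host:" \
       "nohup /export/servers/kafka_2.11-1.0.0/bin/kafka-server-start.sh" \
       "/export/servers/kafka_2.11-1.0.0/config/server.properties >/dev/null 2>&1 &"
  echo "$host kafka is running"
done
```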
One-key start:
./bin/kafka-start.sh
Check the status (jps on each node should show a Kafka process)
Create a one-key stop script: vim kafka-stop.sh
#! /bin/sh
for host in hadoop01 hadoop02 hadoop03
do
ssh $host "source /etc/profile; /export/servers/kafka_2.11-1.0.0/bin/kafka-server-stop.sh"
echo "$host kafka is stopping"
done
Save the script, then make it executable:
chmod 777 kafka-stop.sh
One-key stop:
./bin/kafka-stop.sh
Check the status again (the Kafka process should disappear from jps)