When the sink writes to HDFS:
Modify flume-env.sh to add the HDFS dependency libraries to the Flume classpath:
  FLUME_CLASSPATH="/root/TDH-Client/hadoop/hadoop/*:/root/TDH-Client/hadoop/hadoop-hdfs/*:/root/TDH-Client/hadoop/hadoop/lib/*"
 
Example:
a1.sources=r1
a1.sinks=k2
a1.channels=c2
 
a1.sources.r1.type=avro
a1.sources.r1.channels=c2
a1.sources.r1.bind=172.20.237.105
a1.sources.r1.port=8888
 
# Events from r1 flow through c2 to k2, which writes them to HDFS
a1.sinks.k2.channel = c2
a1.sinks.k2.type=hdfs
a1.sinks.k2.hdfs.kerberosKeytab=/etc/hdfs1/conf/hdfs.keytab
a1.sinks.k2.hdfs.kerberosPrincipal=hdfs/gz237-105@TDH
# File name prefix on HDFS (the hdfs.path property pointing at the target directory must also be set)
a1.sinks.k2.hdfs.filePrefix=log-%Y-%m-%d
a1.sinks.k2.hdfs.useLocalTimeStamp = true
a1.sinks.k2.hdfs.writeFormat = text
a1.sinks.k2.hdfs.fileType=DataStream
a1.sinks.k2.hdfs.inUseSuffix=.log
#a1.sinks.k2.hdfs.rollInterval = 0
a1.sinks.k2.hdfs.rollInterval = 60
a1.sinks.k2.hdfs.rollSize = 10240
a1.sinks.k2.hdfs.rollCount = 100
#a1.sinks.k2.hdfs.rollCount = 0
a1.sinks.k2.hdfs.idleTimeout=60
 
a1.channels.c2.type=memory
a1.channels.c2.capacity=100000
a1.channels.c2.transactionCapacity=10000
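With the configuration saved to a file (the file name and conf directory below are assumptions, not from the original), the agent can be started and exercised roughly as follows; the host and port match the Avro source defined above:

```shell
# Start agent a1 with the config above (config file name is hypothetical)
flume-ng agent \
  --conf /root/apache-flume/conf \
  --conf-file /root/apache-flume/conf/hdfs-sink.conf \
  --name a1 \
  -Dflume.root.logger=INFO,console

# From another shell, send a test file to the Avro source to
# verify that rolled files appear under the configured HDFS path
flume-ng avro-client \
  --host 172.20.237.105 \
  --port 8888 \
  --filename /tmp/test.log
```

With rollInterval=60, rollSize=10240 and rollCount=100 all non-zero, a file is rolled on whichever threshold is hit first; idleTimeout=60 additionally closes files that receive no events for 60 seconds.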
