[Posted]: 2016-06-08 10:36:29
[Question]:
I have the following problem.
We run fluentd in a high-availability setup: a few thousand forwarders -> aggregators per geographic region -> ES/S3 at the end, using the copy plugin.
We hit an outage (no logs were written for a few days), and since recovery we have been getting a huge number of duplicate records from fluentd in our ES cluster (including duplicates of post-recovery data). Are there any known issues with the @type copy plugin that could cause this kind of behavior?
Our forwarder config:
# TCP input
<source>
  @type forward
  port X
</source>

# Logs forwarding
<match XXX>
  @type forward
  # forward to logs-aggregators
  <server>
    # ...
  </server>
  # use TCP for heartbeat
  heartbeat_type tcp
  # use a longer flush_interval to reduce CPU usage;
  # note that this is a trade-off against latency
  flush_interval 10s
  # use a file buffer to buffer events on disk:
  # max 4096 * 4MB chunks = 16GB of buffer space
  buffer_type file
  buffer_path /var/log/td-agent/buffer/forward
  buffer_chunk_limit 4m
  buffer_queue_limit 4096
  # use multi-threading to send buffered data in parallel
  num_threads 8
  # expire the DNS cache (required for cloud environments such as EC2)
  expire_dns_cache 600
</match>
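For context on the duplicates: the forward output retries undelivered chunks with at-least-once semantics, so a chunk that actually reached the aggregator but whose acknowledgement was lost gets re-sent. A minimal sketch of the relevant knobs (parameter names are from the v0.12-era out_forward/buffer docs; the values shown are illustrative, not recommendations):

```
# Sketch: at-least-once delivery and retry settings for out_forward.
# A chunk whose ACK is lost within ack_response_timeout is re-sent,
# which can surface downstream as duplicate records.
<match XXX>
  @type forward
  require_ack_response true   # ask aggregators to ACK each chunk
  ack_response_timeout 30s    # re-send the chunk if no ACK arrives in time
  retry_wait 1s               # initial back-off between retries
  max_retry_wait 1m           # cap on the exponential back-off
  disable_retry_limit true    # keep retrying instead of dropping chunks
  <server>
    # ...
  </server>
</match>
```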
Our aggregator config:
# TCP input
<source>
  @type forward
  port X
</source>

# rsyslog
<source>
  @type syslog
  port X
  tag rsyslog
</source>

# Logs storage
<match rsyslog.**>
  @type copy
  <store>
    @type elasticsearch
    hosts X
    logstash_format true
    logstash_prefix rsyslog
    logstash_dateformat %Y-%m-%d
    num_threads 8
    utc_index true
    reload_on_failure true
  </store>
</match>

# Bids storage
<match X>
  @type copy
  # push data to the elasticsearch cluster
  <store>
    @type elasticsearch
    hosts X
    # save like logstash would
    logstash_format true
    logstash_prefix jita
    logstash_dateformat %Y-%m-%d
    # 64GB of buffer data
    buffer_chunk_limit 16m
    buffer_queue_limit 4096
    flush_interval 5m
    num_threads 8
    # everything in UTC
    utc_index true
    # quickly remove a dead node from the list of addresses
    reload_on_failure true
  </store>
  # additionally store data in an S3 bucket
  <store>
    @type s3
    aws_key_id X
    aws_sec_key X
    s3_bucket X
    s3_region X
    s3_object_key_format %{path}/%{time_slice}_%{index}.%{file_extension}
    store_as gzip
    num_threads 8
    path logs
    buffer_path /var/log/td-agent/buffer/s3
    time_slice_format %Y-%m-%d/%H
    time_slice_wait 10m
    utc
  </store>
</match>
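If retries cannot be made duplicate-free at the transport level, one mitigation on the ES side is to give each record a stable document `_id`, so a re-delivered record overwrites rather than duplicates. This is a sketch only, assuming each record already carries a unique field (`log_id` here is a hypothetical name), using fluent-plugin-elasticsearch's `id_key` option:

```
# Sketch: idempotent ES writes via a stable document _id.
# Assumes records contain a unique field; "log_id" is a placeholder.
<store>
  @type elasticsearch
  hosts X
  logstash_format true
  id_key log_id   # use this record field as the ES _id, so a re-sent
                  # record upserts the same document instead of duplicating
</store>
```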
[Discussion]:
Tags: elasticsearch high-availability fluentd