[Posted on]: 2020-02-15 06:17:18
[Problem description]:
I want to send haproxy logs to fluentd/Elasticsearch/Kibana using td-agent, but I can't get it working.
I have installed EFK via Docker and it is set up correctly. I have an haproxy producing logs of type haproxy.tcp, like this:
haproxy[27508]: info 127.0.0.1:45111 [12/Jul/2012:15:19:03.258] wss-relay wss-relay/local02_9876 0/0/50015 1277 cD 1/0/0/0/0 0/0
My td-agent.conf looks like this:
<source>
  @type tail
  path /var/log/haproxy.log
  format /^(?<ps>\w+)\[(?<pid>\d+)\]: (?<pri>\w+) (?<c_ip>[\w\.]+):(?<c_port>\d+) \[(?<time>.+)\] (?<f_end>[\w-]+) (?<b_end>[\w-]+)\/(?<b_server>[\w-]+) (?<tw>\d+)\/(?<tc>\d+)\/(?<tt>\d+) (?<bytes>\d+) (?<t_state>[\w-]+) (?<actconn>\d+)\/(?<feconn>\d+)\/(?<beconn>\d+)\/(?<srv_conn>\d+)\/(?<retries>\d+) (?<srv_queue>\d+)\/(?<backend_queue>\d+)$/
  tag haproxy.tcp
  time_format %d/%B/%Y:%H:%M:%S
</source>
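As a sanity check, the `format` regex can be exercised outside of fluentd. A quick sketch in Python against the sample log line (assumption: Python needs the `(?P<name>...)` group syntax where Ruby/Onigmo accepts `(?<name>...)`; the pattern is otherwise unchanged):

```python
import re

# The td-agent format regex, rewritten with Python's (?P<name>...) groups.
PATTERN = re.compile(
    r"^(?P<ps>\w+)\[(?P<pid>\d+)\]: (?P<pri>\w+) (?P<c_ip>[\w.]+):(?P<c_port>\d+) "
    r"\[(?P<time>.+)\] (?P<f_end>[\w-]+) (?P<b_end>[\w-]+)/(?P<b_server>[\w-]+) "
    r"(?P<tw>\d+)/(?P<tc>\d+)/(?P<tt>\d+) (?P<bytes>\d+) (?P<t_state>[\w-]+) "
    r"(?P<actconn>\d+)/(?P<feconn>\d+)/(?P<beconn>\d+)/(?P<srv_conn>\d+)/(?P<retries>\d+) "
    r"(?P<srv_queue>\d+)/(?P<backend_queue>\d+)$"
)

# The sample haproxy log line from the question.
line = ("haproxy[27508]: info 127.0.0.1:45111 [12/Jul/2012:15:19:03.258] "
        "wss-relay wss-relay/local02_9876 0/0/50015 1277 cD 1/0/0/0/0 0/0")

m = PATTERN.match(line)
print(m.group("time"))      # 12/Jul/2012:15:19:03.258
print(m.group("b_server"))  # local02_9876
```

The regex itself matches the sample line, so the parsing step is not the problem here.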
<match haproxy.tcp>
  @type forward
  <server>
    host dockerdes01
    port 24224
  </server>
</match>
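For a forward output like the one above to deliver anything, the receiving fluentd instance (here assumed to be the EFK host dockerdes01) must be listening with a forward input on the same port. A minimal sketch of the receiver side, with a stdout match added purely for verification:

```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match haproxy.tcp>
  @type stdout
</match>
```

Note also that a forward output sends events to the remote host; it does not write them to the local /var/log/td-agent/td-agent.log, so an empty local log on the sender is expected with this match.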
But the logs never arrive in /var/log/td-agent/td-agent.log.
If I use this instead:
<match haproxy.tcp>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type elasticsearch
    logstash_format true
    flush_interval 10s # for testing
    host dockerdes01
    port 9200
  </store>
</match>
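One thing worth checking with this store: in recent versions of fluent-plugin-elasticsearch (fluentd v1 configuration), buffering parameters such as flush_interval belong in a nested buffer section rather than at the top level of the store. A hedged sketch of the equivalent store under that assumption:

```
<store>
  @type elasticsearch
  host dockerdes01
  port 9200
  logstash_format true
  <buffer>
    flush_interval 10s # for testing
  </buffer>
</store>
```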
I see this in /var/log/td-agent/td-agent.log:
2012-07-12 15:19:03.000000000 +0200 haproxy.tcp: {"ps":"haproxy","pid":"27508","pri":"info","c_ip":"127.0.0.1","c_port":"45111","f_end":"wss-relay","b_end":"wss-relay","b_server":"local02_9876","tw":"0","tc":"0","tt":"50015","bytes":"1277","t_state":"cD","actconn":"1","feconn":"0","beconn":"0","srv_conn":"0","retries":"0","srv_queue":"0","backend_queue":"0"}
But it does not reach fluentd...
I need the logs to reach fluentd.
[Comments]:
- Are you running haproxy in Docker? If so, you could simply use Docker's fluentd log driver.
- No, no... I am building the EFK server via Docker, but the logs will come from non-Docker hosts.
Tags: elasticsearch kibana fluentd td-agent