[Posted]: 2021-03-05 14:31:15
[Problem description]:
Sometimes we find that logs are missing from ES, even though we can see them in Kubernetes.
The only clue I could find in the logs points to a problem in the kubernetes filter; the fluent-bit log contains entries like this:
[2020/11/22 09:53:18] [debug] [filter:kubernetes:kubernetes.1] could not merge JSON log as requested
Once we set the kubernetes filter's "Merge_Log" option to "Off", the problem seems to go away (at least no more warnings/errors show up in the fluent-bit log). But of course we then lose a major feature, namely having fields/values beyond the "message" itself.
There are no other error/warning messages in fluent-bit or Elasticsearch besides these, which is why this is my main suspect. The log (with log_level set to info) fills up with:
k --context contexto09 -n logging-system logs -f -l app=fluent-bit --max-log-requests 31 | grep -iv "\[ info\]"
[2020/11/22 19:45:02] [ warn] [engine] failed to flush chunk '1-1606074289.692844263.flb', retry in 25 seconds: task_id=31, input=appstream > output=es.0
[2020/11/22 19:45:02] [ warn] [engine] failed to flush chunk '1-1606074208.938295842.flb', retry in 25 seconds: task_id=67, input=appstream > output=es.0
[2020/11/22 19:45:08] [ warn] [engine] failed to flush chunk '1-1606074298.662911160.flb', retry in 10 seconds: task_id=76, input=appstream > output=es.0
[2020/11/22 19:45:13] [ warn] [engine] failed to flush chunk '1-1606074310.619565119.flb', retry in 9 seconds: task_id=77, input=appstream > output=es.0
[2020/11/22 19:45:13] [ warn] [engine] failed to flush chunk '1-1606073869.655178524.flb', retry in 1164 seconds: task_id=33, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074298.662911160.flb', retry in 282 seconds: task_id=76, input=appstream > output=es.0
[2020/11/22 19:45:21] [ warn] [engine] failed to flush chunk '1-1606073620.626120246.flb', retry in 1974 seconds: task_id=8, input=appstream > output=es.0
[2020/11/22 19:45:21] [ warn] [engine] failed to flush chunk '1-1606074050.441691966.flb', retry in 1191 seconds: task_id=51, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074310.619565119.flb', retry in 79 seconds: task_id=77, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074319.600878876.flb', retry in 6 seconds: task_id=78, input=appstream > output=es.0
[2020/11/22 19:45:09] [ warn] [engine] failed to flush chunk '1-1606073576.849876665.flb', retry in 1091 seconds: task_id=4, input=appstream > output=es.0
[2020/11/22 19:45:12] [ warn] [engine] failed to flush chunk '1-1606074292.958592278.flb', retry in 898 seconds: task_id=141, input=appstream > output=es.0
[2020/11/22 19:45:14] [ warn] [engine] failed to flush chunk '1-1606074302.347198351.flb', retry in 32 seconds: task_id=143, input=appstream > output=es.0
[2020/11/22 19:45:14] [ warn] [engine] failed to flush chunk '1-1606074253.953778140.flb', retry in 933 seconds: task_id=133, input=appstream > output=es.0
[2020/11/22 19:45:16] [ warn] [engine] failed to flush chunk '1-1606074313.923004098.flb', retry in 6 seconds: task_id=144, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074022.933436366.flb', retry in 73 seconds: task_id=89, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074304.968844730.flb', retry in 82 seconds: task_id=145, input=appstream > output=es.0
[2020/11/22 19:45:19] [ warn] [engine] failed to flush chunk '1-1606074316.958207701.flb', retry in 10 seconds: task_id=146, input=appstream > output=es.0
[2020/11/22 19:45:19] [ warn] [engine] failed to flush chunk '1-1606074283.907428020.flb', retry in 207 seconds: task_id=139, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074313.923004098.flb', retry in 49 seconds: task_id=144, input=appstream > output=es.0
[2020/11/22 19:45:24] [ warn] [engine] failed to flush chunk '1-1606074232.931522416.flb', retry in 109 seconds: task_id=129, input=appstream > output=es.0
...
...
[2020/11/22 19:46:31] [ warn] [engine] chunk '1-1606074022.933436366.flb' cannot be retried: task_id=89, input=appstream > output=es.0
If I enable "debug" for log_level, then I do see entries like [2020/11/22 09:53:18] [debug] [filter:kubernetes:kubernetes.1] could not merge JSON log as requested, which I believe is the cause of the chunks failing to flush, because once Merge_Log is turned off there are no more "failed to flush chunk" errors.
My current fluent-bit configuration is this:
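The merge step that fails can be sketched in Python (an illustration of the observed behaviour only, not fluent-bit's actual C implementation; the function name merge_log is made up). Merging succeeds only when the container's "log" field is itself a JSON map; any plain-text line produces the "could not merge JSON log as requested" debug message:

```python
import json

def merge_log(record):
    # Mimic fluent-bit's Merge_Log behaviour: if the 'log' field parses as a
    # JSON map, lift its keys into the record as first-class fields; otherwise
    # leave the record unchanged -- the case that triggers the debug message
    # "could not merge JSON log as requested".
    try:
        parsed = json.loads(record["log"])
    except (ValueError, KeyError):
        return record, False
    if not isinstance(parsed, dict):
        return record, False
    merged = {k: v for k, v in record.items() if k != "log"}
    merged.update(parsed)
    return merged, True

# A structured (JSON) application log merges into first-class fields:
structured, ok = merge_log({"log": '{"level":"info","msg":"started"}', "stream": "stdout"})
# A plain-text line (e.g. one line of a multi-line stack trace) cannot be merged:
plain, ok2 = merge_log({"log": "panic: runtime error", "stream": "stderr"})
```

Note that a failed merge by itself only skips the field extraction; it does not explain a failed flush unless the merged fields downstream cause Elasticsearch to reject the document.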
kind: ConfigMap
metadata:
  labels:
    app: fluent-bit
    app.kubernetes.io/instance: cluster-logging
    chart: fluent-bit-2.8.6
    heritage: Tiller
    release: cluster-logging
  name: config
  namespace: logging-system
apiVersion: v1
data:
  fluent-bit-input.conf: |
    [INPUT]
        Name              tail
        Path              /var/log/containers/*.log
        Exclude_Path      /var/log/containers/cluster-logging-*.log,/var/log/containers/elasticsearch-data-*.log,/var/log/containers/kube-apiserver-*.log
        Parser            docker
        Tag               kube.*
        Refresh_Interval  5
        Mem_Buf_Limit     15MB
        Skip_Long_Lines   On
        Ignore_Older      7d
        DB                /tail-db/tail-containers-state.db
        DB.Sync           Normal
    [INPUT]
        Name               systemd
        Path               /var/log/journal/
        Tag                host.*
        Max_Entries        1000
        Read_From_Tail     true
        Strip_Underscores  true
    [INPUT]
        Name              tail
        Path              /var/log/containers/kube-apiserver-*.log
        Parser            docker
        Tag               kube-apiserver.*
        Refresh_Interval  5
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Ignore_Older      7d
        DB                /tail-db/tail-kube-apiserver-containers-state.db
        DB.Sync           Normal
  fluent-bit-filter.conf: |
    [FILTER]
        Name                 kubernetes
        Match                kube.*
        Kube_Tag_Prefix      kube.var.log.containers.
        Kube_URL             https://kubernetes.default.svc:443
        Kube_CA_File         /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File      /var/run/secrets/kubernetes.io/serviceaccount/token
        K8S-Logging.Parser   On
        K8S-Logging.Exclude  On
        Merge_Log            On
        Keep_Log             Off
        Annotations          Off
    [FILTER]
        Name                 kubernetes
        Match                kube-apiserver.*
        Kube_Tag_Prefix      kube-apiserver.var.log.containers.
        Kube_URL             https://kubernetes.default.svc:443
        Kube_CA_File         /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File      /var/run/secrets/kubernetes.io/serviceaccount/token
        K8S-Logging.Parser   Off
        K8S-Logging.Exclude  Off
        Merge_Log            Off
        Keep_Log             On
        Annotations          Off
  fluent-bit-output.conf: |
    [OUTPUT]
        Name                 es
        Match                logs
        Host                 elasticsearch-data
        Port                 9200
        Logstash_Format      On
        Retry_Limit          5
        Type                 flb_type
        Time_Key             @timestamp
        Replace_Dots         On
        Logstash_Prefix      logs
        Logstash_Prefix_Key  index
        Generate_ID          On
        Buffer_Size          2MB
        Trace_Output         Off
    [OUTPUT]
        Name             es
        Match            sys
        Host             elasticsearch-data
        Port             9200
        Logstash_Format  On
        Retry_Limit      5
        Type             flb_type
        Time_Key         @timestamp
        Replace_Dots     On
        Logstash_Prefix  sys-logs
        Generate_ID      On
        Buffer_Size      2MB
        Trace_Output     Off
    [OUTPUT]
        Name             es
        Match            host.*
        Host             elasticsearch-data
        Port             9200
        Logstash_Format  On
        Retry_Limit      10
        Type             flb_type
        Time_Key         @timestamp
        Replace_Dots     On
        Logstash_Prefix  host-logs
        Generate_ID      On
        Buffer_Size      2MB
        Trace_Output     Off
    [OUTPUT]
        Name             es
        Match            kube-apiserver.*
        Host             elasticsearch-data
        Port             9200
        Logstash_Format  On
        Retry_Limit      10
        Type             _doc
        Time_Key         @timestamp
        Replace_Dots     On
        Logstash_Prefix  kube-apiserver
        Generate_ID      On
        Buffer_Size      2MB
        Trace_Output     Off
  fluent-bit-stream-processor.conf: |
    [STREAM_TASK]
        Name  appstream
        Exec  CREATE STREAM appstream WITH (tag='logs') AS SELECT * from TAG:'kube.*' WHERE NOT (kubernetes['namespace_name']='ambassador-system' OR kubernetes['namespace_name']='argocd' OR kubernetes['namespace_name']='istio-system' OR kubernetes['namespace_name']='kube-system' OR kubernetes['namespace_name']='logging-system' OR kubernetes['namespace_name']='monitoring-system' OR kubernetes['namespace_name']='storage-system') ;
    [STREAM_TASK]
        Name  sysstream
        Exec  CREATE STREAM sysstream WITH (tag='sys') AS SELECT * from TAG:'kube.*' WHERE (kubernetes['namespace_name']='ambassador-system' OR kubernetes['namespace_name']='argocd' OR kubernetes['namespace_name']='istio-system' OR kubernetes['namespace_name']='kube-system' OR kubernetes['namespace_name']='logging-system' OR kubernetes['namespace_name']='monitoring-system' OR kubernetes['namespace_name']='storage-system') ;
  fluent-bit-service.conf: |
    [SERVICE]
        Flush         3
        Daemon        Off
        Log_Level     info
        Parsers_File  parsers.conf
        Streams_File  /fluent-bit/etc/fluent-bit-stream-processor.conf
  fluent-bit.conf: |
    @INCLUDE fluent-bit-service.conf
    @INCLUDE fluent-bit-input.conf
    @INCLUDE fluent-bit-filter.conf
    @INCLUDE fluent-bit-output.conf
  parsers.conf: |
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On
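One avenue worth checking (a sketch, not a confirmed fix for this setup): when Merge_Log lifts application fields to the top level, different pods can emit the same field with conflicting types, and Elasticsearch then rejects those documents, which fluent-bit surfaces only as "failed to flush chunk". Nesting the merged fields under a dedicated key with Merge_Log_Key isolates them from mapping conflicts, and enabling Trace_Error on the es output prints the actual Elasticsearch rejection:

```
[FILTER]
    Name           kubernetes
    Match          kube.*
    Merge_Log      On
    Merge_Log_Key  log_processed
    # other filter options unchanged from the ConfigMap above

[OUTPUT]
    Name         es
    Match        logs
    Trace_Error  On
    # other output options unchanged from the ConfigMap above
```

With Trace_Error On, each failed bulk request logs the ES error response, which should distinguish mapping conflicts from the merge-failure debug messages.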
Merge_Log is Off for "kube-apiserver.*" and has worked fine so far, although the resulting behavior is undesirable (no field mapping is done). Merge_Log is On for "kube.*" and is generating fields in ES as expected... but we are losing logs.
I found the code in the kubernetes filter that produces this error, but I lack the knowledge to understand how to "fix" whatever is triggering the message: https://github.com/fluent/fluent-bit/blob/master/plugins/filter_kubernetes/kubernetes.c#L162
This is getting really frustrating, and I don't know why it happens or, better yet, how to fix it. Any help?
[Comments]:
- Could you try Buffer_Size 0? The default is 32K. And, per the documentation, note that if pod specifications exceed the buffer limit, the API response will be discarded when retrieving metadata, and some kubernetes metadata will fail to be injected into the logs.
- Sure, I'll try that. The thing is, I don't really see any logs pointing to a buffer_size problem, only the ones related to this "could not merge JSON", which I don't think are related.
- Right. In the code, the merge option flows into flb_parser_do, and from line #640 into flb_parser_json_do. And the messages here are manipulated internally in MsgPack format.
- You are hitting an error whose cause is two layers deep. If you haven't already, I suggest you open an issue on the GitHub repository.
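The first comment's Buffer_Size suggestion maps onto the kubernetes filter like this (a sketch of the single changed option; setting it to 0 removes the limit on the buffer used when fetching pod metadata from the API server):

```
[FILTER]
    Name         kubernetes
    Match        kube.*
    Buffer_Size  0
    # all other filter options unchanged from the ConfigMap above
```

Note this only affects metadata retrieval; it is separate from the es output's Buffer_Size, which governs the HTTP response buffer toward Elasticsearch.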
Tags: elasticsearch kubernetes fluentd fluent-bit