I want to apply the concat filter to the logs of a Java application deployed on Kubernetes, so that multiline logs (not only exceptions) are joined into a single log event.
This is the final working version after fixing the problem.
The idea is to add a label to the deployment:
metadata:
  ...
spec:
  ...
  template:
    metadata:
      labels:
        logtype: springboot

Fluentd configuration:
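For context, a minimal sketch of where that label sits in a full Deployment manifest; the name my-app, the image, and the app label are placeholders, not from the original post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        logtype: springboot   # the label the Fluentd rewrite rule keys on
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest   # placeholder image
```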
# Rewrite the tag of events with the kubernetes label kubernetes.labels.logtype=springboot.
#
# It is important to change the tag. If the tag is not modified, the event will be
# re-emitted with the same tag and matched again by the rewrite tag filter -> infinite loop.
<match kubernetes.var.log.containers.**>
  @type rewrite_tag_filter
  @log_level debug
  <rule>
    key $.kubernetes.labels.logtype
    pattern /^springboot$/
    tag springboot.${tag}
  </rule>
  # The rewrite tag filter is an event sink. Events that are not re-emitted by the plugin
  # are gone, so we need a catch-all rule to re-emit any event that is not caught
  # by the spring boot rule.
  <rule>
    key log
    pattern /^.*$/
    # The tag must be changed here too, so that the event skips the rewrite filter
    # after being re-emitted.
    tag unmatched.${tag}
  </rule>
</match>
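To make the re-emission behavior concrete, here is a small Python sketch (not part of the original config) that mimics how the two rules above rewrite a tag. The event shapes and helper names are assumptions for illustration, not fluentd internals:

```python
import re

# Ordered rules mirroring the rewrite_tag_filter block above:
# (value extractor, pattern, new-tag template). The first matching rule wins.
RULES = [
    (lambda e: e.get("kubernetes", {}).get("labels", {}).get("logtype"),
     re.compile(r"^springboot$"), "springboot.{tag}"),
    (lambda e: e.get("log"),
     re.compile(r"^.*$"), "unmatched.{tag}"),
]

def rewrite_tag(tag, event):
    """Return the re-emitted tag, or None (event sink: the event is dropped)."""
    for extract, pattern, template in RULES:
        value = extract(event)
        if value is not None and pattern.search(value):
            return template.format(tag=tag)
    return None  # no rule matched -> the event is gone
```

A springboot-labelled event gets the `springboot.` prefix, any other event with a `log` key falls through to the catch-all, and an event matching neither rule is silently dropped.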
# Handle multiline logs for springboot logs.
<filter springboot.**>
  @type concat
  key log
  separator ""
  multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3} +(ERROR|WARN|INFO|DEBUG|TRACE)/
</filter>

Posted on 2020-09-16 15:04:51
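A quick way to sanity-check the start regexp is outside fluentd. This Python sketch (the sample log lines are made up) groups lines into events the same way concat does, treating any line matching the start pattern as the beginning of a new event:

```python
import re

# Same start-of-event pattern as in the concat filter above
# (dot escaped, flexible spacing before the level keyword).
START = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3} +(ERROR|WARN|INFO|DEBUG|TRACE)")

def concat_multiline(lines, separator=""):
    """Join continuation lines onto the preceding start line."""
    events = []
    for line in lines:
        if START.search(line) or not events:
            events.append(line)
        else:
            events[-1] += separator + line
    return events

sample = [                                          # made-up example lines
    "2020-09-16 15:04:51.123  INFO starting app",
    "2020-09-16 15:04:52.456 ERROR boom",
    "java.lang.RuntimeException: boom",
    "\tat com.example.App.main(App.java:10)",
]
```

Running `concat_multiline(sample)` yields two events: the INFO line, and the ERROR line with the stack trace appended to it.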
<match **/> -- this is either a typo or an invalid fluentd configuration. <match **> will also match the rewritten tags before they reach <match springboot.**>. To avoid this, put the springboot match before the ** match, or narrow the ** match to events coming from kube, e.g. <match kube.**>. Re-tagged events are injected back at the beginning of the pipeline and traverse its sections in the order they appear in the config.
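Following that comment, a hedged sketch of the corrected ordering (the output plugins are placeholders, not from the original post):

```
# Match the re-tagged springboot events first...
<match springboot.**>
  @type stdout   # placeholder output; substitute your real destination
</match>

# ...and only then the catch-all; alternatively narrow it, e.g. <match kube.**>.
<match **>
  @type stdout   # placeholder output
</match>
```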
https://stackoverflow.com/questions/63920372