
fluent-bit unable to parse kubernetes logs

Stack Overflow user
Asked on 2020-07-19 08:58:28
2 answers · 3.9K views · 0 following · 0 votes

I want to forward Kubernetes logs from fluent-bit to elasticsearch through fluentd, but fluent-bit cannot parse the kubernetes logs correctly. I installed fluent-bit and fluentd with Helm charts. I tried both stable/fluent-bit and fluent/fluent-bit and ran into the same problem:

#0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'Could not dynamically add mapping for field [app.kubernetes.io/component]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text].'"

I put the following lines into the fluent-bit values file, as shown here:

  remapMetadataKeysFilter:
    enabled: true
    match: kube.*

    ## List of the respective patterns and replacements for metadata keys replacements
    ## Pattern must satisfy the Lua spec (see https://www.lua.org/pil/20.2.html)
    ## Replacement is a plain symbol to replace with
    replaceMap:
      - pattern: "[/.]"
        replacement: "_"

...but nothing changed; the same error is still listed.

Is there any way to solve this problem?

My values.yaml is here:

# Default values for fluent-bit.

# kind -- DaemonSet or Deployment
kind: DaemonSet

# replicaCount -- Only applicable if kind=Deployment
replicaCount: 1

image:
  repository: fluent/fluent-bit
  pullPolicy: Always
  # tag:

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name:

rbac:
  create: true

podSecurityPolicy:
  create: false

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 2020
  annotations:
    prometheus.io/path: "/api/v1/metrics/prometheus"
    prometheus.io/port: "2020"
    prometheus.io/scrape: "true"

serviceMonitor:
  enabled: true
  namespace: monitoring
  interval: 10s
  scrapeTimeout: 10s
  # selector:
  #  prometheus: my-prometheus

resources:
  {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

podAnnotations: {}

priorityClassName: ""

env: []

envFrom: []

extraPorts: []
#   - port: 5170
#     containerPort: 5170
#     protocol: TCP
#     name: tcp

extraVolumes: []

extraVolumeMounts: []

## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit
config:
  ## https://docs.fluentbit.io/manual/service
  service: |
    [SERVICE]
        Flush 1
        Daemon Off
        Log_Level info
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

    [FILTER]
        Name    lua
        Match   kube.*
        script  /fluent-bit/etc/functions.lua
        call    dedot
        
  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name          forward
        Match         *
        Host          fluentd-in-forward.elastic-system.svc.cluster.local
        Port          24224
        tls           off
        tls.verify    off

  ## https://docs.fluentbit.io/manual/pipeline/parsers
  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
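
Note that the Lua filter in the config above calls dedot from /fluent-bit/etc/functions.lua, but that script is not included in the values file. A typical implementation of such a script (a sketch of the usual pattern, not the asker's actual file) replaces '.' and '/' in label and annotation keys:

```lua
-- functions.lua (assumed to be mounted at /fluent-bit/etc/functions.lua)

-- Replace '.' and '/' in the keys of a table. Two passes, so no new
-- keys are added while iterating with pairs().
local function dedot_keys(map)
  if map == nil then return end
  local renamed = {}
  for k, v in pairs(map) do
    local new_k = string.gsub(k, "[./]", "_")
    if new_k ~= k then
      renamed[new_k] = v
      map[k] = nil    -- clearing an existing key during pairs() is allowed
    end
  end
  for k, v in pairs(renamed) do map[k] = v end
end

-- Entry point referenced by the [FILTER] 'call dedot' directive.
function dedot(tag, timestamp, record)
  local kube = record["kubernetes"]
  if kube == nil then
    return 0, timestamp, record  -- 0 = record not modified
  end
  dedot_keys(kube["labels"])
  dedot_keys(kube["annotations"])
  return 2, timestamp, record    -- 2 = record modified, timestamp kept
end
```

Without this renaming taking effect, keys such as app.kubernetes.io/component reach Elasticsearch unchanged, which is consistent with the mapping error above.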

2 Answers

Stack Overflow user

Answered on 2021-01-08 19:26:24

I ran into the same problem; it was caused by multiple labels being converted to JSON. I renamed the conflicting keys so they match the newer recommended label format:

<filter **>
  @type rename_key
  rename_rule1 ^app$ app.kubernetes.io/name
  rename_rule2 ^chart$ helm.sh/chart
  rename_rule3 ^version$ app.kubernetes.io/version
  rename_rule4 ^component$ app.kubernetes.io/component
  rename_rule5 ^istio$ istio.io/name
</filter>
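
The filter above uses fluentd's rename_key plugin. If you would rather do the same renaming on the fluent-bit side before forwarding, a Lua filter along these lines (hypothetical script and function names, assuming labels live under kubernetes.labels) is a sketch of the equivalent:

```lua
-- rename_labels.lua -- hypothetical fluent-bit counterpart of the
-- rename_key rules above, applied to kubernetes.labels.
local rename_map = {
  app       = "app.kubernetes.io/name",
  chart     = "helm.sh/chart",
  version   = "app.kubernetes.io/version",
  component = "app.kubernetes.io/component",
  istio     = "istio.io/name",
}

function rename_labels(tag, timestamp, record)
  local labels = record["kubernetes"] and record["kubernetes"]["labels"]
  if labels == nil then
    return 0, timestamp, record          -- nothing to rename
  end
  for old, new in pairs(rename_map) do
    if labels[old] ~= nil then
      labels[new] = labels[old]
      labels[old] = nil
    end
  end
  return 2, timestamp, record            -- record modified
end
```

It would be wired up with a [FILTER] block like the existing Lua one: Name lua, Match kube.*, script /fluent-bit/etc/rename_labels.lua, call rename_labels.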
Votes: 1

Stack Overflow user

Answered on 2020-07-20 20:03:45

I think your problem is not in Kubernetes and not in the fluent-bit/fluentd charts; your problem is in Elasticsearch, specifically in the mapping.

In Elasticsearch 7.x, the same field cannot have different types (string, int, etc.).

To work around problems like this, I use "ignore_malformed": true in the index template used for the kubernetes logs.

https://www.elastic.co/guide/en/elasticsearch/reference/current/ignore-malformed.html

Malformed fields are not indexed, but the other fields in the document are processed normally.
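
As an illustration (the template name and index pattern below are placeholders, not from the original answer), the setting can be applied index-wide through a legacy index template, e.g. with PUT _template/kubernetes-logs:

```json
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.mapping.ignore_malformed": true
  }
}
```

The index-level setting index.mapping.ignore_malformed covers every field in matching indices; ignore_malformed can also be set per field in the mapping instead.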

Votes: 0
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/62975255
