
Fluent Bit is not pushing data to the Amazon ES service

Stack Overflow user
Asked on 2020-10-12 20:19:51
2 answers · 1.5K views · 0 followers · Score 2

I installed Fluent Bit using Helm (Fluent Bit version 1.13.11). The Fluent Bit pod is running fine, but it still cannot send data to Amazon ES. The errors and YAML files are below.

Please share any URL that could help me set this up easily.

Errors: I am getting two kinds of errors:

1st:

```
[2020/10/12 12:05:06] [error] [out_es] could not pack/validate JSON response
{"took":0,"errors":true,"items":[{"index":{"_index":"log-test-2020.10.12","_type":"flb_type","_id":null,"status":400,"error":{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"}}},{"index":{"_index":"log-test-2020.10.12","_type":"flb_type","_id":null,"status":400,"error":{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"}}},{"index":{"_index":"log-test-2020.10.12","_type":"flb_type","_id":null,"status":400,"error"{"type":"validat
```

2nd:

```
[2020/10/12 12:05:06] [ warn] [engine] failed to flush chunk '1-1602504304.544264456.flb', retry in 6 seconds: task_id=23, input=tail.0 > output=es.0
[2020/10/12 12:05:06] [ warn] [engine] failed to flush chunk '1-1602504304.79518090.flb', retry in 10 seconds: task_id=21, input=tail.0 > output=es.0
[2020/10/12 12:05:07] [ warn] [engine] failed to flush chunk '1-1602504295.264072662.flb', retry in 81 seconds: task_id=8, input=tail.0 > out
```
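The first error is the root cause: each new daily index (`log-test-2020.10.12`) would add [10] shards, while the cluster already has [991] of its [1000]-shard maximum open. A quick sanity check of that arithmetic, assuming the pre-7.x Elasticsearch defaults of 5 primary shards with 1 replica each (an assumption; the post does not show the index settings):

```python
# Shard arithmetic behind the validation_exception above.
# Assumes 5 primary shards and 1 replica per primary (pre-7.x defaults);
# the actual index settings are not shown in the post.
primaries = 5
replicas = 1
shards_per_daily_index = primaries * (1 + replicas)

open_shards, cluster_limit = 991, 1000
print(shards_per_daily_index)                                # 10
print(open_shards + shards_per_daily_index > cluster_limit)  # True: index creation rejected
```

The second error (failed to flush chunk, endless retries) is just the downstream symptom: every rejected bulk request is re-queued because `Retry_Limit` is set to `False`.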
Fluent Bit config file:

```
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    Parser            docker
    DB                /var/log/flb_kube.db
    Mem_Buf_Limit     30MB
    Skip_Long_Lines   On
    Refresh_Interval  10

[OUTPUT]
    Name            es
    Match           *
    Host            ${FLUENT_ELASTICSEARCH_HOST}
    Port            ${FLUENT_ELASTICSEARCH_PORT}
    Logstash_Format On
    Logstash_Prefix log-test
    Time_Key        @timestamp
    tls             On
    Retry_Limit     False
```
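Given the shard-limit error above, one way to stop each daily `log-test-*` index from consuming 10 shards is an index template that caps the shard count. A minimal sketch in Dev Tools syntax (the template name and values are illustrative, not from the post; this uses the legacy `_template` API that 2020-era Amazon ES supports):

```
PUT _template/log-test
{
  "index_patterns": ["log-test-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```

With 1 primary and 1 replica, each new daily index adds 2 shards instead of 10.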

```
customParsers: |

[PARSER]
    Name   apache
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name   apache2
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name   apache_error
    Format regex
    Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

[PARSER]
    Name   json
    Format json
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On
```
2 Answers

Stack Overflow user
Answered on 2020-10-13 10:30:01

Change the output: keep the INPUT at 10 or lower and balance it against Retry_Limit and Buffer_Max_Size; this will help keep the buffer from filling up with items waiting to be retried.
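In Fluent Bit configuration terms, that tuning maps to the tail input's buffer cap and a finite retry limit on the es output. A sketch with illustrative values (the numbers are assumptions, not from the answer):

```
[INPUT]
    Name              tail
    Buffer_Max_Size   32k    # cap the buffer per monitored file (illustrative value)

[OUTPUT]
    Name            es
    Retry_Limit     5        # finite retries; "False" (as in the question) means retry forever
```

A finite `Retry_Limit` drops chunks the cluster keeps rejecting instead of re-queuing them indefinitely, but it does not fix the shard-limit error itself.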

Score: 0

Stack Overflow user
Answered on 2021-01-21 17:40:36

You have to increase the shard limit in Kibana, because the error log clearly states the maximum number of open shards:

validation_exception: "Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"

Use the following command in the Kibana Dev Tools UI to increase the shard count:

PUT /_cluster/settings { "persistent": { "cluster.max_shards_per_node": <new-limit> } }
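The answer leaves the new limit blank. A sketch of the full request with an illustrative value (2000 is an assumption, not from the original answer), plus a standard check of current shard usage before and after the change:

```
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}

GET _cluster/health?filter_path=active_shards,active_primary_shards
```

Note that raising the limit treats the symptom; reducing shards per index (fewer primaries per daily index, or deleting old `log-test-*` indices) keeps the cluster healthier long term.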

Score: 0
The original content of this page was provided by Stack Overflow; translation supported by Tencent Cloud's specialized IT-domain engine.
Original link:

https://stackoverflow.com/questions/64317769
