I'm building a test application just to learn how to collect Docker logs into an EFK (Elasticsearch 7.10.1 + Fluentd + Kibana 7.10.1) stack.
The stack starts up fine, and Kibana is reachable at http://localhost:5601/.
But fluentd-* is not available as an index pattern, which I assume is related to the following errors in the Kibana logs:
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","elasticsearch","monitoring"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack => connect ECONNREFUSED 172.20.0.3:9200"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["warning","plugins","monitoring","monitoring"],"pid":6,"message":"X-Pack Monitoring Cluster Alerts will not be available: No Living connections"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","elasticsearch","data"],"pid":6,"message":"[ConnectionError]: connect ECONNREFUSED 172.20.0.3:9200"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T23:46:32Z","tags":["error","savedobjects-service"],"pid":6,"message":"Unable to retrieve version information from Elasticsearch nodes."}
Neither 172.20.0.3:9200 nor http://elasticsearch:9200/ is reachable from the browser,
while http://localhost:9200/ is reachable.
What am I missing? I've been at this for a week now and don't know where else to look. Thanks!
docker-compose.yml
version: '2'
services:
  web:
    image: httpd
    ports:
      - "8080:80"
    links:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf
    links:
      - "elasticsearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "network.host=0.0.0.0"
      - "transport.host=127.0.0.1"
    expose:
      - 9200
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.10.1
    environment:
      server.host: 0.0.0.0
      elasticsearch.hosts: http://localhost:9200
    ports:
      - "5601:5601"
Dockerfile
# fluentd/Dockerfile
FROM fluent/fluentd:v1.11.5-debian-1.0
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "4.0.4"]
fluent.conf
# fluentd/conf/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy

  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>

  <store>
    @type stdout
  </store>
</match>
Posted on 2021-01-04 06:46:11
This is perfectly normal and expected behavior.
In Docker, if you want your service (Kibana) to be reachable from localhost, you have to map its port to the host. You do that with:
ports:
  - "5601:5601"
Kibana can then be reached from the browser (on the host) at http://localhost:5601.
Internally, on the other hand, if you want to reach one container from another, you should use the container name (not localhost). So, for example, if you wanted to reach Kibana from the elasticsearch container, you could exec into the elasticsearch container and run:
curl http://kibana:5601
EDIT
An interesting example is your web container, which uses different ports internally and externally. From the host you call:
curl http://localhost:8080
Internally (on that Docker network) you would reach it at:
http://web (you can omit the 80, since it is the default HTTP port).
EDIT2
As stated in the documentation, the default value of elasticsearch.hosts is http://elasticsearch:9200.
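Applying that to the compose file above, the misconfiguration is the elasticsearch.hosts: http://localhost:9200 entry in the kibana service: inside the kibana container, localhost refers to the kibana container itself, not to Elasticsearch. A minimal sketch of the corrected service, using the service name already defined in your compose file:

```yaml
kibana:
  image: kibana:7.10.1
  environment:
    server.host: 0.0.0.0
    # Point Kibana at Elasticsearch by its compose service name;
    # "localhost" would resolve to the kibana container itself.
    elasticsearch.hosts: http://elasticsearch:9200
  ports:
    - "5601:5601"
```

Alternatively, simply removing the elasticsearch.hosts entry should also work, since the default is already http://elasticsearch:9200. Once Kibana can reach Elasticsearch, the fluentd-* indices written by your match block should show up as an index pattern.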
https://stackoverflow.com/questions/65556617