I run the ELK stack in Docker for log management, currently configured with ES 1.7, Logstash 1.5.4, and Kibana 4.1.4. I am now trying to upgrade Elasticsearch to 2.4.0, using Docker and the tar.gz file from https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz. Since ES 2.x does not allow running as the root user, I pass the -Des.insecure.allow.root=true option when starting the Elasticsearch service, but my container does not start. The logs do not mention any problem.
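For context, the flag is passed straight to the Elasticsearch launcher (a minimal sketch; the full Dockerfile and docker run commands appear under Edit 4 below):

# ES 2.x refuses to start as root unless this escape hatch is set:
bin/elasticsearch -Des.insecure.allow.root=true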
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 874 100 874 0 0 874k 0 --:--:-- --:--:-- --:--:-- 853k
//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found
Scheduler@0.0.0 start /opt/log-management/Scheduler
node scheduler-app.js
ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
node app.js
Jobs are registered
[2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576]
[2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ...
Wed, 28 Sep 2016 09:04:24 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb], compressed ordinary object pointers [true]
[2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ...
[2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA
Any clues would be greatly appreciated.
Edit 1: Since //opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found is an error, and the Docker image does not ship the hostname utility, I tried using the uname -n command to get the HOSTNAME in ES. It no longer throws the hostname error, but the problem remains the same: it does not start. Is uname -n the right alternative?
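For reference, the substitution looks roughly like this (a sketch; the exact text of line 134 in bin/elasticsearch is assumed here, but it is the line that exports the node's hostname):

# bin/elasticsearch, around line 134 -- original (assumed):
#   export HOSTNAME=`hostname -s`
# replaced with an equivalent that does not need the hostname binary:
export HOSTNAME=`uname -n`

Note that uname -n returns the kernel's node name, which inside a Docker container normally matches what hostname would print.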
One more doubt: the hostname utility is also missing when I run ES 1.7 (currently up and running), yet it runs without any problem. Very confusing. The logs after switching to uname -n:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1083 100 1083 0 0 1093k 0 --:--:-- --:--:-- --:--:-- 1057k
> ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
> node app.js
> Scheduler@0.0.0 start /opt/log-management/Scheduler
> node scheduler-app.js
Jobs are registered
[2016-09-30 10:10:37,785][INFO ][bootstrap ] max_open_files [1048576]
[2016-09-30 10:10:37,822][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] initializing ...
Fri, 30 Sep 2016 10:10:38 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-30 10:10:38,435][INFO ][plugins ] [Helleyes] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-30 10:10:38,455][INFO ][env ] [Helleyes] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-30 10:10:38,456][INFO ][env ] [Helleyes] heap size [7.8gb], compressed ordinary object pointers [true]
[2016-09-30 10:10:38,483][WARN ][threadpool ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-30 10:10:40,151][INFO ][node ] [Helleyes] initialized
[2016-09-30 10:10:40,152][INFO ][node ] [Helleyes] starting ...
[2016-09-30 10:10:40,278][INFO ][transport ] [Helleyes] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-09-30 10:10:40,283][INFO ][discovery ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ
[2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x329b2977, /172.17.0.15:53388 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6, /172.17.0.15:46846 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,798][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6, /172.17.0.15:46958 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,800][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6, /172.17.0.15:53501 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,302][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f, /172.17.0.15:47057 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,303][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0, /172.17.0.15:53598 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:44,807][INFO ][cluster.service ] [Helleyes] new_master, reason: zen-disco-join (elected_as_master)
[2016-09-30 10:10:44,807][INFO ][http ] [Helleyes] publish_address {10.240.118.68:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-09-30 10:10:44,852][INFO ][node ] [Helleyes] started
[2016-09-30 10:10:44,984][INFO ][gateway ] [Helleyes] recovered [32] indices into cluster_state
The deployment then fails with the following error:
failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "", "failed": true, "item": {"url": "http://10.240.118.68:9200"}, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://10.240.118.68:9200"}
Edit 2: Even after installing the hostname utility and verifying that it works, the container does not start. The logs are the same as in Edit 1.
Edit 3: The container does start, but it cannot be reached at http://nodeip:9200. Of the 3 nodes, only 1 is on 2.4; the other 2 still run 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, curl to localhost:9200 returns the running Elasticsearch instance, but it is unreachable from outside.
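This is expected behaviour in ES 2.x: unlike 1.x, which bound to a non-loopback address by default, 2.x binds to loopback only unless told otherwise, which also explains why the 1.7 containers were reachable with no extra configuration. A minimal sketch of the fix, reusing ${ES_CONFIG_PATH} from the Dockerfile below:

# elasticsearch.yml -- bind to all interfaces so the node is reachable from outside the container
echo 'network.host: 0.0.0.0' >> ${ES_CONFIG_PATH}/elasticsearch.yml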
Edit 4: I tried a basic installation of ES 2.4 on the cluster where ES 1.7 runs fine with the same setup. I had run the ES migration plugin to check whether the cluster could run ES 2.4, and it reported green. The basic installation details are as follows.
Dockerfile
#Pulling SLES12 thin base image
FROM private-registry-1
#Author
MAINTAINER XYZ
# Pre-requisite - Adding repositories
RUN zypper ar private-registry-2
RUN zypper --no-gpg-checks -n refresh
#Install required packages and dependencies
RUN zypper -n in net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1
#Downloading elasticsearch executable
ENV ES_VERSION=2.4.0
ENV ES_DIR="//opt//log-management//elasticsearch"
ENV ES_CONFIG_PATH="${ES_DIR}//config"
ENV ES_REST_PORT=9200
ENV ES_INTERNAL_COM_PORT=9300
WORKDIR /opt/log-management
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR}
#Exposing elasticsearch server container port to the HOST
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}
#Removing binary files which are not needed
RUN zypper -n rm wget
# Removing zypper repos
RUN zypper rr caspiancs_common
#Running elasticsearch executable
WORKDIR ${ES_DIR}
ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true
The image is built with docker build -t es-test .
1) When running with docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test, as suggested in one of the comments, and executing curl localhost:9200 inside the container or on the node running the container, I get the correct response. I still cannot reach the other nodes of the cluster on port 9200.
2) When running with docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test, curl localhost:9200 works inside the container, but on the node it fails with:
curl: (56) Recv failure: Connection reset by peer
I still cannot reach the other nodes of the cluster on port 9200.
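For reference, a quick way to confirm what the container actually bound (a diagnostic sketch; netstat comes from the net-tools package installed in the Dockerfile above):

# Check which addresses ES bound on 9200/9300 inside the running container.
docker exec elasticsearch netstat -nlt | grep -E '9200|9300'
# A listing showing only 127.0.0.1:9200 / [::1]:9200 means no -p mapping can help:
# ES must first bind a non-loopback address (see network.host under Edit 3 above).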
Edit 5: Using the answer to this question, I got ES 2.4 running in all three containers. But ES does not form a cluster across the three containers. The network configuration is: network.host: 0.0.0.0, http.port: 9200, plus the clustering setting below:
#configure elasticsearch.yml for clustering
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
The logs obtained with docker logs elasticsearch are as follows:
[2016-10-06 12:31:28,887][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-06 12:31:29,080][INFO ][node ] [Screech] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-06 12:31:29,081][INFO ][node ] [Screech] initializing ...
[2016-10-06 12:31:29,652][INFO ][plugins ] [Screech] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-06 12:31:29,684][INFO ][env ] [Screech] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.7gb], net total_space [9.7gb], spins? [unknown], types [rootfs]
[2016-10-06 12:31:29,684][INFO ][env ] [Screech] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-06 12:31:29,720][WARN ][threadpool ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead
[2016-10-06 12:31:31,387][INFO ][node ] [Screech] initialized
[2016-10-06 12:31:31,387][INFO ][node ] [Screech] starting ...
[2016-10-06 12:31:31,456][INFO ][transport ] [Screech] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
[2016-10-06 12:31:31,465][INFO ][discovery ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw
[2016-10-06 12:31:34,500][WARN ][discovery.zen ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
Whenever I set network.host to the IP address of the host running the container, the old situation reappears: only one container runs ES 2.4 and the other two run 1.7.
I also just noticed that docker-proxy is listening on 9300, or at least I think it is:
elasticsearch-server/src/main/docker # netstat -nlp | grep 9300
tcp        0      0 :::9300        :::*        LISTEN        6656/docker-proxy
Any clues?
Posted on 2016-10-12 08:01:16
I was able to form the cluster with the following settings:
network.publish_host=CONTAINER_HOST_ADDRESS, i.e. the address of the node the container runs on.
network.bind_host=0.0.0.0
transport.publish_port=9300
transport.publish_host=CONTAINER_HOST_ADDRESS
transport.publish_port is important when you run ES behind a proxy/load balancer such as nginx or haproxy.
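Put together, the startup could look roughly like this (a sketch only: the addresses are the node IPs visible in the question's logs, and the settings are passed as -Des.* system properties, the same mechanism already used for es.insecure.allow.root):

# On the node 10.240.118.68; repeat on each node with its own address.
${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true \
  -Des.network.bind_host=0.0.0.0 \
  -Des.network.publish_host=10.240.118.68 \
  -Des.transport.publish_port=9300 \
  -Des.transport.publish_host=10.240.118.68 \
  -Des.discovery.zen.ping.unicast.hosts=10.240.118.68,10.240.118.69,10.240.118.70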
Posted on 2016-10-04 16:46:23
Try mapping your ports with the -p flag when starting the container.
EXPOSE and --expose do not depend on the host in any way; by default they do not make a port accessible from the host. Given the limitations of the EXPOSE instruction, as a Dockerfile author you should usually treat EXPOSE rules only as hints about which ports the service will listen on. It is up to the operator of the container to specify further networking rules.
Try mapping your ports when executing docker run, e.g. docker run -p 9200:9200 -p 9300:9300 <image>:<tag>
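As a usage sketch (the image name es-test and the host IP are taken from the question; this assumes ES binds a non-loopback address, as discussed above):

docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test
curl http://localhost:9200          # from the docker host
curl http://10.240.118.68:9200      # from another node in the cluster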
https://stackoverflow.com/questions/39768453