I have installed CoreOS on 4 bare-metal machines. I set static IPs for all of them, and I followed the official CoreOS + Kubernetes tutorial.
Since I have a static network configuration and will be running a multi-node etcd cluster, I followed the tutorial below to bootstrap etcd. I ran the script on all the PCs, and with etcdctl member list I can see that all the nodes (PCs) appear in the etcd cluster.
I then moved on to Step 2 (Deploy Node) and followed the instructions step by step.
This is where I ran into a problem:
curl -X PUT -d "value={\"Network\":\"$POD_NETWORK\",\"Backend\":{\"Type\":\"vxlan\"}}" "$ETCD_SERVER/v2/keys/coreos.com/network/config"
I used the default POD_NETWORK (as described in Step 1) and one of the ETCD_ENDPOINTS as ETCD_SERVER. However, when I ran the curl, the connection was established, but the reply I got back was 404 page not found.
I think the problem is with either flannel or etcd (probably etcd). Even if I just curl $ETCD_SERVER, I get a page not found. After a few days I am at a loss and really don't know where the problem could be or how to solve it. If you need more information, please let me know. I would appreciate it if you could point me in the right direction to start tackling this. Thanks.
Edit: I found that if I curl "${ETCD_SERVER}/version", I get the correct reply ({"etcdserver":"2.3.7","etcdcluster":"2.3.0"}), in case that helps.
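For background on the two ports involved here: etcd serves its client API (the /v2/keys space) on port 2379, while port 2380 carries peer traffic; the peer port answers /version but not the key-space API, which is why a /v2/keys request against it 404s. A minimal sketch, using a hypothetical node address, of deriving the client URL from a peer URL:

```shell
# Hypothetical etcd node address; 2379 is the client port, 2380 the peer port.
ETCD_PEER="http://10.0.0.10:2380"

# The /v2/keys API is only served on the client port, so swap 2380 for 2379.
ETCD_SERVER="${ETCD_PEER/2380/2379}"
echo "$ETCD_SERVER"   # http://10.0.0.10:2379
```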
Update: I found out it was not working because I had set ETCD_SERVER to the wrong port (2380 instead of 2379). It works now. However, the flanneld service still does not start and returns an error: Job for flanneld.service failed because the control process exited with error code. Here is the output of journalctl -xe:
-- Subject: Unit flannel-docker-opts.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flannel-docker-opts.service has failed.
--
-- The result is failed.
Mar 08 08:43:26 kubernetes-4 systemd[1]: flannel-docker-opts.service: Unit entered failed state.
Mar 08 08:43:26 kubernetes-4 systemd[1]: flannel-docker-opts.service: Failed with result 'exit-code'.
Mar 08 08:43:30 kubernetes-4 sudo[27594]: kub : TTY=pts/2 ; PWD=/home/kub ; USER=root ; COMMAND=/bin/systemctl start flanneld
Mar 08 08:43:30 kubernetes-4 sudo[27594]: pam_unix(sudo:session): session opened for user root by kub(uid=0)
Mar 08 08:43:30 kubernetes-4 sudo[27594]: pam_systemd(sudo:session): Cannot create session: Already running in a session
Mar 08 08:43:36 kubernetes-4 systemd[1]: flanneld.service: Service hold-off time over, scheduling restart.
Mar 08 08:43:36 kubernetes-4 systemd[1]: Stopped flannel - Network fabric for containers (System Application Container).
-- Subject: Unit flanneld.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has finished shutting down.
Mar 08 08:43:36 kubernetes-4 systemd[1]: Starting flannel - Network fabric for containers (System Application Container)...
-- Subject: Unit flanneld.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has begun starting up.
Mar 08 08:43:36 kubernetes-4 rkt[27608]: rm: unable to resolve UUID from file: open /var/lib/coreos/flannel-wrapper.uuid: no such file or directory
Mar 08 08:43:36 kubernetes-4 rkt[27608]: rm: failed to remove one or more pods
Mar 08 08:43:36 kubernetes-4 flannel-wrapper[27625]: + exec /usr/bin/rkt run --uuid-file-save=/var/lib/coreos/flannel-wrapper.uuid --trust-keys-from-https --mount volume=notify,target=/run/systemd/notify
Mar 08 08:43:36 kubernetes-4 flannel-wrapper[27625]: run: discovery failed
Mar 08 08:43:36 kubernetes-4 systemd[1]: flanneld.service: Main process exited, code=exited, status=254/n/a
Mar 08 08:43:36 kubernetes-4 rkt[27652]: stop: unable to resolve UUID from file: open /var/lib/coreos/flannel-wrapper.uuid: no such file or directory
Mar 08 08:43:36 kubernetes-4 systemd[1]: Failed to start flannel - Network fabric for containers (System Application Container).
-- Subject: Unit flanneld.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has failed.
--
-- The result is failed.
Mar 08 08:43:36 kubernetes-4 systemd[1]: flanneld.service: Unit entered failed state.
Mar 08 08:43:36 kubernetes-4 systemd[1]: flanneld.service: Failed with result 'exit-code'.
Mar 08 08:43:36 kubernetes-4 sudo[27594]: pam_unix(sudo:session): session closed for user root
Mar 08 08:43:36 kubernetes-4 systemd[1]: Starting flannel docker export service - Network fabric for containers (System Application Container)...
-- Subject: Unit flannel-docker-opts.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flannel-docker-opts.service has begun starting up.
Mar 08 08:43:36 kubernetes-4 rkt[27659]: rm: unable to resolve UUID from file: open /var/lib/coreos/flannel-wrapper2.uuid: no such file or directory
Mar 08 08:43:36 kubernetes-4 rkt[27659]: rm: failed to remove one or more pods
Mar 08 08:43:36 kubernetes-4 flannel-wrapper[27674]: + exec /usr/bin/rkt run --uuid-file-save=/var/lib/coreos/flannel-wrapper2.uuid --trust-keys-from-https --net=host --volume run-flannel,kind=host,source
Mar 08 08:43:38 kubernetes-4 flannel-wrapper[27674]: run: discovery failed
Mar 08 08:43:38 kubernetes-4 systemd[1]: flannel-docker-opts.service: Main process exited, code=exited, status=254/n/a
Mar 08 08:43:38 kubernetes-4 systemd[1]: Failed to start flannel docker export service - Network fabric for containers (System Application Container).
-- Subject: Unit flannel-docker-opts.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flannel-docker-opts.service has failed.
--
-- The result is failed.
Mar 08 08:43:38 kubernetes-4 systemd[1]: flannel-docker-opts.service: Unit entered failed state.
Mar 08 08:43:38 kubernetes-4 systemd[1]: flannel-docker-opts.service: Failed with result 'exit-code'.
Mar 08 08:43:39 kubernetes-4 sudo[27708]: kub : TTY=pts/2 ; PWD=/home/kub ; USER=root ; COMMAND=/bin/journalctl -xe
Mar 08 08:43:39 kubernetes-4 sudo[27708]: pam_unix(sudo:session): session opened for user root by kub(uid=0)
Mar 08 08:43:39 kubernetes-4 sudo[27708]: pam_systemd(sudo:session): Cannot create session: Already running in a session

Update 2: (adding the systemctl output for the flanneld and flannel-docker-opts services)

Output of systemctl cat flannel-docker-opts:
# /usr/lib/systemd/system/flannel-docker-opts.service
[Unit]
Description=flannel docker export service - Network fabric for containers (System Application Container)
Documentation=https://github.com/coreos/flannel
PartOf=flanneld.service
Before=docker.service
[Service]
Type=oneshot
TimeoutStartSec=60
Environment="FLANNEL_IMAGE_TAG=v0.6.2"
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/lib/coreos/flannel-wrapper2.uuid"
Environment="FLANNEL_IMAGE_ARGS=--exec=/opt/bin/mk-docker-opts.sh"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/lib/coreos/flannel-wrapper2.uuid
ExecStart=/usr/lib/coreos/flannel-wrapper -d /run/flannel/flannel_docker_opts.env -i
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/lib/coreos/flannel-wrapper2.uuid
[Install]
WantedBy=multi-user.target

Output of systemctl cat flanneld:
# /usr/lib/systemd/system/flanneld.service
[Unit]
Description=flannel - Network fabric for containers (System Application Container)
Documentation=https://github.com/coreos/flannel
After=etcd.service etcd2.service etcd-member.service
Before=docker.service flannel-docker-opts.service
Requires=flannel-docker-opts.service
[Service]
Type=notify
Restart=always
RestartSec=10s
LimitNOFILE=40000
LimitNPROC=1048576
Environment="FLANNEL_IMAGE_TAG=v0.6.2"
Environment="FLANNEL_OPTS=--ip-masq=true"
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/lib/coreos/flannel-wrapper.uuid"
EnvironmentFile=-/run/flannel/options.env
ExecStartPre=/sbin/modprobe ip_tables
ExecStartPre=/usr/bin/mkdir --parents /var/lib/coreos /run/flannel
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/lib/coreos/flannel-wrapper.uuid
ExecStart=/usr/lib/coreos/flannel-wrapper $FLANNEL_OPTS
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/lib/coreos/flannel-wrapper.uuid
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env

Update 3: I got a new error with journalctl -xer, in case it helps:
Mar 09 08:39:15 kubernetes-4 locksmithd[1147]: Unlocking old locks failed: [etcd.service etcd2.service] are inactive. Retrying in 5m0s.
Mar 09 08:39:15 kubernetes-4 locksmithd[1147]: [etcd.service etcd2.service] are inactive

Posted on 2017-03-08 01:43:01
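For reference, the flanneld drop-in above symlinks /etc/flannel/options.env into /run/flannel, so that file must exist and hold flanneld's environment. In the CoreOS guide it looks roughly like this (both values below are placeholders for the node's own advertise IP and etcd client endpoints):

```shell
# /etc/flannel/options.env (placeholder values)
FLANNELD_IFACE=10.0.0.10
FLANNELD_ETCD_ENDPOINTS=http://10.0.0.10:2379,http://10.0.0.11:2379
```

Note that the endpoints must point at the etcd client port, 2379.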
If you are running etcd in proxy mode on one of the etcd nodes, run this from any one of them. Try checking the cluster health using the etcdctl binary included in CoreOS:

etcdctl cluster-health

This should show something like:
member ce2a822cea30bfca is healthy: got healthy result from http://10.129.69.201:2379
cluster is healthy

Also try:
etcdctl set /coreos.com/network/config '{"Network":"$POD_NETWORK", "Backend": {"Type": "vxlan"}}'
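One caveat about the etcdctl command above: inside single quotes the shell does not expand $POD_NETWORK, so etcd would store the literal string "$POD_NETWORK" instead of the network. A minimal sketch of building the value with double quotes instead (10.2.0.0/16 is just the guide's default placeholder):

```shell
POD_NETWORK="10.2.0.0/16"   # placeholder; the guide's default pod network

# Double quotes let the shell expand $POD_NETWORK; the JSON's own quotes are escaped.
CONFIG="{\"Network\":\"$POD_NETWORK\",\"Backend\":{\"Type\":\"vxlan\"}}"
echo "$CONFIG"   # {"Network":"10.2.0.0/16","Backend":{"Type":"vxlan"}}
```

The value could then be stored with: etcdctl set /coreos.com/network/config "$CONFIG"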