
calico-kube-controllers and calico-node are not ready (CrashLoopBackOff)

Asked by a Stack Overflow user on 2021-09-15 09:09:18
1 answer · 5.9K views · 0 followers · score 3

I deployed a brand-new k8s cluster using kubespray and everything went fine, except that none of the calico-related pods are ready. After many hours of debugging I could not find out why the calico pods keep crashing. I even disabled/stopped the firewalld service entirely, but nothing changed.

Another important point is that the output of calicoctl node status is unstable and shows something different on every invocation:

Calico process is not running.

Calico process is running.

None of the BGP backend processes (BIRD or GoBGP) are running.

Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+---------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |  INFO   |
+----------------+-------------------+-------+----------+---------+
| 192.168.231.42 | node-to-node mesh | start | 06:23:41 | Passive |
+----------------+-------------------+-------+----------+---------+

IPv6 BGP status
No IPv6 peers found.

Another message that shows up frequently in the logs is:

bird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory
bird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory

I also tried changing IP_AUTODETECTION_METHOD to each of the following, but nothing changed:

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=can-reach=www.google.com
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=can-reach=8.8.8.8
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth1
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth.*
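Since this cluster is managed by kubespray, the same setting can also be made persistent through the inventory instead of patching the DaemonSet by hand. A sketch, assuming the variable name from kubespray's sample group_vars:

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Assumed kubespray variable; it is rendered into the calico-node
# DaemonSet as the IP_AUTODETECTION_METHOD environment variable.
calico_ip_auto_method: "interface=eth1"
```

This survives re-runs of the kubespray playbook, whereas `kubectl set env` changes are overwritten on the next deploy.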

Expected Behavior

All calico-related pods, daemonsets, deployments and replicasets should be in the ready state.

Current Behavior

All calico-related pods, daemonsets, deployments and replicasets are in a not-ready state.

Possible Solution

Nothing yet; I would appreciate any help on how to debug/overcome this issue.

Steps to Reproduce (for bugs)

It is the latest version of kubespray, with the context & environment below.

git reflog

7e4b176 HEAD@{0}: clone: from https://github.com/kubernetes-sigs/kubespray.git

Context

I am trying to deploy a k8s cluster with one master node and one worker node. Also note that the servers participating in this cluster sit in an almost air-gapped/offline environment with limited access to the global internet; the offline deployment of the cluster with kubespray itself completed successfully, but then I ran into this issue.

Your Environment

cat inventory/mycluster/hosts.yaml

all:
  hosts:
    node1:
      ansible_host: 192.168.231.41
      ansible_port: 32244
      ip: 192.168.231.41
      access_ip: 192.168.231.41
    node2:
      ansible_host: 192.168.231.42
      ansible_port: 32244
      ip: 192.168.231.42
      access_ip: 192.168.231.42
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
calicoctl version

Client Version:    v3.19.2
Git commit:        6f3d4900
Cluster Version:   v3.19.2
Cluster Type:      kubespray,bgp,kubeadm,kdd,k8s
cat /etc/centos-release

CentOS Linux release 7.9.2009 (Core)
uname -r

3.10.0-1160.42.2.el7.x86_64
kubectl version

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
kubectl get nodes -o wide

NAME    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node1   Ready    control-plane,master   19h   v1.21.4   192.168.231.41   <none>        CentOS Linux 7 (Core)   3.10.0-1160.42.2.el7.x86_64   docker://20.10.8
node2   Ready    <none>                 19h   v1.21.4   192.168.231.42   <none>        CentOS Linux 7 (Core)   3.10.0-1160.42.2.el7.x86_64   docker://20.10.8
kubectl get all --all-namespaces -o wide

NAMESPACE     NAME                                           READY   STATUS             RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
kube-system   pod/calico-kube-controllers-8575b76f66-57zw4   0/1     CrashLoopBackOff   327        19h   192.168.231.42   node2   <none>           <none>
kube-system   pod/calico-node-4hkzb                          0/1     Running            245        14h   192.168.231.42   node2   <none>           <none>
kube-system   pod/calico-node-hznhc                          0/1     Running            245        14h   192.168.231.41   node1   <none>           <none>
kube-system   pod/coredns-8474476ff8-b6lqz                   1/1     Running            0          19h   10.233.96.1      node2   <none>           <none>
kube-system   pod/coredns-8474476ff8-gdkml                   1/1     Running            0          19h   10.233.90.1      node1   <none>           <none>
kube-system   pod/dns-autoscaler-7df78bfcfb-xnn4r            1/1     Running            0          19h   10.233.90.2      node1   <none>           <none>
kube-system   pod/kube-apiserver-node1                       1/1     Running            0          19h   192.168.231.41   node1   <none>           <none>
kube-system   pod/kube-controller-manager-node1              1/1     Running            0          19h   192.168.231.41   node1   <none>           <none>
kube-system   pod/kube-proxy-dmw22                           1/1     Running            0          19h   192.168.231.41   node1   <none>           <none>
kube-system   pod/kube-proxy-wzpnv                           1/1     Running            0          19h   192.168.231.42   node2   <none>           <none>
kube-system   pod/kube-scheduler-node1                       1/1     Running            0          19h   192.168.231.41   node1   <none>           <none>
kube-system   pod/nginx-proxy-node2                          1/1     Running            0          19h   192.168.231.42   node2   <none>           <none>
kube-system   pod/nodelocaldns-6h5q2                         1/1     Running            0          19h   192.168.231.42   node2   <none>           <none>
kube-system   pod/nodelocaldns-7fwbd                         1/1     Running            0          19h   192.168.231.41   node1   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.233.0.1   <none>        443/TCP                  19h   <none>
kube-system   service/coredns      ClusterIP   10.233.0.3   <none>        53/UDP,53/TCP,9153/TCP   19h   k8s-app=kube-dns

NAMESPACE     NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS    IMAGES                                     SELECTOR
kube-system   daemonset.apps/calico-node    2         2         0       2            0           kubernetes.io/os=linux   19h   calico-node   quay.io/calico/node:v3.19.2                k8s-app=calico-node
kube-system   daemonset.apps/kube-proxy     2         2         2       2            2           kubernetes.io/os=linux   19h   kube-proxy    k8s.gcr.io/kube-proxy:v1.21.4              k8s-app=kube-proxy
kube-system   daemonset.apps/nodelocaldns   2         2         2       2            2           kubernetes.io/os=linux   19h   node-cache    k8s.gcr.io/dns/k8s-dns-node-cache:1.17.1   k8s-app=nodelocaldns

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                                       SELECTOR
kube-system   deployment.apps/calico-kube-controllers   0/1     1            0           19h   calico-kube-controllers   quay.io/calico/kube-controllers:v3.19.2                      k8s-app=calico-kube-controllers
kube-system   deployment.apps/coredns                   2/2     2            2           19h   coredns                   k8s.gcr.io/coredns/coredns:v1.8.0                            k8s-app=kube-dns
kube-system   deployment.apps/dns-autoscaler            1/1     1            1           19h   autoscaler                k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3   k8s-app=dns-autoscaler

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES                                                       SELECTOR
kube-system   replicaset.apps/calico-kube-controllers-8575b76f66   1         1         0       19h   calico-kube-controllers   quay.io/calico/kube-controllers:v3.19.2                      k8s-app=calico-kube-controllers,pod-template-hash=8575b76f66
kube-system   replicaset.apps/coredns-8474476ff8                   2         2         2       19h   coredns                   k8s.gcr.io/coredns/coredns:v1.8.0                            k8s-app=kube-dns,pod-template-hash=8474476ff8
kube-system   replicaset.apps/dns-autoscaler-7df78bfcfb            1         1         1       19h   autoscaler                k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3   k8s-app=dns-autoscaler,pod-template-hash=7df78bfcfb

1 Answer

Answered by a Stack Overflow user (accepted) on 2021-09-21 18:18:52

Fortunately, increasing the livenessProbe & readinessProbe timeout from 1 to 60 solved the problem.

kubectl edit -n kube-system daemonset.apps/calico-node
kubectl edit -n kube-system deployment.apps/calico-kube-controllers
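Concretely, the edit amounts to raising timeoutSeconds on both probes of the calico-node container. A sketch of the relevant fragment; the probe exec commands shown are the ones typical for Calico v3.19 manifests and may differ in your cluster, so treat them as assumptions:

```yaml
# Fragment of the calico-node container spec after the edit.
# The only change relevant to the fix is timeoutSeconds: 1 -> 60
# on both probes.
livenessProbe:
  exec:
    command: ["/bin/calico-node", "-felix-live", "-bird-live"]
  timeoutSeconds: 60
readinessProbe:
  exec:
    command: ["/bin/calico-node", "-felix-ready", "-bird-ready"]
  timeoutSeconds: 60
```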

https://github.com/projectcalico/calico/issues/4935

Score: 3

Original content from Stack Overflow: https://stackoverflow.com/questions/69190171