
Kubernetes can't curl a pod on another node

Stack Overflow user
Asked on 2022-01-06 14:48:20
Answers: 2 · Views: 979 · Followers: 0 · Votes: 0

I'm new to K8s and just used this guide to install it:

https://kubernetes.io/docs/setup/production-environment/tools/kubespray/

k8s - v1.20.2

calico - 3.16.6

pod-cidr = 10.214.0.0/16

service-cidr = 10.215.0.1/16

kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE              NOMINATED NODE   READINESS GATES
web-79d88c97d6-lk2sq    1/1     Running   0          31h   10.214.39.1     dev1   <none>           <none>
web2-5b669f8984-gwfx2   1/1     Running   0          8h    10.214.40.130   dev2   <none>           <none>

I can curl the pod on dev2, but I can't curl the one on dev1.

Is this normal? If not, how can I fix it?

curl -v 10.214.39.1:8080
* About to connect() to 10.214.39.1 port 8080 (#0)
*   Trying 10.214.39.1...
* Connection timed out
* Failed connect to 10.214.39.1:8080; Connection timed out
* Closing connection 0
traceroute 10.214.39.1
traceroute to 10.214.39.1 (10.214.39.1), 30 hops max, 60 byte packets
 1  gateway (10.61.8.1)  7.790 ms  8.173 ms  8.394 ms
 2  10.6.41.10 (10.6.41.10)  0.613 ms 10.6.41.6 (10.6.41.6)  0.505 ms 10.6.41.8 (10.6.41.8)  0.612 ms
 3  10.6.32.150 (10.6.32.150)  0.740 ms 10.6.9.169 (10.6.9.169)  0.962 ms 10.6.32.158 (10.6.32.158)  1.233 ms
 4  10.6.32.178 (10.6.32.178)  0.350 ms 10.6.32.232 (10.6.32.232)  8.849 ms 10.6.32.236 (10.6.32.236)  8.850 ms
 5  172.24.0.172 (172.24.0.172)
 6  * * *
 7  * * *
 8  * * *
 9  * * *

curl from the web-79d88c97d6-lk2sq pod to the web2-5b669f8984-gwfx2 pod

kube-system pod status

kube-system   calico-kube-controllers-847f479bc5-k52fm      1/1     Running            0          31h
kube-system   calico-node-8x5h8                             1/1     Running            0          31h
kube-system   calico-node-9lhbn                             1/1     Running            0          31h
kube-system   calico-node-bh9f8                             1/1     Running            0          31h
kube-system   calico-node-dpxjk                             1/1     Running            0          31h
kube-system   calico-node-fl5gj                             1/1     Running            0          31h
kube-system   calico-node-g2qzl                             1/1     Running            0          31h
kube-system   calico-node-g9x82                             1/1     Running            0          31h
kube-system   calico-node-pl292                             1/1     Running            0          31h
kube-system   calico-node-t7kwd                             1/1     Running            0          31h
kube-system   calico-node-v5s8r                             1/1     Running            0          31h
kube-system   coredns-847f564ccf-l4qk9                      0/1     CrashLoopBackOff   441        31h
kube-system   dns-autoscaler-b5c786945-pxbwh                0/1     Running            0          31h
kube-system   kube-apiserver-dev10               1/1     Running            0          31h
kube-system   kube-apiserver-dev8                1/1     Running            0          31h
kube-system   kube-apiserver-dev9                1/1     Running            0          31h
kube-system   kube-controller-manager-dev10      1/1     Running            0          168m
kube-system   kube-controller-manager-dev8       1/1     Running            0          167m
kube-system   kube-controller-manager-dev9       1/1     Running            0          166m
kube-system   kube-proxy-89cbl                              1/1     Running            0          31h
kube-system   kube-proxy-8d6tm                              1/1     Running            0          31h
kube-system   kube-proxy-8qnm9                              1/1     Running            0          31h
kube-system   kube-proxy-bblxx                              1/1     Running            0          31h
kube-system   kube-proxy-fshgk                              1/1     Running            0          31h
kube-system   kube-proxy-j5s6f                              1/1     Running            0          31h
kube-system   kube-proxy-m8jts                              1/1     Running            0          31h
kube-system   kube-proxy-r9wqh                              1/1     Running            0          31h
kube-system   kube-proxy-t4r7g                              1/1     Running            0          31h
kube-system   kube-proxy-wxs4m                              1/1     Running            0          31h
kube-system   kube-scheduler-dev10               1/1     Running            0          168m
kube-system   kube-scheduler-dev8                1/1     Running            0          167m
kube-system   kube-scheduler-dev9                1/1     Running            0          166m
kube-system   kubernetes-dashboard-5f87bdc77d-4j8rp         1/1     Running            408        31h
kube-system   kubernetes-metrics-scraper-64db6db887-k48b6   0/1     CrashLoopBackOff   417        31h
kube-system   nginx-proxy-dev1                   1/1     Running            0          31h
kube-system   nginx-proxy-dev2                   1/1     Running            0          31h
kube-system   nginx-proxy-dev3                   1/1     Running            0          31h
kube-system   nginx-proxy-dev4                   1/1     Running            0          31h
kube-system   nginx-proxy-dev5                   1/1     Running            0          31h
kube-system   nginx-proxy-dev6                   1/1     Running            0          31h
kube-system   nginx-proxy-dev7                   1/1     Running            0          31h
kube-system   nodelocaldns-5plj9                            1/1     Running            0          31h
kube-system   nodelocaldns-hb6lm                            1/1     Running            0          31h
kube-system   nodelocaldns-j4wtf                            1/1     Running            0          31h
kube-system   nodelocaldns-lkj2g                            1/1     Running            0          31h
kube-system   nodelocaldns-pp4xd                            1/1     Running            0          31h
kube-system   nodelocaldns-ttvwq                            1/1     Running            0          31h
kube-system   nodelocaldns-vgnwv                            1/1     Running            0          31h
kube-system   nodelocaldns-vpfjm                            1/1     Running            0          31h
kube-system   nodelocaldns-xfzw4                            1/1     Running            0          31h
kube-system   nodelocaldns-zrnfl                            1/1     Running            0          31h
kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
kubectl get pods -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
test-curl-deployment-5cdddb7854-24n82   1/1     Running   0          17s   10.214.223.135   dev4   <none>           <none>
test-curl-deployment-5cdddb7854-jpzwp   1/1     Running   0          17s   10.214.102.6     dev3   <none>           <none>
test-curl-deployment-5cdddb7854-qtr8k   1/1     Running   0          17s   10.214.229.6     dev2   <none>           <none>
web-79d88c97d6-lk2sq                    1/1     Running   0          2d    10.214.39.1      dev5   <none>           <none>
web2-5b669f8984-gwfx2                   1/1     Running   0          25h   10.214.40.130    dev7   <none>           <none>


$ kubectl exec test-curl-deployment-5cdddb7854-24n82 -- curl -m 3 test-curl-deployment-5cdddb7854-jpzwp
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Resolving timed out after 3000 milliseconds
command terminated with exit code 28
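The exit-28 timeout above happens while *resolving* the pod name, so on its own it doesn't prove the network path is broken. One way to separate DNS failures from routing failures is to curl a pod IP directly, bypassing DNS (a sketch; the pod names and IPs below are taken from the `kubectl get pods -o wide` output above and must be adjusted for your cluster):

```shell
# Curl by IP (bypasses DNS) from the pod on dev4 to the pod on dev3
kubectl exec test-curl-deployment-5cdddb7854-24n82 -- curl -m 3 10.214.102.6

# Compare with curl by name, which requires working coredns
kubectl exec test-curl-deployment-5cdddb7854-24n82 -- curl -m 3 test-curl-deployment-5cdddb7854-jpzwp
```

If the curl by IP also times out, the underlying problem is inter-node pod routing rather than DNS.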

coredns pod logs and describe output

Name:                 coredns-847f564ccf-l4qk9
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 dev10/xxx.xxx.xxx.xxx
Start Time:           Wed, 05 Jan 2022 15:29:19 +0900
Labels:               k8s-app=kube-dns
                      pod-template-hash=847f564ccf
Annotations:          cni.projectcalico.org/podIP: 10.214.122.1/32
                      cni.projectcalico.org/podIPs: 10.214.122.1/32
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.214.122.1
IPs:
  IP:           10.214.122.1
Controlled By:  ReplicaSet/coredns-847f564ccf
Containers:
  coredns:
    Container ID:  docker://91492fa6c6a42b2606fc8ae5edc5c5f188bb4a8175e3a6c5185f8a9dbe30cc5d
    Image:         k8s.gcr.io/coredns:1.7.0
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 07 Jan 2022 16:58:24 +0900
      Finished:     Fri, 07 Jan 2022 17:00:13 +0900
    Ready:          False
    Restart Count:  687
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=0s timeout=5s period=10s #success=1 #failure=10
    Readiness:    http-get http://:8181/ready delay=0s timeout=5s period=10s #success=1 #failure=10
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-hr9ht (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-hr9ht:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-hr9ht
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/control-plane:NoSchedule
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From     Message
  ----     ------     ----                    ----     -------
  Warning  BackOff    8m4s (x3447 over 20h)   kubelet  Back-off restarting failed container
  Warning  Unhealthy  3m19s (x2865 over 20h)  kubelet  Liveness probe failed: Get "http://10.214.122.1:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

0107 07:59:56.981861       1 trace.go:116] Trace[817455089]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2022-01-07 07:59:26.981164884 +0000 UTC m=+62.651400285) (total time: 30.000649425s):
Trace[817455089]: [30.000649425s] [30.000649425s] END
E0107 07:59:56.981887       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.215.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.215.0.1:443: i/o timeout
I0107 07:59:57.665238       1 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2022-01-07 07:59:27.66473076 +0000 UTC m=+63.334966123) (total time: 30.0004649s):
Trace[1006933274]: [30.0004649s] [30.0004649s] END
E0107 07:59:57.665261       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.215.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.215.0.1:443: i/o timeout
I0107 07:59:57.935685       1 trace.go:116] Trace[629431445]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2022-01-07 07:59:27.935177307 +0000 UTC m=+63.605412681) (total time: 30.000464179s):
Trace[629431445]: [30.000464179s] [30.000464179s] END
E0107 07:59:57.935704       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.215.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.215.0.1:443: i/o timeout
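The log lines above show coredns timing out while dialing the API service ClusterIP (10.215.0.1:443), which usually points at broken pod-to-service or node-to-node routing rather than a bug in coredns itself. A few checks that are commonly run on the nodes in this situation (a sketch under the assumption of a kube-proxy iptables setup; `calicoctl` may not be installed in your environment):

```shell
# On each node: are there routes to the other nodes' pod CIDR blocks?
ip route | grep 10.214

# Calico node-to-node mesh / BGP peer status (requires calicoctl)
sudo calicoctl node status

# Is kube-proxy programming rules for the API service VIP? (iptables mode)
sudo iptables-save | grep 10.215.0.1
```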

2 Answers

Stack Overflow user

Answered on 2022-01-07 07:54:41

As the curl error message shows, you have a DNS problem, not a network one! curl failed to resolve the pod name, and if you check the pod status in the kube-system namespace, coredns is not running and is in CrashLoopBackOff. Check your coredns logs.
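To follow this suggestion, the logs and events of the crashing coredns pod can be pulled like this (the pod name is taken from the question's output; `--previous` shows the logs of the last crashed container):

```shell
kubectl -n kube-system logs coredns-847f564ccf-l4qk9 --previous
kubectl -n kube-system describe pod coredns-847f564ccf-l4qk9
```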

Votes: 1

Stack Overflow user

Answered on 2022-01-06 14:54:40

Hi, it's not normal to be able to curl a pod on one node but not on another; the IPs being different is just a difference between nodes. You can curl via kubectl exec and check whether you can reach the service:

kubectl exec <pod_name> -- curl -m 3 <service_name_the_pod_connects_to>

Maybe you can share the results.

Votes: 0
The original content of this page was provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/70608950
