Kubernetes cluster network issue
Server Fault user
Asked on 2020-05-10 19:12:51
1 answer · 1.9K views · 0 followers · 0 votes

I have a Kubernetes cluster on a VPN, consisting of one master and three worker nodes, and all of them show Ready status. It was built with kubeadm and Flannel. The VPN network range is 192.168.1.0/16.

$ kubectl get nodes -o wide

NAME        STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8-master   Ready    master   144d   v1.17.0   192.168.1.132   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://18.9.7
k8-n1       Ready    <none>   144d   v1.17.0   192.168.1.133   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://18.9.7
k8-n2       Ready    <none>   144d   v1.17.0   192.168.1.134   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://18.9.7
k8-n3       Ready    <none>   144d   v1.17.0   192.168.1.135   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://18.9.7

I can reach the nodes:

$ ping 192.168.1.133

PING 192.168.1.133 (192.168.1.133) 56(84) bytes of data.
64 bytes from 192.168.1.133: icmp_seq=1 ttl=64 time=0.219 ms
64 bytes from 192.168.1.133: icmp_seq=2 ttl=64 time=0.246 ms
64 bytes from 192.168.1.133: icmp_seq=3 ttl=64 time=0.199 ms
64 bytes from 192.168.1.133: icmp_seq=4 ttl=64 time=0.209 ms
^X^C
--- 192.168.1.133 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3071ms
rtt min/avg/max/mdev = 0.199/0.218/0.246/0.020 ms

$ ping 192.168.1.134

PING 192.168.1.134 (192.168.1.134) 56(84) bytes of data.
64 bytes from 192.168.1.134: icmp_seq=1 ttl=64 time=0.288 ms
64 bytes from 192.168.1.134: icmp_seq=2 ttl=64 time=0.272 ms
64 bytes from 192.168.1.134: icmp_seq=3 ttl=64 time=0.268 ms
^C
--- 192.168.1.134 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2032ms
rtt min/avg/max/mdev = 0.268/0.276/0.288/0.008 ms

$ ping 192.168.1.135

PING 192.168.1.135 (192.168.1.135) 56(84) bytes of data.
64 bytes from 192.168.1.135: icmp_seq=1 ttl=64 time=0.278 ms
64 bytes from 192.168.1.135: icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from 192.168.1.135: icmp_seq=3 ttl=64 time=0.181 ms
^C
--- 192.168.1.135 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms

Then I set up an nginx deployment with 2 pods to test whether it works.

NAME                                READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-574b87c764-2gz8t   1/1     Running   0          25m     192.168.2.12   k8-n2   <none>           <none>
nginx-deployment-574b87c764-rst8x   1/1     Running   0          25m     192.168.1.17   k8-n1   <none>           <none>

$ kubectl get svc

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP        3d17h
nginx-deployment   NodePort    10.96.211.211   <none>        80:31577/TCP   13s

But I cannot connect to it:

$ curl k8-n1:31577
curl: (7) Failed to connect to k8-n1 port 31577: Connection refused
$ curl k8-n2:31577
curl: (7) Failed to connect to k8-n2 port 31577: Connection refused
$ curl k8-n3:31577
curl: (7) Failed to connect to k8-n3 port 31577: Connection refused
$ curl 10.96.211.211:80
curl: (7) Failed to connect to 10.96.211.211 port 80: Connection refused
$ curl 192.168.1.17:80
curl: (7) Failed to connect to 192.168.1.17 port 80: No route to host
$ curl 192.168.1.17:31577
curl: (7) Failed to connect to 192.168.1.17 port 31577: No route to host
$ curl 192.168.1.133:31577
curl: (7) Failed to connect to 192.168.1.133 port 31577: Connection refused
$ curl 192.168.1.133:6443
curl: (7) Failed to connect to 192.168.1.133 port 6443: Connection refused

What I changed: the cluster was initialized with

sudo kubeadm init --pod-network-cidr=192.168.1.0/16 --apiserver-advertise-address=192.168.1.132

and I changed the Flannel network in flannel.yaml to 192.168.1.0/16:

kubectl edit cm -n kube-system kube-flannel-cfg
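For reference, the part of the kube-flannel-cfg ConfigMap being edited here is its `net-conf.json` key. A sketch of what that data would contain after the change (field names follow Flannel's documented net-conf format; the backend type is an assumption, since it is not shown in the question):

```python
import json

# Sketch of the "net-conf.json" data key in the kube-flannel-cfg ConfigMap
# after the change above. "vxlan" is Flannel's default backend (assumed here).
net_conf = {
    "Network": "192.168.1.0/16",   # the pod network CIDR -- here it overlaps the VPN
    "Backend": {"Type": "vxlan"},
}
print(json.dumps(net_conf, indent=2))
```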

The coredns pod description after the restart:

  Normal   Scheduled  109s                default-scheduler  Successfully assigned kube-system/coredns-6955765f44-vwqgm to k8-n1
  Normal   Pulled     106s                kubelet, k8-n1     Container image "k8s.gcr.io/coredns:1.6.5" already present on machine
  Normal   Created    105s                kubelet, k8-n1     Created container coredns
  Normal   Started    105s                kubelet, k8-n1     Started container coredns
  Warning  Unhealthy  3s (x11 over 103s)  kubelet, k8-n1     Readiness probe failed: Get http://192.168.1.19:8181/ready: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  1s (x5 over 41s)    kubelet, k8-n1     Liveness probe failed: Get http://192.168.1.19:8080/health: dial tcp 192.168.1.19:8080: connect: no route to host
  Normal   Killing    1s                  kubelet, k8-n1     Container coredns failed liveness probe, will be restarted

I would appreciate any help, or requests for more information.

1 Answer

Server Fault user

Accepted answer

Answered on 2020-05-12 08:11:24

While checking this issue, I noticed that the cluster was initialized with CIDR 192.168.1.0/16, which overlaps with the node IP addresses, and that in turn caused the problems with the coredns pods.
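The overlap can be verified with Python's standard `ipaddress` module (a small sketch; note that 192.168.1.0/16 has host bits set, so `strict=False` is needed and the network normalizes to 192.168.0.0/16, which contains every node IP):

```python
import ipaddress

# The pod CIDR the cluster was initialized with. strict=False is required
# because 192.168.1.0/16 has host bits set -- it normalizes to 192.168.0.0/16.
pod_cidr = ipaddress.ip_network("192.168.1.0/16", strict=False)

node_ips = ["192.168.1.132", "192.168.1.133", "192.168.1.134", "192.168.1.135"]

# Every node IP falls inside the pod CIDR, so routes for the pod network
# shadow the real node addresses.
for ip in node_ips:
    print(ip, "inside pod CIDR:", ipaddress.ip_address(ip) in pod_cidr)
```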

Initializing the cluster with a new, non-overlapping CIDR solved the problem.
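The answer does not name the replacement CIDR, but a candidate pod network can be checked against the VPN range before running `kubeadm init --pod-network-cidr=...`. As a sketch, Flannel's documented default of 10.244.0.0/16 would not overlap:

```python
import ipaddress

vpn = ipaddress.ip_network("192.168.0.0/16")  # covers the 192.168.1.x node IPs

# Candidates: the original (overlapping) choice, and Flannel's default
# pod network as a hypothetical replacement.
for candidate in ["192.168.1.0/16", "10.244.0.0/16"]:
    net = ipaddress.ip_network(candidate, strict=False)
    print(candidate, "overlaps VPN" if net.overlaps(vpn) else "safe to use")
```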

Votes: 1

The original page content was provided by Server Fault; translation was supported by Tencent Cloud's IT-domain translation engine.
Original link:

https://serverfault.com/questions/1016550
