
kube-apiserver exits when joining a control plane node to an HA cluster

Asked by a Server Fault user on 2021-02-27 03:46:47
1 answer · 1.2K views · 0 followers · 1 vote
  1. Ubuntu 16.04.5
  2. All operations were performed as the root user.
  3. The software versions are as follows:
ii  kubeadm                             1.20.4-00                                  amd64        Kubernetes Cluster Bootstrapping Tool
ii  kubectl                             1.20.4-00                                  amd64        Kubernetes Command Line Tool
ii  kubelet                             1.20.4-00                                  amd64        Kubernetes Node Agent
ii  kubernetes-cni                      0.8.7-00                                   amd64        Kubernetes CNI
ii  containerd.io                       1.2.6-3                                    amd64        An open and reliable container runtime

I am creating a highly available cluster with kubeadm, following this guide:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

Assume the control plane nodes are: A (devops1ar01n01, 172.16.80.3), B (devops1ar01n02, 172.16.80.4), and C (devops1ar01n03, 172.16.80.5).

I followed the link below to set up a load balancer with kube-vip. On both A and B, I created the files /etc/kube-vip/config.yaml and /etc/kubernetes/manifests/kube-vip.yaml:

https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#kube-vip

I ran the following command to initialize the first control plane node A (kube-vip listens on port 16443):

kubeadm init --control-plane-endpoint kube-vip:16443 --upload-certs

The output was as follows:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kube-vip:16443 --token pa0bw2.gn6bqnyjlmh0o7xn \
    --discovery-token-ca-cert-hash sha256:fd7bb5afe0307b8694c218f07c1f3adbf270254d1f37bcec75ed292b7223cc8b \
    --control-plane --certificate-key 44995042d21c87ea5ed4f62443fe665cbfd7c71397485ca9f06d1483548c1883

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kube-vip:16443 --token pa0bw2.gn6bqnyjlmh0o7xn \
    --discovery-token-ca-cert-hash sha256:fd7bb5afe0307b8694c218f07c1f3adbf270254d1f37bcec75ed292b7223cc8b 

Then, following the instructions in that output, I ran:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

Then I installed the Weave CNI plugin on node A by running:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=1.20.4"

Then I checked the pods:

root@devops1ar01n01:~# kubectl get pod -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-s2bh5                  0/1     Running   0          36m
coredns-74ff55c5b-stm2l                  1/1     Running   0          36m
etcd-devops1ar01n01                      1/1     Running   0          36m
kube-apiserver-devops1ar01n01            1/1     Running   0          36m
kube-controller-manager-devops1ar01n01   1/1     Running   0          36m
kube-proxy-bnzpd                         1/1     Running   0          36m
kube-scheduler-devops1ar01n01            1/1     Running   0          36m
kube-vip-devops1ar01n01                  1/1     Running   0          36m
weave-net-8fmf9                          2/2     Running   0          14s
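
As a sanity check at this point, the API server can also be probed through the VIP itself; this sketch assumes kubeadm's default RBAC, which leaves /livez and /version readable without authentication:

# Probe the API server via the kube-vip address; -k skips certificate verification.
curl -k https://kube-vip:16443/livez
# A healthy API server answers: ok
curl -k https://kube-vip:16443/version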

Everything had gone smoothly up to that point, but when node B joined the cluster, problems appeared. I ran the command below on node B (with --v=8 for verbose output, and --ignore-preflight-errors="DirAvailable--etc-kubernetes-manifests" because of the existing file /etc/kubernetes/manifests/kube-vip.yaml):

kubeadm join kube-vip:16443 --token pa0bw2.gn6bqnyjlmh0o7xn \
   --discovery-token-ca-cert-hash sha256:fd7bb5afe0307b8694c218f07c1f3adbf270254d1f37bcec75ed292b7223cc8b \
   --control-plane \
   --certificate-key 44995042d21c87ea5ed4f62443fe665cbfd7c71397485ca9f06d1483548c1883 \
   --ignore-preflight-errors="DirAvailable--etc-kubernetes-manifests" \
   --v=8

Then the following messages appeared (172.16.80.4 is node B's IP):

[kubelet-check] Initial timeout of 40s passed.
I0226 11:12:44.981744   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded
I0226 11:12:52.890038   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded
I0226 11:13:03.915500   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded
I0226 11:13:19.337921   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded

I found that no etcd container had been created on node B:

root@devops1ar01n02:~# docker ps | grep -v pause
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
b0188090c251        ae5eb22e4a9d           "kube-apiserver --ad…"   20 seconds ago      Up 19 seconds                           k8s_kube-apiserver_kube-apiserver-devops1ar01n02_kube-system_50f7004f736896db78d143e1d44bfbb5_4
c8c93ad432e9        7f92d556d4ff           "/usr/bin/launch.sh"     3 minutes ago       Up 3 minutes                            k8s_weave-npc_weave-net-lthlv_kube-system_eac41670-a119-4085-99e7-7cf08185deb7_0
e9946edd52ba        5f8cb769bd73           "kube-scheduler --au…"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-devops1ar01n02_kube-system_90280dfce8bf44f46a3e41b6c4a9f551_0
4ffe61f78cf5        a00c858e350e           "/kube-vip start -c …"   3 minutes ago       Up 3 minutes                            k8s_kube-vip_kube-vip-devops1ar01n02_kube-system_dd4d116d758ec63efaf78fc4112d63e6_0
7019afbd1497        0a41a1414c53           "kube-controller-man…"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-devops1ar01n02_kube-system_9375c16649f1cd963bdbc6e4125314fc_0
32035400ad9d        c29e6c583067           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-v5nwr_kube-system_d0c1ce98-b066-4349-89e0-6113b8fa1708_0
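
To see what the join did to the etcd cluster at this stage, the member list can be read straight from node A's etcd container; a sketch assuming the standard kubeadm certificate paths and the docker CLI:

# Run etcdctl inside node A's etcd container and list the cluster members.
ETCD=$(docker ps -qf name=k8s_etcd)
docker exec "$ETCD" etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list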

When I went back to node A to check the pods, the kubectl command hung and then timed out:

root@devops1ar01n01:~# kubectl get pod -n kube-system
Unable to connect to the server: net/http: TLS handshake timeout

I checked the containers and found that kube-apiserver kept restarting:

root@devops1ar01n01:~# docker ps -a | grep -v pause | grep kube-apiserver
d5f85d72d2dc        ae5eb22e4a9d           "kube-apiserver --ad…"   8 seconds ago        Up 8 seconds                                        k8s_kube-apiserver_kube-apiserver-devops1ar01n01_kube-system_860fed4d3a137b129887eb23f07be1b6_6
a3bd40ba5552        ae5eb22e4a9d           "kube-apiserver --ad…"   About a minute ago   Exited (1) About a minute ago                       k8s_kube-apiserver_kube-apiserver-devops1ar01n01_kube-system_860fed4d3a137b129887eb23f07be1b6_5

I ran docker logs on node A against the exited kube-apiserver container; the output was as follows:

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0226 06:22:58.186450       1 server.go:632] external host was not specified, using 172.16.80.3
I0226 06:22:58.187781       1 server.go:182] Version: v1.20.4
I0226 06:22:59.289621       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0226 06:22:59.294839       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0226 06:22:59.294939       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0226 06:22:59.299670       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0226 06:22:59.299772       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0226 06:22:59.318732       1 client.go:360] parsed scheme: "endpoint"
I0226 06:22:59.318985       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379   0 }]
I0226 06:23:00.290273       1 client.go:360] parsed scheme: "endpoint"
I0226 06:23:00.290377       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379   0 }]
Error: context deadline exceeded
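
"Error: context deadline exceeded" here means the API server gave up waiting for etcd at https://127.0.0.1:2379. etcd can be probed directly with the API server's etcd client certificate; a sketch assuming the standard kubeadm paths on node A:

# Query the local etcd member's health endpoint with the apiserver's etcd client cert.
curl --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
     --key /etc/kubernetes/pki/apiserver-etcd-client.key \
     https://127.0.0.1:2379/health
# A healthy member answers something like: {"health":"true"}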

The output of systemctl status kubelet is as follows:

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2021-02-26 10:31:19 CST; 3h 58min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 19602 (kubelet)
    Tasks: 17
   Memory: 67.0M
      CPU: 20min 47.367s
   CGroup: /system.slice/kubelet.service
           └─19602 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --p

Feb 26 14:29:34 devops1ar01n01 kubelet[19602]: Trace[336098419]: [10.001217223s] [10.001217223s] END
Feb 26 14:29:34 devops1ar01n01 kubelet[19602]: E0226 14:29:34.695789   19602 reflector.go:138] object-"kube-system"/"kube-proxy-token-x5lsv": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "
Feb 26 14:29:36 devops1ar01n01 kubelet[19602]: E0226 14:29:36.056753   19602 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kube-vip:16443/apis/coordination.k8s.
Feb 26 14:29:40 devops1ar01n01 kubelet[19602]: E0226 14:29:40.068403   19602 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"
Feb 26 14:29:40 devops1ar01n01 kubelet[19602]: E0226 14:29:40.068717   19602 event.go:218] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k
Feb 26 14:29:40 devops1ar01n01 kubelet[19602]: E0226 14:29:40.069126   19602 kubelet_node_status.go:470] Error updating node status, will retry: error getting node "devops1ar01n01": Get "https://kube-vi
Feb 26 14:29:44 devops1ar01n01 kubelet[19602]: I0226 14:29:44.012843   19602 scope.go:111] [topologymanager] RemoveContainer - Container ID: 3ab3a85e785ae39f705ca30aad59a52ec17d12e9f31cbf920695d7af9cf93
Feb 26 14:29:44 devops1ar01n01 kubelet[19602]: E0226 14:29:44.031056   19602 pod_workers.go:191] Error syncing pod 860fed4d3a137b129887eb23f07be1b6 ("kube-apiserver-devops1ar01n01_kube-system(860fed4d3a
Feb 26 14:29:50 devops1ar01n01 kubelet[19602]: E0226 14:29:50.070777   19602 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"
Feb 26 14:29:50 devops1ar01n01 kubelet[19602]: E0226 14:29:50.072142   19602 kubelet_node_status.go:470] Error updating node status, will retry: error getting node "devops1ar01n01": Get "https://kube-vi

I tried kubeadm reset on both nodes and repeated the steps, but this always happens. How can I debug this further?

1 Answer

Answered by a Server Fault user (accepted)

Posted on 2021-02-28 10:57:44

I finally solved this problem myself. It was probably caused by a wrong health check URL in the load balancer.

I followed this guide, https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#kube-vip, to create a load balancer for my HA cluster, and chose the kube-vip option, which seemed the most convenient.

If you use kube-vip, its health check URL is presumably /healthz, as described in the example configuration file of the second option (https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#keepalived-and-haproxy). However, I found that the health check URL in /etc/kubernetes/manifests/kube-apiserver.yaml is /livez.
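
The two endpoints can be compared directly against a control plane node; with kubeadm's default RBAC both are readable without authentication (the API server itself listens on 6443 behind the VIP):

# Check how each health endpoint responds on node A; -k skips certificate verification.
curl -k https://172.16.80.3:6443/healthz
curl -k https://172.16.80.3:6443/livez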

I think the cause of the failure should be that kube-vip's health check was wrong, leading to the continuous restarting of kube-apiserver.

To verify this, and to be able to edit the load balancer's health check URL, I switched to the second option for creating the load balancer (https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#keepalived-and-haproxy) and changed /healthz to /livez in /etc/haproxy/haproxy.cfg.
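
For reference, the relevant backend section of /etc/haproxy/haproxy.cfg after that change would look roughly like this; a sketch based on the example in the linked guide, with this cluster's node names and addresses filled in:

backend apiserver
    option httpchk GET /livez
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
        server devops1ar01n01 172.16.80.3:6443 check
        server devops1ar01n02 172.16.80.4:6443 check
        server devops1ar01n03 172.16.80.5:6443 check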

Then I ran kubeadm init and kubeadm join following the guide, and it worked fine.

1 vote
The original page content was provided by Server Fault.
Original link: https://serverfault.com/questions/1055263