
kubeadm初始化错误,主节点env centos7.9

Stack Overflow user
Asked on 2021-03-08 17:34:47
1 answer · 211 views · 0 followers · 0 votes

Environment

```
k8s v1.20
cri containerd
system centos7.9
```

```shell
kubeadm init --service-cidr=172.30.0.0/16 --pod-network-cidr=10.128.0.0/14 --cri-socket=/run/containerd/containerd.sock --image-repository=registry.aliyuncs.com
```

Error log

```
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
    - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
```
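On CentOS 7 with containerd, one common cause of the kubelet being reported unhealthy during `kubeadm init` is a cgroup-driver mismatch between the kubelet and containerd. A minimal check, assuming the default file locations (`/etc/containerd/config.toml` and the kubelet config that kubeadm writes to `/var/lib/kubelet/config.yaml`), might look like:

```shell
# Which cgroup driver is containerd configured with?
# (SystemdCgroup = true means the systemd driver; absent/false means cgroupfs)
grep -i 'SystemdCgroup' /etc/containerd/config.toml

# Which driver does the kubelet expect?
grep -i 'cgroupDriver' /var/lib/kubelet/config.yaml

# Look for cgroup-related errors in the kubelet's own log
systemctl status kubelet
journalctl -u kubelet --no-pager | grep -i cgroup | tail -n 20
```

If the two sides disagree, aligning both on the systemd driver is the usual fix on a systemd host.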

journalctl -xeu kubelet

```
Mar 09 22:26:51 master1 kubelet[3179]: I0309 22:26:51.252245    3179 kubelet_node_status.go:71] Attempting to register node master1
Mar 09 22:26:51 master1 kubelet[3179]: E0309 22:26:51.252670    3179 kubelet_node_status.go:93] Unable to register node "master1" with API server: Post "https://192.168.10.1:6443/api/v1/nodes": dial tcp 192.168
Mar 09 22:26:54 master1 kubelet[3179]: E0309 22:26:54.374695    3179 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: nodes have not yet been read at least once,
Mar 09 22:26:54 master1 kubelet[3179]: E0309 22:26:54.394116    3179 kubelet.go:2184] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: c
Mar 09 22:26:55 master1 kubelet[3179]: I0309 22:26:55.230293    3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:55 master1 kubelet[3179]: E0309 22:26:55.247912    3179 kubelet.go:2264] nodes have not yet been read at least once, cannot construct node object
Mar 09 22:26:55 master1 kubelet[3179]: I0309 22:26:55.348020    3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:55 master1 kubelet[3179]: E0309 22:26:55.717348    3179 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.10.1:644
Mar 09 22:26:56 master1 kubelet[3179]: I0309 22:26:56.231699    3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.134170    3179 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"master1.166ab2b6
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.134249    3179 event.go:218] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"master1.166ab2b6b
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.136426    3179 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"master1.166ab2b6
Mar 09 22:26:57 master1 kubelet[3179]: I0309 22:26:57.230722    3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.844788    3179 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.10.1:6443/apis/coordination.k8s.io/v1/namespa
```
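The log lines above all point the same way: the kubelet cannot reach the API server at `192.168.10.1:6443`, which usually means the kube-apiserver static pod never came up or keeps crashing. A quick way to confirm, assuming containerd's default socket path:

```shell
# Is anything actually listening on the API server port?
ss -tlnp | grep 6443

# Did the kube-apiserver container start at all, and with what state?
crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube-apiserver

# If it exists but exited, read its logs using the container ID from above:
# crictl --runtime-endpoint /run/containerd/containerd.sock logs <CONTAINERID>
```

If no kube-apiserver container appears at all, the problem is on the container-runtime side (image pull or containerd itself) rather than in the apiserver's own configuration.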

1 Answer

Stack Overflow user

Answered on 2021-03-12 10:40:56

The kubelet was unhealthy and I could not fix it in place.

I created a new virtual machine, followed the same steps, and it worked.

In addition, I found the answer by changing the containerd version:

  • from containerd 1.4.1
  • to containerd 1.4.4

After that it worked; only the old virtual machine had the problem. I suspect it may be a bug.
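The version change described above can be checked and applied roughly as follows. This is a sketch assuming containerd was installed from a yum repository as the `containerd.io` package; adjust the install command if it came from somewhere else:

```shell
# Confirm the currently installed containerd version
containerd --version

# Upgrade to the 1.4.4 release from the configured repository
yum install -y containerd.io-1.4.4
systemctl restart containerd

# Clean up the failed attempt, then re-run the same init command
kubeadm reset -f
kubeadm init --service-cidr=172.30.0.0/16 --pod-network-cidr=10.128.0.0/14 \
  --cri-socket=/run/containerd/containerd.sock --image-repository=registry.aliyuncs.com
```

`kubeadm reset` is needed before retrying, since the earlier failed init leaves certificates and manifests under `/etc/kubernetes` that would otherwise conflict.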

Votes: 1
Original content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/66527358