How to fix the Kubernetes taint node.kubernetes.io/not-ready: NoSchedule
Stack Overflow user
Asked 2021-05-10 16:25:27
1 answer · 839 views · 0 following · 1 vote

I am trying to run a local development Kubernetes cluster in Docker Desktop, but the node keeps getting the following taint: node.kubernetes.io/not-ready:NoSchedule

Removing the taint manually, e.g. kubectl taint nodes --all node.kubernetes.io/not-ready-, does not help, because it comes right back.
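As a sanity check, the taint and the node's Ready condition can be inspected together (a sketch; `docker-desktop` is the node name from the output below). The node lifecycle controller re-applies `node.kubernetes.io/not-ready` whenever the kubelet reports Ready=False, which is why deleting the taint by hand never sticks:

```shell
# The reappearing taint is a symptom, not the problem: the node controller
# re-adds it as long as the Ready condition is False. Check both together:
kubectl get node docker-desktop \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
kubectl get node docker-desktop -o jsonpath='{.spec.taints}{"\n"}'
```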

The output of kubectl describe node is:

Name:               docker-desktop
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=docker-desktop
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 07 May 2021 11:00:31 +0100
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  docker-desktop
  AcquireTime:     <unset>
  RenewTime:       Fri, 07 May 2021 16:14:19 +0100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 11:00:31 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 11:00:31 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 11:00:31 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 07 May 2021 16:14:05 +0100   Fri, 07 May 2021 16:11:05 +0100   KubeletNotReady              PLEG is not healthy: pleg was last seen active 6m22.485400578s ago; threshold is 3m0s
Addresses:
  InternalIP:  192.168.65.4
  Hostname:    docker-desktop
Capacity:
  cpu:                5
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             18954344Ki
  pods:               110
Allocatable:
  cpu:                5
  ephemeral-storage:  56453061334
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             18851944Ki
  pods:               110
System Info:
  Machine ID:                 f4da8f67-6e48-47f4-94f7-0a827259b845
  System UUID:                d07e4b6a-0000-0000-b65f-2398524d39c2
  Boot ID:                    431e1681-fdef-43db-9924-cb019ff53848
  Kernel Version:             5.10.25-linuxkit
  OS Image:                   Docker Desktop
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.6
  Kubelet Version:            v1.19.7
  Kube-Proxy Version:         v1.19.7
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                1160m (23%)      1260m (25%)
  memory             1301775360 (6%)  13288969216 (68%)
  ephemeral-storage  0 (0%)           0 (0%)
  hugepages-1Gi      0 (0%)           0 (0%)
  hugepages-2Mi      0 (0%)           0 (0%)
Events:
  Type    Reason                   Age                  From        Message
  ----    ------                   ----                 ----        -------
  Normal  NodeNotReady             86m (x2 over 90m)    kubelet     Node docker-desktop status is now: NodeNotReady
  Normal  NodeReady                85m (x3 over 5h13m)  kubelet     Node docker-desktop status is now: NodeReady
  Normal  Starting                 61m                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  61m                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  61m (x8 over 61m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    61m (x7 over 61m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     61m (x8 over 61m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  Starting                 60m                  kube-proxy  Starting kube-proxy.
  Normal  NodeNotReady             55m                  kubelet     Node docker-desktop status is now: NodeNotReady
  Normal  Starting                 49m                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  49m                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     49m (x7 over 49m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  49m (x8 over 49m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    49m (x8 over 49m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  Starting                 48m                  kube-proxy  Starting kube-proxy.
  Normal  NodeNotReady             41m                  kubelet     Node docker-desktop status is now: NodeNotReady
  Normal  Starting                 37m                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  37m                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     37m (x7 over 37m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    37m (x8 over 37m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientMemory  37m (x8 over 37m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  Starting                 36m                  kube-proxy  Starting kube-proxy.
  Normal  NodeAllocatableEnforced  21m                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 21m                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  21m (x8 over 21m)    kubelet     Node docker-desktop status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     21m (x7 over 21m)    kubelet     Node docker-desktop status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)    kubelet     Node docker-desktop status is now: NodeHasNoDiskPressure
  Normal  Starting                 21m                  kube-proxy  Starting kube-proxy.
  Normal  NodeReady                6m16s (x2 over 14m)  kubelet     Node docker-desktop status is now: NodeReady
  Normal  NodeNotReady             3m16s (x3 over 15m)  kubelet     Node docker-desktop status is now: NodeNotReady

The allocated resources matter here, since the cluster is also fairly large:

CPUs: 5
Memory: 18GB
SWAP: 1GB
Disk Image: 60GB

Machine: Mac Core i7, 32 GB RAM, 512 GB SSD

I can see the problem is with PLEG, but I need to understand what is causing the Pod Lifecycle Event Generator to fail: whether enough resources are allocated to the node, or something else.
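Since the Ready condition's reason is "PLEG is not healthy", the usual suspect is the container runtime responding too slowly for the kubelet's periodic relist (the 3m0s threshold in the message above). A quick hedged check on Docker Desktop, assuming the local `docker` CLI talks to the same daemon the kubelet uses:

```shell
# PLEG relists all containers through the runtime; if the Docker daemon is
# overloaded, that relist can exceed the 3m0s threshold and the node goes
# NotReady. Time a full container listing and look at the daemon's load:
time docker ps -aq | wc -l
docker info --format '{{.ContainersRunning}} running, {{.NCPU}} CPUs, {{.MemTotal}} bytes RAM'
```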

Any ideas?

1 Answer

Stack Overflow user
Answered 2021-05-10 20:46:26

In my case, the problem was a few extremely resource-hungry pods. I had to scale down some deployments to get a stable environment.
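One way to find what to scale down (a sketch; `kubectl top` assumes metrics-server is installed, and the deployment name below is a placeholder, not from the original question):

```shell
# List the heaviest pods first, then reduce replicas of the worst offenders.
kubectl top pods -A --sort-by=memory | head -n 10
# 'my-heavy-app' is a hypothetical name; substitute whatever the list shows.
kubectl scale deployment my-heavy-app -n default --replicas=1
```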

1 vote
Original content from Stack Overflow:
https://stackoverflow.com/questions/67467158
