
MicroK8s pods restart frequently on my Raspberry Pi
Stack Overflow user
Asked on 2021-03-07 11:35:27
1 answer · 358 views · 0 followers · score 2

I installed 64-bit Ubuntu on my Raspberry Pi 4, and it looks to me like every pod restarts frequently:

microk8s.kubectl describe pod redis-c49fd5d65-g8ghn
Name:         redis-c49fd5d65-g8ghn
Namespace:    default
Priority:     0
Node:         raspberrypi4-docker1/192.168.0.45
Start Time:   Thu, 10 Sep 2020 08:11:38 +0000
Labels:       app=redis
              pod-template-hash=c49fd5d65
Annotations:  <none>
Status:       Running
IP:           10.1.42.201
IPs:
  IP:           10.1.42.201
Controlled By:  ReplicaSet/redis-c49fd5d65
Containers:
  redis:
    Container ID:   containerd://9b8300e456691025ccbfbee588a52069a1fa25ffa6f0c1b5f5f652227a1172f3
    Image:          hypriot/rpi-redis:latest
    Image ID:       sha256:2e0128f189c5b19a15001e48fac1d0326326cebb4195abf6a56519e374636f1f
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 07 Mar 2021 10:15:57 +0000
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Sun, 07 Mar 2021 09:24:16 +0000
      Finished:     Sun, 07 Mar 2021 10:14:43 +0000
    Ready:          True
    Restart Count:  4579
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dn4bk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-dn4bk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dn4bk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                    From     Message
  ----     ------          ----                   ----     -------
  Normal   SandboxChanged  8d                     kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged  8d                     kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          8d                     kubelet  Container image "hypriot/rpi-redis:latest" already present on machine
  Normal   Created         8d                     kubelet  Created container redis
  Normal   Started         8d                     kubelet  Started container redis
  Normal   SandboxChanged  8d                     kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          8d                     kubelet  Container image "hypriot/rpi-redis:latest" already present on machine
  Normal   Created         8d                     kubelet  Created container redis
  Normal   Started         8d                     kubelet  Started container redis
  Normal   SandboxChanged  8d                     kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          8d                     kubelet  Container image "hypriot/rpi-redis:latest" already present on machine
  Normal   Created         8d                     kubelet  Created container redis
  Normal   Started         8d                     kubelet  Started container redis

...
  Normal   SandboxChanged  108m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          107m                   kubelet  Container image "hypriot/rpi-redis:latest" already present on machine
  Normal   Created         107m                   kubelet  Created container redis
  Normal   Started         107m                   kubelet  Started container redis
  Normal   SandboxChanged  101m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          101m                   kubelet  Container image "hypriot/rpi-redis:latest" already present on machine
  Normal   Created         101m                   kubelet  Created container redis
  Normal   Started         101m                   kubelet  Started container redis
  Normal   SandboxChanged  49m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          49m                    kubelet  Container image "hypriot/rpi-redis:latest" already present on machine
  Normal   Started         49m                    kubelet  Started container redis
  Normal   Created         49m                    kubelet  Created container redis

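With a restart count that high, it is worth checking whether other pods show the same pattern. A minimal sketch: the awk filter below runs against a captured sample of the `kubectl get pods` columns (pod name, restart count, and age here match the describe output above; the layout is illustrative) — on the node you would pipe `microk8s.kubectl get pods --all-namespaces` into the same filter:

```shell
# Sample "kubectl get pods" output; on a live node replace the variable with
# the real command's output, e.g.:
#   microk8s.kubectl get pods --all-namespaces
sample='NAME                    READY   STATUS    RESTARTS   AGE
redis-c49fd5d65-g8ghn   1/1     Running   4579       178d'

# Print pod name and restart count, skipping the header row.
echo "$sample" | awk 'NR > 1 { print $1, $4 }'
```

Sorting that output numerically on the second column quickly shows whether the restarts are cluster-wide or specific to one workload.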
I read that this error can be the result of a network failure, and I do find DNS error messages in the logs:

Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:55 raspberrypi4-docker1 microk8s.daemon-kubelet[4953]: E0307 11:24:55.190320    4953 summary_sys_containers.go:47] Failed to get system container stats for "/systemd/system.slice": failed to get cgroup stats for "/systemd/system.slice": failed to get container info for "/systemd/system.slice": unknown container "/systemd/system.slice"
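The DVE-2018-0001 lines repeat within the same second, so counting them gives a feel for how hard the resolver is being hit. A sketch that filters one captured excerpt the same way you would filter the live journal:

```shell
# One captured systemd-resolved line from the journal above; on the node:
#   journalctl -u systemd-resolved --since today | grep -c 'DVE-2018-0001'
excerpt='Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.'

# grep -c counts matching lines; this excerpt contains exactly one.
printf '%s\n' "$excerpt" | grep -c 'DVE-2018-0001'
```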

Output of microk8s inspect:

Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-flanneld is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster

Building the report tarball
  Report tarball is at /var/snap/microk8s/2038/inspection-report-20210307_113359.tar.gz

How can I prevent the containers from restarting?

1 Answer

Stack Overflow user

Answered on 2022-11-12 09:45:37

I switched from K3s back to MicroK8s and saw these restarts on a freshly installed single-node MicroK8s cluster. After checking the logs, I found a strange entry from one of the MicroK8s services:

root@biehler2:/git/webnut-ups-k8s/k8s# snap logs microk8s.daemon-apiserver-kicker
2022-11-11T18:49:14Z microk8s.daemon-apiserver-kicker[335004]: CSR change detected. Reconfiguring the kube-apiserver

I asked Google and found this: https://github.com/canonical/microk8s/issues/1710#issuecomment-721043408

When the kicker detects some change in the network it will reconfigure and restart the apiserver. This is to help those who are moving from one network to another.

I think the API server restarts may be forcing the pods to restart (I'm not sure how K8s works here). At least the restart timestamps and the service log timestamps matched.
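To check that correlation yourself, filter the kicker's journal for the reconfiguration message and compare its timestamps with the pod's restart events. The pipeline below runs against the captured line from `snap logs` above; on the node the input would come from `journalctl -u snap.microk8s.daemon-apiserver-kicker`:

```shell
# Captured kicker log line (from "snap logs" above); live input would be:
#   journalctl -u snap.microk8s.daemon-apiserver-kicker --since "1 day ago"
logline='2022-11-11T18:49:14Z microk8s.daemon-apiserver-kicker[335004]: CSR change detected. Reconfiguring the kube-apiserver'

# Keep only the timestamp of each reconfiguration event.
echo "$logline" | grep 'Reconfiguring the kube-apiserver' | awk '{ print $1 }'
```

Those timestamps can then be compared against `kubectl describe pod` restart events line by line.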

I also found a possible workaround:

I think yes you can turn it off systemctl stop snap.microk8s-daemon-apiservice-kicker
You can give that a try though.

Further down in the comments I found this:

do you by any chance have ipv6 enabled on the machine? For example when you do hostname -I does it show ips which are not ipv4? This can be the reason why the kicker is constantly restarting the apiservice.

If you can turn off ipv6 and then re-enable the kicker it shouldn't restart the apiservice.
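If IPv6 turns out to be the trigger, that suggestion translates to roughly the following sysctl settings. This is a sketch of one possible remedy, not something the linked comment verified on this exact setup; persist the settings in /etc/sysctl.conf if they help:

```shell
# Check whether the host has non-IPv4 addresses, as the comment suggests.
hostname -I

# Disable IPv6 at runtime (reversible: set the values back to 0 to undo).
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Then re-enable the kicker and watch whether the apiserver stays up.
sudo systemctl start snap.microk8s.daemon-apiserver-kicker
```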

And this:

if you want to turn disable the kicker from detecting network changes you can follow the instructions below. See if this helps.

The comment's author referenced this link: https://github.com/canonical/microk8s/issues/1822#issuecomment-745335208

MIcroK8s has a service that periodically check for changes on your network, reconfigures and restarts the API server if needed. Looking at this services logs (journalctl -n 1000 -u snap.microk8s.daemon-apiserver-kicker) I see a few restarts. To stop the API server restarts even if the network changes you can configure it to use a specific interface. To do so, edit /var/snap/microk8s/currect/args/kube-apiserver and add one of the arguments --advertise-address, --bind-address as described in [1]; then do a microk8s.stop; microk8s.start.

[1] https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
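Note the path in that quote has a typo ("currect"); the snap's args directory is /var/snap/microk8s/current/args. A sketch of the edit against a scratch copy, using the node IP from the question above (192.168.0.45; substitute your own):

```shell
# Work on a scratch copy; on the node the real file is
#   /var/snap/microk8s/current/args/kube-apiserver (edit it with sudo).
args=$(mktemp)
printf -- '--allow-privileged=true\n' > "$args"            # stand-in for existing flags
printf -- '--advertise-address=192.168.0.45\n' >> "$args"  # pin the apiserver address

cat "$args"
# After editing the real file, restart with: microk8s.stop; microk8s.start
```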

So I saw three options:

  • disable the kicker service
  • reconfigure the API server (pin it to a fixed address)
  • disable IPv6
I recently disabled the kicker service. So far I haven't seen any restarts, and it hasn't affected my cluster, though I don't really know what else the kicker service does. Maybe this helps someone else too.
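For reference, the unit name in the quoted workaround (snap.microk8s-daemon-apiservice-kicker) looks garbled; the journalctl command in the linked comment uses snap.microk8s.daemon-apiserver-kicker. Stopping the unit only lasts until the next boot, so disabling it as well is probably needed (standard systemd behavior, not something the thread confirms):

```shell
# Stop the kicker now and keep it from starting on the next boot.
sudo systemctl stop snap.microk8s.daemon-apiserver-kicker
sudo systemctl disable snap.microk8s.daemon-apiserver-kicker
```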

Score 0
Source: Stack Overflow — https://stackoverflow.com/questions/66516049