
How do I fix the nginx ingress controller failing to start in my k8s cluster?

Stack Overflow user
Asked on 2022-04-19 07:38:42
1 answer · 1.7K views · 0 votes

  • v2.4.2
  • kubernetes version: v1.17.4

In my k8s cluster, the nginx ingress controller does not work properly and keeps restarting, and I can't get any useful information from the logs. Thanks for your help.

Cluster nodes:

> kubectl get nodes  
NAME      STATUS   ROLES                      AGE   VERSION
master1   Ready    controlplane,etcd,worker   18d   v1.17.4
master2   Ready    controlplane,etcd,worker   17d   v1.17.4
node1     Ready    worker                     17d   v1.17.4
node2     Ready    worker                     17d   v1.17.4

Pods in the ingress-nginx namespace:

> kubectl get pods -n ingress-nginx
NAME                                    READY   STATUS    RESTARTS   AGE
default-http-backend-5bb77998d7-k7gdh   1/1     Running   1          17d
nginx-ingress-controller-6l4jh          0/1     Running   10         27m
nginx-ingress-controller-bh2pg          1/1     Running   0          63m
nginx-ingress-controller-drtzx          1/1     Running   0          63m
nginx-ingress-controller-qndbw          1/1     Running   0          63m

Logs of pod nginx-ingress-controller-6l4jh:

> kubectl logs nginx-ingress-controller-6l4jh -n ingress-nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       nginx-0.25.1-rancher1
  Build:         
  Repository:    https://github.com/rancher/ingress-nginx.git
  nginx version: openresty/1.15.8.1

-------------------------------------------------------------------------------

> 
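The log above cuts off right after the version banner, which suggests the process is being killed before it writes anything further. When a container is in a restart loop, a plain `kubectl logs` call often shows only the freshly started instance; the output of the crashed instance can usually be recovered with the standard `--previous` flag. A sketch of how to dig for more output (pod and namespace names taken from the question):

```shell
# Logs of the previously terminated container instance
kubectl logs nginx-ingress-controller-6l4jh -n ingress-nginx --previous

# Stream logs live to catch any output written just before the kill
kubectl logs nginx-ingress-controller-6l4jh -n ingress-nginx -f

# Recent events in the namespace, sorted by time
kubectl get events -n ingress-nginx --sort-by=.lastTimestamp
```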

Describe output:

> kubectl describe pod nginx-ingress-controller-6l4jh -n ingress-nginx
Name:         nginx-ingress-controller-6l4jh
Namespace:    ingress-nginx
Priority:     0
Node:         node2/172.26.13.11
Start Time:   Tue, 19 Apr 2022 07:12:16 +0000
Labels:       app=ingress-nginx
              controller-revision-hash=758cb9dbbc
              pod-template-generation=8
Annotations:  cattle.io/timestamp: 2022-04-19T07:08:51Z
              field.cattle.io/ports:
                [[{"containerPort":80,"dnsName":"nginx-ingress-controller-hostport","hostPort":80,"kind":"HostPort","name":"http","protocol":"TCP","source...
              field.cattle.io/publicEndpoints:
                [{"addresses":["172.26.13.130"],"nodeId":"c-wv692:m-d5802d05bbf0","port":80,"protocol":"TCP"},{"addresses":["172.26.13.130"],"nodeId":"c-w...
              prometheus.io/port: 10254
              prometheus.io/scrape: true
Status:       Running
IP:           172.26.13.11
IPs:
  IP:           172.26.13.11
Controlled By:  DaemonSet/nginx-ingress-controller
Containers:
  nginx-ingress-controller:
    Container ID:  docker://09a6248edb921b9c9cbab678c793fe1cc3d28322ea6abbb8f15c899351ce4b40
    Image:         172.26.13.133:5000/rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
    Image ID:      docker-pullable://172.26.13.133:5000/rancher/nginx-ingress-controller@sha256:fe50ceea3d1a0bc9a7ccef8d5845c9a30b51f608e411467862dff590185a47d2
    Ports:         80/TCP, 443/TCP
    Host Ports:    80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      --configmap=$(POD_NAMESPACE)/nginx-configuration
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --annotations-prefix=nginx.ingress.kubernetes.io
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Tue, 19 Apr 2022 07:40:12 +0000
      Finished:     Tue, 19 Apr 2022 07:41:32 +0000
    Ready:          False
    Restart Count:  11
    Liveness:       http-get http://:10254/healthz delay=60s timeout=20s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=60s timeout=20s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-6l4jh (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-2kdbj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  nginx-ingress-serviceaccount-token-2kdbj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-serviceaccount-token-2kdbj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     :NoExecute
                 :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  <unknown>           default-scheduler  Successfully assigned ingress-nginx/nginx-ingress-controller-6l4jh to node2
  Normal   Pulled     27m (x3 over 30m)   kubelet, node2     Container image "172.26.13.133:5000/rancher/nginx-ingress-controller:nginx-0.25.1-rancher1" already present on machine
  Normal   Created    27m (x3 over 30m)   kubelet, node2     Created container nginx-ingress-controller
  Normal   Started    27m (x3 over 30m)   kubelet, node2     Started container nginx-ingress-controller
  Normal   Killing    27m (x2 over 28m)   kubelet, node2     Container nginx-ingress-controller failed liveness probe, will be restarted
  Warning  Unhealthy  25m (x10 over 29m)  kubelet, node2     Liveness probe failed: Get http://172.26.13.11:10254/healthz: dial tcp 172.26.13.11:10254: connect: connection refused
  Warning  Unhealthy  10m (x21 over 29m)  kubelet, node2     Readiness probe failed: Get http://172.26.13.11:10254/healthz: dial tcp 172.26.13.11:10254: connect: connection refused
  Warning  BackOff    8s (x69 over 20m)   kubelet, node2     Back-off restarting failed container
> 
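The events show both probes failing with `connection refused` on `http://172.26.13.11:10254/healthz`, i.e. nothing is accepting connections on the controller's health port when the kubelet probes it. Since the pod uses host networking (pod IP equals the node IP), one way to narrow this down is to probe the endpoint yourself while the container is in its `Running` phase. A sketch, assuming `curl` is available on the node and inside the image (it may not be; `wget -qO-` is a common fallback):

```shell
# From node2 itself (node IP taken from the describe output):
curl -v http://172.26.13.11:10254/healthz

# Or probe from inside the container over loopback, to distinguish
# "process not listening" from "traffic blocked on the way in":
kubectl exec -n ingress-nginx nginx-ingress-controller-6l4jh -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:10254/healthz
```

If the loopback probe succeeds but the node-IP probe is refused, something between the kubelet and the port (firewall rules, a conflicting listener) is the likelier culprit; if both fail, the controller itself never comes up far enough to listen.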

1 Answer

Stack Overflow user
Answered on 2022-04-19 07:46:23

It sounds like the ingress controller pod is failing its liveness/readiness checks, but apparently only on one node. You could try:

  • checking the firewall ports on that node
  • updating to a version newer than nginx-0.25.1
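The firewall check the answer suggests could be sketched as follows on the failing node (node2). This assumes an iptables- or firewalld-based host firewall; the exact commands depend on your distribution, and the `firewall-cmd` rule below is a hypothetical example to adapt to your own policy, not a recommended permanent change:

```shell
# Look for rules that could drop or reject traffic to the health port
sudo iptables -L -n | grep -E '10254|REJECT|DROP'

# If firewalld is in use, inspect the active zone and, if needed,
# open the probe port (hypothetical rule; verify against local policy):
sudo firewall-cmd --list-all
sudo firewall-cmd --add-port=10254/tcp --permanent && sudo firewall-cmd --reload
```

Since the other three controller pods on master1, master2, and node1 run fine with the same image and arguments, a node-local difference like this is a plausible place to look before upgrading the controller.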

0 votes
Original content provided by Stack Overflow.
Source:

https://stackoverflow.com/questions/71921252
