ARM64 deployment: no endpoints available, code 503

Stack Overflow user
Asked on 2017-06-12 07:47:28
1 answer · 479 views · 0 followers · score 0
  1. Kubernetes version (kubectl version): Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/arm64"} Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/arm64"}
  2. Environment: OS (e.g. from /etc/os-release): NAME="Ubuntu" VERSION="16.04.2 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.2 LTS" VERSION_ID="16.04" HOME_URL= SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial
  3. Kernel (e.g. uname -a): Linux node4 4.11.0-rc6-next-20170411-00286-gcc55807 #0 SMP PREEMPT Mon Jun 5 18:56:20 aarch64 GNU/Linux
  4. What happened: I wanted to install the master on ARM64 using Kube-Deployment.sh, but when I access $myip:8080/ui I get this error: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"no endpoints available for service \"kubernetes-dashboard\"","reason":"ServiceUnavailable","code":503}. My branch is from 2017-2-7 (c8d6fbfc...). By the way, the same installation steps work on an x86-AMD64 platform.
  5. Anything else we need to know:

5.1 kubectl get pods --namespace=kube-system

        k8s-master-10.193.20.23 4/4 Running 17 1h
        k8s-proxy-v1-sk8vd 1/1 Running 0 1h
        kube-addon-manager-10.193.20.23 2/2 Running 2 1h
        kube-dns-3365905565-xvj7n 2/4 CrashLoopBackOff 65 1h
        kubernetes-dashboard-1416335539-lhlhz 0/1 CrashLoopBackOff 22 1h
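Both pods above are crash-looping. A next step the question doesn't show (a diagnostic sketch; the pod and container names are taken from the listing and describe output in this question) would be to pull the container logs, including the previous, crashed instance:

```shell
# Logs of the crashed dashboard container (pod name from the listing above):
kubectl logs kubernetes-dashboard-1416335539-lhlhz --namespace=kube-system --previous

# kube-dns runs four containers, so one must be named explicitly with -c:
kubectl logs kube-dns-3365905565-xvj7n --namespace=kube-system -c kubedns --previous
```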

5.2 kubectl describe pod kubernetes-dashboard-1416335539-lhlhz --namespace=kube-system

        Name:   kubernetes-dashboard-1416335539-lhlhz
        Namespace:  kube-system
        Node:   10.193.20.23/10.193.20.23
        Start Time: Mon, 12 Jun 2017 10:04:07 +0800
        Labels: k8s-app=kubernetes-dashboard
        pod-template-hash=1416335539
        Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-1416335539","uid":"6ab170d2-4f13-11e7-a...
        scheduler.alpha.kubernetes.io/critical-pod=
        scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
        Status: Running
        IP: 10.1.70.2
        Controllers:    ReplicaSet/kubernetes-dashboard-1416335539
        Containers:
        kubernetes-dashboard:
        Container ID:   docker://fbdbe4c047803b0e98ca7412ca617031f1f31d881e3a5838298a1fda24a1ae18
        Image:  gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0
        Image ID:   docker-pullable://gcr.io/google_containers/kubernetes-dashboard-arm64@sha256:559d58ef0d8e9dbe78f80060401b97d6262462318c0b8e071937a73896ea1d3d
        Port:   9090/TCP
        State:  Running
        Started:    Mon, 12 Jun 2017 11:30:03 +0800
        Last State: Terminated
        Reason: Error
        Exit Code:  1
        Started:    Mon, 12 Jun 2017 11:24:28 +0800
        Finished:   Mon, 12 Jun 2017 11:24:59 +0800
        Ready:  True
        Restart Count:  23
        Limits:
        cpu:    100m
        memory: 50Mi
        Requests:
        cpu:    100m
        memory: 50Mi
        Liveness:   http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
        Environment:    
        Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-0mnn8 (ro)
        Conditions:
        Type    Status
        Initialized True
        Ready True
        PodScheduled True
        Volumes:
        default-token-0mnn8:
        Type:   Secret (a volume populated by a Secret)
        SecretName: default-token-0mnn8
        Optional:   false
        QoS Class:  Guaranteed
        Node-Selectors: 
        Tolerations:    
        Events:
        FirstSeen   LastSeen    Count   From    SubObjectPath   Type    Reason  Message

        30m 30m 1   kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Killing Killing container with docker id b0562b3640ae: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
        18m 18m 1   kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Killing Killing container with docker id 477066c3a00f: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
        12m 12m 1   kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Killing Killing container with docker id 3e021d6df31f: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
        11m 11m 1   kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Killing Killing container with docker id 43fe3c37817d: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
        5m  5m  1   kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Killing Killing container with docker id 23cea72e1f45: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
        1h  5m  7   kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Warning Unhealthy   Liveness probe failed: Get http://10.1.70.2:9090/: dial tcp 10.1.70.2:9090: getsockopt: connection refused
        1h  38s 335 kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Warning BackOff Back-off restarting failed docker container
        1h  38s 307 kubelet, 10.193.20.23   Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)"

        1h  27s 24  kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Pulled  Container image "gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0" already present on machine
        59m 23s 15  kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Created (events with common reason combined)
        59m 22s 15  kubelet, 10.193.20.23   spec.containers{kubernetes-dashboard}   Normal  Started (events with common reason combined)
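The liveness probe in the events above is failing with `connection refused` on port 9090. That probe can be reproduced by hand from the node, using the Pod IP reported by `describe` (a diagnostic sketch, not part of the original question):

```shell
# Hit the dashboard's liveness endpoint directly; 10.1.70.2 is the Pod IP above.
curl --max-time 5 http://10.1.70.2:9090/
```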

5.3 kubectl get svc,ep,rc,rs,deploy,po -o wide --all-namespaces

    NAMESPACE     NAME                       CLUSTER-IP   PORT(S)         AGE   SELECTOR
    default       svc/kubernetes             10.0.0.1     443/TCP         16m
    kube-system   svc/kube-dns               10.0.0.10    53/UDP,53/TCP   16m   k8s-app=kube-dns
    kube-system   svc/kubernetes-dashboard   10.0.0.95    80/TCP          16m   k8s-app=kubernetes-dashboard

    NAMESPACE     NAME                         ENDPOINTS           AGE
    default       ep/kubernetes                10.193.20.23:6443   16m
    kube-system   ep/kube-controller-manager   <none>              11m
    kube-system   ep/kube-dns                                      16m
    kube-system   ep/kube-scheduler            <none>              11m
    kube-system   ep/kubernetes-dashboard                          16m

    NAMESPACE     NAME                                 DESIRED   CURRENT   READY     AGE       CONTAINER(S)                              IMAGE(S)                                                                                                                                                                                       SELECTOR
    kube-system   rs/kube-dns-3365905565               1         1         0         16m       kubedns,dnsmasq,dnsmasq-metrics,healthz   gcr.io/google_containers/kubedns-arm64:1.9,gcr.io/google_containers/kube-dnsmasq-arm64:1.4,gcr.io/google_containers/dnsmasq-metrics-arm64:1.0,gcr.io/google_containers/exechealthz-arm64:1.2   k8s-app=kube-dns,pod-template-hash=3365905565
    kube-system   rs/kubernetes-dashboard-1416335539   1         1         0         16m       kubernetes-dashboard                      gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0                                                                                                                                     k8s-app=kubernetes-dashboard,pod-template-hash=1416335539

    NAMESPACE     NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINER(S)                              IMAGE(S)                                                                                                                                                                                       SELECTOR
    kube-system   deploy/kube-dns               1         1         1            0           16m       kubedns,dnsmasq,dnsmasq-metrics,healthz   gcr.io/google_containers/kubedns-arm64:1.9,gcr.io/google_containers/kube-dnsmasq-arm64:1.4,gcr.io/google_containers/dnsmasq-metrics-arm64:1.0,gcr.io/google_containers/exechealthz-arm64:1.2   k8s-app=kube-dns
    kube-system   deploy/kubernetes-dashboard   1         1         1            0           16m       kubernetes-dashboard                      gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0                                                                                                                                     k8s-app=kubernetes-dashboard

    NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE       IP             NODE
    kube-system   po/k8s-master-10.193.20.23                 4/4       Running            50         15m       10.193.20.23   10.193.20.23
    kube-system   po/k8s-proxy-v1-5b831                      1/1       Running            0          16m       10.193.20.23   10.193.20.23
    kube-system   po/kube-addon-manager-10.193.20.23         2/2       Running            6          15m       10.193.20.23   10.193.20.23
    kube-system   po/kube-dns-3365905565-jxg4f               1/4       CrashLoopBackOff   20         16m       10.1.5.3       10.193.20.23
    kube-system   po/kubernetes-dashboard-1416335539-frt3v   0/1       CrashLoopBackOff   7          16m       10.1.5.2       10.193.20.23



 5.4 kubectl describe pods kube-dns-3365905565-lb0mq --namespace=kube-system
Name:       kube-dns-3365905565-lb0mq
Namespace:  kube-system
Node:       10.193.20.23/10.193.20.23
Start Time: Wed, 14 Jun 2017 10:43:46 +0800
Labels:     k8s-app=kube-dns
        pod-template-hash=3365905565
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-3365905565","uid":"4870aec2-50ab-11e7-a420-6805ca36...
        scheduler.alpha.kubernetes.io/critical-pod=
        scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status:     Running
IP:     10.1.20.3
Controllers:    ReplicaSet/kube-dns-3365905565
Containers:
  kubedns:
    Container ID:   docker://729562769b48be60a02b62692acd3d1e1c67ac2505f4cb41240067777f45fd77
    Image:      gcr.io/google_containers/kubedns-arm64:1.9
    Image ID:       docker-pullable://gcr.io/google_containers/kubedns-arm64@sha256:3c78a2c5b9b86c5aeacf9f5967f206dcf1e64362f3e7f274c1c078c954ecae38
    Ports:      10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-map=kube-dns
      --v=0
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 14 Jun 2017 10:56:29 +0800
      Finished:     Wed, 14 Jun 2017 10:58:06 +0800
    Ready:      False
    Restart Count:  6
    Limits:
      memory:   170Mi
    Requests:
      cpu:  100m
      memory:   70Mi
    Liveness:   http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
  dnsmasq:
    Container ID:   docker://b6d7e98a4af2715294764929f901947ab3b985be45d9f213245bd338ab8c3101
    Image:      gcr.io/google_containers/kube-dnsmasq-arm64:1.4
    Image ID:       docker-pullable://gcr.io/google_containers/kube-dnsmasq-arm64@sha256:dff5f9e2a521816aa314d469fd8ef961270fe43b4a74bab490385942103f3728
    Ports:      53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
      --log-facility=-
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 14 Jun 2017 10:55:50 +0800
      Finished:     Wed, 14 Jun 2017 10:57:26 +0800
    Ready:      False
    Restart Count:  6
    Requests:
      cpu:      150m
      memory:       10Mi
    Liveness:       http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
  dnsmasq-metrics:
    Container ID:   docker://51693aea0e732e488b631dcedc082f5a9e23b5b74857217cf005d1e947375367
    Image:      gcr.io/google_containers/dnsmasq-metrics-arm64:1.0
    Image ID:       docker-pullable://gcr.io/google_containers/dnsmasq-metrics-arm64@sha256:fc0e8b676a26ed0056b8c68611b74b9b5f3f00c608e5b11ef1608484ce55dd9a
    Port:       10054/TCP
    Args:
      --v=2
      --logtostderr
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Exit Code:    128
      Started:      Wed, 14 Jun 2017 10:57:28 +0800
      Finished:     Wed, 14 Jun 2017 10:57:28 +0800
    Ready:      False
    Restart Count:  7
    Requests:
      memory:       10Mi
    Liveness:       http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
  healthz:
    Container ID:   docker://fab7ef143a95ad4d2f6363d5fcdc162eba1522b92726665916462be765289327
    Image:      gcr.io/google_containers/exechealthz-arm64:1.2
    Image ID:       docker-pullable://gcr.io/google_containers/exechealthz-arm64@sha256:e8300fde6c36b454cc00b5fffc96d6985622db4d8eb42a9f98f24873e9535b5c
    Port:       8080/TCP
    Args:
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
      --url=/healthz-dnsmasq
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      --url=/healthz-kubedns
      --port=8080
      --quiet
    State:      Running
      Started:      Wed, 14 Jun 2017 10:44:31 +0800
    Ready:      True
    Restart Count:  0
    Limits:
      memory:   50Mi
    Requests:
      cpu:      10m
      memory:       50Mi
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  default-token-1t5v9:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-1t5v9
    Optional:   false
QoS Class:  Burstable
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason      Message
  --------- --------    -----   ----            -------------               --------    ------      -------
  15m       15m     1   default-scheduler                       Normal      Scheduled   Successfully assigned kube-dns-3365905565-lb0mq to 10.193.20.23
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{kubedns}        Normal      Created     Created container with docker id 2fef2db445e6; Security:[seccomp=unconfined]
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{kubedns}        Normal      Started     Started container with docker id 2fef2db445e6
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{dnsmasq}        Normal      Created     Created container with docker id 41ec998eeb76; Security:[seccomp=unconfined]
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{dnsmasq}        Normal      Started     Started container with docker id 41ec998eeb76
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{dnsmasq-metrics}    Normal      Created     Created container with docker id 676ef0e877c8; Security:[seccomp=unconfined]
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{healthz}        Normal      Pulled      Container image "gcr.io/google_containers/exechealthz-arm64:1.2" already present on machine
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{dnsmasq-metrics}    Warning     Failed      Failed to start container with docker id 676ef0e877c8 with error: Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{healthz}        Normal      Created     Created container with docker id fab7ef143a95; Security:[seccomp=unconfined]
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{healthz}        Normal      Started     Started container with docker id fab7ef143a95
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{dnsmasq-metrics}    Warning     Failed      Failed to start container with docker id 45f6bd7f1f3a with error: Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
  14m       14m     1   kubelet, 10.193.20.23   spec.containers{dnsmasq-metrics}    Normal      Created     Created container with docker id 45f6bd7f1f3a; Security:[seccomp=unconfined]
  14m       14m     1   kubelet, 10.193.20.23                       Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "dnsmasq-metrics" with CrashLoopBackOff: "Back-off 10s restarting failed container=dnsmasq-metrics pod=kube-dns-3365905565-lb0mq_kube-system(48845c1a-50ab-11e7-a420-6805ca369d7f)"

  14m   14m 1   kubelet, 10.193.20.23   spec.containers{dnsmasq-metrics}    Normal  Created     Created container with docker id 2d1e5adb97bb; Security:[seccomp=unconfined]
  14m   14m 1   kubelet, 10.193.20.23   spec.containers{dnsmasq-metrics}    Warning Failed      Failed to start container with docker id 2d1e5adb97bb with error: Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
  14m   14m 2   kubelet, 10.193.20.23                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "dnsmasq-metrics" with CrashLoopBackOff: "Back-off 20s restarting failed container=dnsmasq-metrics pod=kube-dns-3365905565-lb0mq_kube-system(48845c1a-50ab-11e7-a420-6805ca369d7f)"
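The decisive line is easy to miss in the event stream: the dnsmasq-metrics container cannot even start, because the image has no `nobody` group. Filtering the events for the failure message surfaces it; the snippet below runs that filter over the event text quoted verbatim above:

```shell
# Extract the root-cause message from the kubelet event text (copied from above).
grep -o 'unable to find group nobody[^"]*' <<'EOF'
Failed to start container with docker id 676ef0e877c8 with error: Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
EOF
# prints: unable to find group nobody: no matching entries in group file
```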

1 Answer

Stack Overflow user

Answered on 2017-06-13 06:21:47

So it looks like you've hit a bug (or several) in Kubernetes. I'd advise you to retry with a newer version (and possibly a different Docker version as well). It would also be a good idea to report these bugs: https://github.com/kubernetes/dashboard/issues.

Anyway, keep in mind that Kubernetes on ARM is an advanced topic, and you should expect problems and be prepared to debug and work around them. :/

The Docker image (gcr.io/google_containers/dnsmasq-metrics-amd64) may be broken; the non-amd64 builds are not well tested.

Could you try running:

kubectl set image --namespace=kube-system deployment/kube-dns dnsmasq-metrics=lenart/dnsmasq-metrics-arm64:1.0

The dashboard can't be reached because the dashboard Pod is unhealthy and keeps failing its readiness probe. Because it is never ready, it is never considered a valid endpoint for the dashboard Service, so the Service has no endpoints, which leads to the error message you reported.

The dashboard is most likely unhealthy because kube-dns is not working yet (1/4 containers ready in that Pod; it should be 4/4).

kube-dns is most likely unhealthy because you haven't deployed a pod (overlay) network.

Go to the add-ons, pick a network and deploy it. Weave has a 1.5-compatible version and requires no setup.

After you've done that, give it a few more minutes. If you're impatient, just delete the kubernetes-dashboard and kube-dns Pods (not the Deployments/controllers!). If that doesn't fix your problem, please update your question with the new information.
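The pod deletions suggested here can be done by label, so the hash-suffixed pod names don't have to be looked up each time (the `k8s-app` labels are taken from the describe output above):

```shell
# Delete only the Pods; their Deployments/ReplicaSets recreate them immediately.
kubectl delete pod --namespace=kube-system -l k8s-app=kube-dns
kubectl delete pod --namespace=kube-system -l k8s-app=kubernetes-dashboard
```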

Score: 0
Original content provided by Stack Overflow: https://stackoverflow.com/questions/44494021
