
kubectl cluster-info returns a 502 Bad Gateway error

Ask Ubuntu user
Asked on 2016-10-11 04:38:47
1 answer · 1.3K views · 0 followers · 6 votes

I deployed Kubernetes with juju deploy canonical-kubernetes. However, when I run ./kubectl cluster-info as described in the Canonical Distribution of Kubernetes charm documentation, I get the following error:

Error from server: an error on the server ("<html>\r\n<head><title>502
Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center>
<h1>502           Bad Gateway</h1></center>\r\n<hr><center>nginx/1.10.0
 (Ubuntu)</center>\r\n</body>\r\n</html>") has prevented the request from succeeding
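As an aside, the fact that the error body is an nginx page is itself diagnostic: the request reached kubeapi-load-balancer, but nothing healthy was listening behind it. A minimal sketch of that check follows (not from the original post; the sample string below abbreviates the error above, and in practice you would pipe ./kubectl cluster-info 2>&1 into the grep instead):

```shell
# Hypothetical triage helper: a 502 page mentioning nginx means the load
# balancer answered, so the problem is the apiserver behind it, not TLS or DNS.
err='Error from server: an error on the server ("<html>...502 Bad Gateway...nginx/1.10.0 (Ubuntu)...") has prevented the request from succeeding'

if printf '%s' "$err" | grep -q '502 Bad Gateway'; then
  # Load balancer reachable; apiserver on kubernetes-master not ready yet.
  echo "kubeapi-load-balancer is up, but the apiserver behind it is not ready"
fi
```

This matches the juju status output below: kubernetes-master is still in maintenance ("Rendering authentication templates"), so nginx has no healthy backend to proxy to.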

juju status output:

MODEL    CONTROLLER  CLOUD/REGION         VERSION
default  lxd-test    localhost/localhost  2.0-rc3

APP                    VERSION  STATUS       SCALE  CHARM                  STORE       REV  OS      NOTES
easyrsa                3.0.1    active           1  easyrsa                jujucharms    2  ubuntu  
elasticsearch                   active           2  elasticsearch          jujucharms   19  ubuntu  
etcd                   2.2.5    active           3  etcd                   jujucharms   13  ubuntu  
filebeat                        active           4  filebeat               jujucharms    5  ubuntu  
flannel                0.6.1    waiting          4  flannel                jujucharms    3  ubuntu  
kibana                          active           1  kibana                 jujucharms   15  ubuntu  
kubeapi-load-balancer  1.10.0   active           1  kubeapi-load-balancer  jujucharms    2  ubuntu  exposed
kubernetes-master      1.4.0    maintenance      1  kubernetes-master      jujucharms    3  ubuntu  
kubernetes-worker      1.4.0    waiting          3  kubernetes-worker      jujucharms    3  ubuntu  exposed
topbeat                         active           3  topbeat                jujucharms    5  ubuntu  

UNIT                      WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS            MESSAGE
easyrsa/0*                active       idle       0        10.181.160.79                    Certificate Authority connected.
elasticsearch/0*          active       idle       1        10.181.160.62   9200/tcp         Ready
elasticsearch/1           active       idle       2        10.181.160.72   9200/tcp         Ready
etcd/0*                   active       idle       3        10.181.160.41   2379/tcp         Healthy with 3 known peers. (leader)
etcd/1                    active       idle       4        10.181.160.135  2379/tcp         Healthy with 3 known peers.
etcd/2                    active       idle       5        10.181.160.204  2379/tcp         Healthy with 3 known peers.
kibana/0*                 active       idle       6        10.181.160.54   80/tcp,9200/tcp  ready
kubeapi-load-balancer/0*  active       idle       7        10.181.160.42   443/tcp          Loadbalancer ready.
kubernetes-master/0*      maintenance  idle       8        10.181.160.208                   Rendering authentication templates.
  filebeat/0              active       idle                10.181.160.208                   Filebeat ready.
  flannel/0*              waiting      idle                10.181.160.208                   Flannel is starting up.
kubernetes-worker/0*      waiting      idle       9        10.181.160.94                    Waiting for cluster-manager to initiate start.
  filebeat/1*             active       idle                10.181.160.94                    Filebeat ready.
  flannel/1               waiting      idle                10.181.160.94                    Flannel is starting up.
  topbeat/0               active       idle                10.181.160.94                    Topbeat ready.
kubernetes-worker/1       waiting      idle       10       10.181.160.95                    Waiting for cluster-manager to initiate start.
  filebeat/2              active       idle                10.181.160.95                    Filebeat ready.
  flannel/2               waiting      idle                10.181.160.95                    Flannel is starting up.
  topbeat/1*              active       executing           10.181.160.95                    (update-status) Topbeat ready.
kubernetes-worker/2       waiting      idle       11       10.181.160.148                   Waiting for cluster-manager to initiate start.
  filebeat/3              active       idle                10.181.160.148                   Filebeat ready.
  flannel/3               waiting      idle                10.181.160.148                   Flannel is starting up.
  topbeat/2               active       idle                10.181.160.148                   Topbeat ready.

MACHINE  STATE    DNS             INS-ID          SERIES  AZ
0        started  10.181.160.79   juju-23ce86-0   xenial  
1        started  10.181.160.62   juju-23ce86-1   trusty  
2        started  10.181.160.72   juju-23ce86-2   trusty  
3        started  10.181.160.41   juju-23ce86-3   xenial  
4        started  10.181.160.135  juju-23ce86-4   xenial  
5        started  10.181.160.204  juju-23ce86-5   xenial  
6        started  10.181.160.54   juju-23ce86-6   trusty  
7        started  10.181.160.42   juju-23ce86-7   xenial  
8        started  10.181.160.208  juju-23ce86-8   xenial  
9        started  10.181.160.94   juju-23ce86-9   xenial  
10       started  10.181.160.95   juju-23ce86-10  xenial  
11       started  10.181.160.148  juju-23ce86-11  xenial  

RELATION           PROVIDES               CONSUMES               TYPE
certificates       easyrsa                kubeapi-load-balancer  regular
certificates       easyrsa                kubernetes-master      regular
certificates       easyrsa                kubernetes-worker      regular
peer               elasticsearch          elasticsearch          peer
elasticsearch      elasticsearch          filebeat               regular
rest               elasticsearch          kibana                 regular
elasticsearch      elasticsearch          topbeat                regular
cluster            etcd                   etcd                   peer
etcd               etcd                   flannel                regular
etcd               etcd                   kubernetes-master      regular
juju-info          filebeat               kubernetes-master      regular
juju-info          filebeat               kubernetes-worker      regular
sdn-plugin         flannel                kubernetes-master      regular
sdn-plugin         flannel                kubernetes-worker      regular
loadbalancer       kubeapi-load-balancer  kubernetes-master      regular
kube-api-endpoint  kubeapi-load-balancer  kubernetes-worker      regular
beats-host         kubernetes-master      filebeat               subordinate
host               kubernetes-master      flannel                subordinate
kube-dns           kubernetes-master      kubernetes-worker      regular
beats-host         kubernetes-worker      filebeat               subordinate
host               kubernetes-worker      flannel                subordinate
beats-host         kubernetes-worker      topbeat                subordinate

1 Answer

Ask Ubuntu user

Answered on 2016-10-13 12:03:40

This appears to be because you are deploying Kubernetes on LXD. According to the Canonical Kubernetes README:

At this time, kubernetes-master, kubernetes-worker, and kubeapi-load-balancer are not supported on LXD.

This is a limitation between Docker and LXD that we hope to resolve soon. In the meantime, those components need to run at least in a VM.

You could do this manually with LXD: deploy the rest of the components in LXD, then manually launch a couple of KVM instances on your machine.

I will try to get a clear set of instructions and reply back here with them.

2 votes
Original page content provided by Ask Ubuntu; translation supported by Tencent Cloud.
Original link:

https://askubuntu.com/questions/835522
