
Kubernetes not spreading pods across the available nodes

Stack Overflow user
Asked on 2017-12-29 04:53:17
Answers: 2 · Views: 781 · Votes: 0

I have a GKE cluster with a single node pool of size 2. When I add a third node, none of the pods get distributed onto the third node.

Here is the original node pool with 2 nodes:

$ kubectl get node
NAME                              STATUS    ROLES     AGE       VERSION
gke-cluster0-pool-d59e9506-b7nb   Ready     <none>    13m       v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t   Ready     <none>    18m       v1.8.3-gke.0

Here are the pods running on the original node pool:

$ kubectl get po -o wide --all-namespaces
NAMESPACE     NAME                                         READY     STATUS      RESTARTS   AGE       IP           NODE
default       attachment-proxy-659bdc84d-ckdq9             1/1       Running     0          10m       10.0.38.3    gke-cluster0-pool-d59e9506-vp6t
default       elasticsearch-0                              1/1       Running     0          4m        10.0.39.11   gke-cluster0-pool-d59e9506-b7nb
default       front-webapp-646bc49675-86jj6                1/1       Running     0          10m       10.0.38.10   gke-cluster0-pool-d59e9506-vp6t
default       kafka-0                                      1/1       Running     3          4m        10.0.39.9    gke-cluster0-pool-d59e9506-b7nb
default       mailgun-http-98f8d997c-hhfdc                 1/1       Running     0          4m        10.0.38.17   gke-cluster0-pool-d59e9506-vp6t
default       stamps-5b6fc489bc-6xtqz                      2/2       Running     3          10m       10.0.38.13   gke-cluster0-pool-d59e9506-vp6t
default       user-elasticsearch-6b6dd7fc8-b55xx           1/1       Running     0          10m       10.0.38.4    gke-cluster0-pool-d59e9506-vp6t
default       user-http-analytics-6bdd49bd98-p5pd5         1/1       Running     0          4m        10.0.39.8    gke-cluster0-pool-d59e9506-b7nb
default       user-http-graphql-67884c678c-7dcdq           1/1       Running     0          4m        10.0.39.7    gke-cluster0-pool-d59e9506-b7nb
default       user-service-5cbb8cfb4f-t6zhv                1/1       Running     0          4m        10.0.38.15   gke-cluster0-pool-d59e9506-vp6t
default       user-streams-0                               1/1       Running     0          4m        10.0.39.10   gke-cluster0-pool-d59e9506-b7nb
default       user-streams-elasticsearch-c64b64d6f-2nrtl   1/1       Running     3          10m       10.0.38.6    gke-cluster0-pool-d59e9506-vp6t
default       zookeeper-0                                  1/1       Running     0          4m        10.0.39.12   gke-cluster0-pool-d59e9506-b7nb
kube-lego     kube-lego-7799f6b457-skkrc                   1/1       Running     0          10m       10.0.38.5    gke-cluster0-pool-d59e9506-vp6t
kube-system   event-exporter-v0.1.7-7cb7c5d4bf-vr52v       2/2       Running     0          10m       10.0.38.7    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-648rh                     2/2       Running     0          14m       10.0.38.2    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-fqjz6                     2/2       Running     0          9m        10.0.39.2    gke-cluster0-pool-d59e9506-b7nb
kube-system   heapster-v1.4.3-6fc45b6cc4-8cl72             3/3       Running     0          4m        10.0.39.6    gke-cluster0-pool-d59e9506-b7nb
kube-system   k8s-snapshots-5699c68696-h8r75               1/1       Running     0          4m        10.0.38.16   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-778977457c-b48w5                    3/3       Running     0          4m        10.0.39.5    gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-dns-778977457c-sw672                    3/3       Running     0          10m       10.0.38.9    gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-autoscaler-7db47cb9b7-tjt4l         1/1       Running     0          10m       10.0.38.11   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-b7nb   1/1       Running     0          9m        10.128.0.4   gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-vp6t   1/1       Running     0          14m       10.128.0.2   gke-cluster0-pool-d59e9506-vp6t
kube-system   kubernetes-dashboard-76c679977c-mwqlv        1/1       Running     0          10m       10.0.38.8    gke-cluster0-pool-d59e9506-vp6t
kube-system   l7-default-backend-6497bcdb4d-wkx28          1/1       Running     0          10m       10.0.38.12   gke-cluster0-pool-d59e9506-vp6t
kube-system   nginx-ingress-controller-78d546664f-gf6mx    1/1       Running     0          4m        10.0.39.3    gke-cluster0-pool-d59e9506-b7nb
kube-system   tiller-deploy-5458cb4cc-26x26                1/1       Running     0          4m        10.0.39.4    gke-cluster0-pool-d59e9506-b7nb

Then I add another node to the node pool:

$ gcloud container clusters resize cluster0 --node-pool pool --size 3

The third node comes up and becomes Ready:

$ kubectl get node
NAME                              STATUS    ROLES     AGE       VERSION
gke-cluster0-pool-d59e9506-1rzm   Ready     <none>    3m        v1.8.3-gke.0
gke-cluster0-pool-d59e9506-b7nb   Ready     <none>    14m       v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t   Ready     <none>    19m       v1.8.3-gke.0

However, apart from pods belonging to a DaemonSet, no pods are scheduled onto the newly added node:

$ kubectl get po -o wide --all-namespaces
NAMESPACE     NAME                                         READY     STATUS      RESTARTS   AGE       IP           NODE
default       attachment-proxy-659bdc84d-ckdq9             1/1       Running     0          17m       10.0.38.3    gke-cluster0-pool-d59e9506-vp6t
default       elasticsearch-0                              1/1       Running     0          10m       10.0.39.11   gke-cluster0-pool-d59e9506-b7nb
default       front-webapp-646bc49675-86jj6                1/1       Running     0          17m       10.0.38.10   gke-cluster0-pool-d59e9506-vp6t
default       kafka-0                                      1/1       Running     3          11m       10.0.39.9    gke-cluster0-pool-d59e9506-b7nb
default       mailgun-http-98f8d997c-hhfdc                 1/1       Running     0          10m       10.0.38.17   gke-cluster0-pool-d59e9506-vp6t
default       stamps-5b6fc489bc-6xtqz                      2/2       Running     3          16m       10.0.38.13   gke-cluster0-pool-d59e9506-vp6t
default       user-elasticsearch-6b6dd7fc8-b55xx           1/1       Running     0          17m       10.0.38.4    gke-cluster0-pool-d59e9506-vp6t
default       user-http-analytics-6bdd49bd98-p5pd5         1/1       Running     0          10m       10.0.39.8    gke-cluster0-pool-d59e9506-b7nb
default       user-http-graphql-67884c678c-7dcdq           1/1       Running     0          10m       10.0.39.7    gke-cluster0-pool-d59e9506-b7nb
default       user-service-5cbb8cfb4f-t6zhv                1/1       Running     0          10m       10.0.38.15   gke-cluster0-pool-d59e9506-vp6t
default       user-streams-0                               1/1       Running     0          10m       10.0.39.10   gke-cluster0-pool-d59e9506-b7nb
default       user-streams-elasticsearch-c64b64d6f-2nrtl   1/1       Running     3          17m       10.0.38.6    gke-cluster0-pool-d59e9506-vp6t
default       zookeeper-0                                  1/1       Running     0          10m       10.0.39.12   gke-cluster0-pool-d59e9506-b7nb
kube-lego     kube-lego-7799f6b457-skkrc                   1/1       Running     0          17m       10.0.38.5    gke-cluster0-pool-d59e9506-vp6t
kube-system   event-exporter-v0.1.7-7cb7c5d4bf-vr52v       2/2       Running     0          17m       10.0.38.7    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-648rh                     2/2       Running     0          20m       10.0.38.2    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-8tb4n                     2/2       Running     0          4m        10.0.40.2    gke-cluster0-pool-d59e9506-1rzm
kube-system   fluentd-gcp-v2.0.9-fqjz6                     2/2       Running     0          15m       10.0.39.2    gke-cluster0-pool-d59e9506-b7nb
kube-system   heapster-v1.4.3-6fc45b6cc4-8cl72             3/3       Running     0          11m       10.0.39.6    gke-cluster0-pool-d59e9506-b7nb
kube-system   k8s-snapshots-5699c68696-h8r75               1/1       Running     0          10m       10.0.38.16   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-778977457c-b48w5                    3/3       Running     0          11m       10.0.39.5    gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-dns-778977457c-sw672                    3/3       Running     0          17m       10.0.38.9    gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-autoscaler-7db47cb9b7-tjt4l         1/1       Running     0          17m       10.0.38.11   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-1rzm   1/1       Running     0          4m        10.128.0.3   gke-cluster0-pool-d59e9506-1rzm
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-b7nb   1/1       Running     0          15m       10.128.0.4   gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-vp6t   1/1       Running     0          20m       10.128.0.2   gke-cluster0-pool-d59e9506-vp6t
kube-system   kubernetes-dashboard-76c679977c-mwqlv        1/1       Running     0          17m       10.0.38.8    gke-cluster0-pool-d59e9506-vp6t
kube-system   l7-default-backend-6497bcdb4d-wkx28          1/1       Running     0          17m       10.0.38.12   gke-cluster0-pool-d59e9506-vp6t
kube-system   nginx-ingress-controller-78d546664f-gf6mx    1/1       Running     0          11m       10.0.39.3    gke-cluster0-pool-d59e9506-b7nb
kube-system   tiller-deploy-5458cb4cc-26x26                1/1       Running     0          11m       10.0.39.4    gke-cluster0-pool-d59e9506-b7nb

What is going on? Why aren't the pods being spread onto the added node? I expected the pods to be distributed to the third node. How can I get the workload to spread onto the third node?

Technically, in terms of the resource requests in my manifests, my entire application fits on a single node. However, when the second node was added, the application was distributed onto the second node. So I assumed that when I added a third node, the pods would also be scheduled onto that node. But that is not what I am seeing: only the DaemonSet pods get scheduled onto the third node. I have tried growing and shrinking the node pool, to no avail.
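
One way to verify this (a diagnostic sketch, not from the original post) is to check how much of each node's allocatable capacity is actually requested, and what the scheduler has been deciding, using the node name from the listing above:

# Shows the "Allocated resources" section for the new node, i.e. how much
# CPU/memory is currently requested on it.
kubectl describe node gke-cluster0-pool-d59e9506-1rzm

# Lists recent cluster events, including scheduler placement decisions,
# oldest first.
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp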

Update

The two preemptible nodes restarted, and now all the pods are on a single node. What is going on? Is increasing the resource requests the only way to get them spread out?
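
For reference, resource requests are not the only lever: a soft pod anti-affinity rule (available in Kubernetes 1.8) asks the scheduler to prefer placing replicas of the same app on different nodes. A minimal sketch, where the Deployment name user-service and the app: user-service label are placeholders rather than names from the original post:

# Hypothetical sketch: add a soft (preferred) anti-affinity rule so that
# replicas carrying the same "app" label prefer to land on different nodes.
kubectl patch deployment user-service --patch '
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: user-service
              topologyKey: kubernetes.io/hostname
'

Because the rule is preferred rather than required, the pods still schedule when only one node is available; it merely biases placement once more nodes exist.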


2 Answers

Stack Overflow user

Accepted answer

Posted on 2017-12-30 16:33:34

This is the expected behavior. New pods will be scheduled onto empty nodes, but pods that are already running are not moved automatically. The Kubernetes scheduler is generally conservative about rescheduling pods, so it will not do so without a reason. Pods can be stateful (like a database), so Kubernetes does not want to kill and reschedule a pod.
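
Until then, a common manual workaround consistent with this answer (a sketch using node and pod names from the question's output; adjust to your own cluster) is to cordon the packed node and delete controller-managed pods so their replacements land on the emptier node:

# Stop the scheduler from placing new pods on the packed node.
kubectl cordon gke-cluster0-pool-d59e9506-vp6t

# Delete a Deployment-managed pod; its ReplicaSet creates a replacement,
# which can now only be scheduled on the other (uncordoned) nodes.
kubectl delete pod front-webapp-646bc49675-86jj6

# Re-enable scheduling on the node once things are balanced.
kubectl uncordon gke-cluster0-pool-d59e9506-vp6t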

There is a project under development that will do what you are looking for: https://github.com/kubernetes-incubator/descheduler I have not used it, but it is being actively developed by the community.
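
For reference, a minimal sketch of a descheduler policy in the project's v1alpha1 format as documented in its README at the time (field names may have changed since). The LowNodeUtilization strategy evicts pods from over-utilized nodes so the default scheduler can re-place them on under-utilized ones:

cat > descheduler-policy.yaml <<'EOF'
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these request percentages count as under-utilized.
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        # Pods are evicted from nodes above these until usage drops under them.
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50
EOF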

Votes: 5

Stack Overflow user

Posted on 2017-12-29 13:39:10

I am a complete n00b here, still learning Docker/Kubernetes, but after reading your question it sounds like you may have a quorum issue. Have you tried spinning up 5 nodes? (n/2+1) Both Kubernetes and Docker Swarmkit use the Raft consensus algorithm. You may also want to check out Raft. This video may help you if it indeed matches your predicament; it talks about Raft and quorum: https://youtu.be/Qsv-q8WbIZY?t=2m58s

Votes: 0
Content of the original page provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/48017568
