
Kubernetes - how to assign pods to nodes with a specific label

Asked by a Stack Overflow user on 2020-07-16 22:39:20
1 answer · 376 views · 0 followers · Score: 1

Suppose I have the following nodes, labeled env=staging and env=production respectively:

server0201     Ready    worker   79d   v1.18.2   10.2.2.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0202     Ready    worker   79d   v1.18.2   10.2.2.23     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0203     Ready    worker   35d   v1.18.3   10.2.2.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0301     Ready    worker   35d   v1.18.3   10.2.3.21     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0302     Ready    worker   35d   v1.18.3   10.2.3.29     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0303     Ready    worker   35d   v1.18.0   10.2.3.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0304     Ready    worker   65d   v1.18.2   10.2.6.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production

I tried using nodeSelector and nodeAffinity, but when my selector label is env=staging, all my pods keep landing on server0203 and never on server0303, no matter how many replicas I create.

Similarly, with env=production the pods only land on server0201.

What should I do to make my pods spread evenly across the nodes I have assigned these labels to?

Here is my deployment spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: In
                values:
                - staging
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
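
For the original goal of spreading replicas evenly across the matching nodes, one option is to keep the node affinity and add a topologySpreadConstraints stanza (Pod Topology Spread is beta and enabled by default in v1.18). This is a sketch, not tested against this cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 2
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: In
                values:
                - staging
      # Spread replicas across the matching nodes (at most 1 pod of
      # difference between any two hostnames) instead of letting the
      # scheduler's scoring pack them onto one node.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: helloworld
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
```

On versions without topology spread, a preferredDuringSchedulingIgnoredDuringExecution podAntiAffinity on the app label achieves a similar (softer) spreading effect.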

There are no taints on the worker nodes:

kubectl get nodes -o json | jq '.items[].spec.taints'
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
null
null
null
null
null
null
null
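
The jq output above can also be checked programmatically. Here is a small Python sketch that mirrors the `.items[].spec.taints` filter and lists the untainted workers; the JSON is a hard-coded, hypothetical trimming of `kubectl get nodes -o json` output, since no live cluster is assumed:

```python
import json

# Hypothetical sample of `kubectl get nodes -o json`, trimmed to the
# fields the jq filter looks at (illustrative names, not real data).
nodes_json = """
{
  "items": [
    {"metadata": {"name": "master01"},
     "spec": {"taints": [{"effect": "NoSchedule",
                          "key": "node-role.kubernetes.io/master"}]}},
    {"metadata": {"name": "server0203"}, "spec": {}},
    {"metadata": {"name": "server0303"}, "spec": {}}
  ]
}
"""

nodes = json.loads(nodes_json)["items"]

# Equivalent of `jq '.items[].spec.taints'`: taints come back as None
# (null) when a node has none, so untainted workers accept any pod.
taints = {n["metadata"]["name"]: n["spec"].get("taints") for n in nodes}

schedulable = [name for name, t in taints.items() if not t]
print(schedulable)  # → ['server0203', 'server0303']
```

Since only the masters carry the NoSchedule taint, taints cannot explain why the pods all land on one worker.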

Here are all the node labels:

server0201     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0202,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0202     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0203,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0203     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0210,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0301     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0301,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0302     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0309,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0303     Ready    worker   35d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0310,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0304     Ready    worker   65d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0602,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker

1 Answer

Answered by a Stack Overflow user on 2020-07-17 00:10:40

After playing around for a while, I realized there was nothing wrong with nodeSelector or nodeAffinity. In fact, I could even achieve what my question asked for by using the node-selector annotation scoped to my namespace.

apiVersion: v1
kind: Namespace
metadata:
  name: gab
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: env=production
spec: {}
status: {}    

As long as my deployment is in this namespace, the node selector applies.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 10 # tells deployment to run 10 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80

As for why it behaved that way for me initially: the second of my staging-labeled nodes has slightly higher utilization than the node my pods kept landing on.

Resource           Requests     Limits
  --------           --------     ------
  cpu                3370m (14%)  8600m (35%)
  memory             5350Mi (4%)  8600Mi (6%)
  ephemeral-storage  0 (0%)       0 (0%)

The node my pods kept landing on is:

  Resource           Requests    Limits
  --------           --------    ------
  cpu                1170m (4%)  500100m (2083%)
  memory             164Mi (0%)  100Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)

When I tested with production, the pods were spread across several nodes, since there are more of them.

So I believe the scheduler balances pods based on server load (I may be wrong) rather than trying to distribute them evenly.
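
That intuition matches the default scoring in kube-scheduler, whose "least allocated" plugin favors nodes with a smaller share of requested CPU and memory. Below is a simplified sketch of that scoring idea, not the scheduler's actual code; the node capacities are assumptions chosen for illustration, and only the request figures come from the output above:

```python
def least_allocated_score(requested, capacity):
    """Score 0-100 per resource: an emptier node gets a higher score."""
    if capacity == 0:
        return 0
    return (capacity - requested) * 100 // capacity

def node_score(cpu_req, cpu_cap, mem_req, mem_cap):
    # The scheduler averages the per-resource scores.
    return (least_allocated_score(cpu_req, cpu_cap)
            + least_allocated_score(mem_req, mem_cap)) // 2

# Busier staging node: cpu 3370m requested, memory 5350Mi requested
# (capacities of 24000m / 128000Mi are assumed for this sketch).
busy = node_score(3370, 24000, 5350, 128000)

# The node the pods kept landing on: cpu 1170m, memory 164Mi requested.
idle = node_score(1170, 24000, 164, 128000)

print(busy, idle)  # the idle node scores higher, so it keeps winning
```

Under this scoring, every new replica sees the same ranking, so all of them land on the least-loaded node unless a spreading constraint (topology spread or pod anti-affinity) is added.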

Score: 1
Original link:

https://stackoverflow.com/questions/62937188
