Suppose I have the following nodes, labeled env=staging and env=production:
server0201 Ready worker 79d v1.18.2 10.2.2.22 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.5 production
server0202 Ready worker 79d v1.18.2 10.2.2.23 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.5 production
server0203 Ready worker 35d v1.18.3 10.2.2.30 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.5 staging
server0301 Ready worker 35d v1.18.3 10.2.3.21 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.5 production
server0302 Ready worker 35d v1.18.3 10.2.3.29 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.5 production
server0303 Ready worker 35d v1.18.0 10.2.3.30 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.5 staging
server0304 Ready worker 65d v1.18.2 10.2.6.22 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.5 production

I have tried both nodeSelector and nodeAffinity, but when my selector label is env=staging, no matter how many replicas I create, all of my pods keep landing on server0203 and never on server0303.
Likewise, if I use env=production, they only ever land on server0201.
What should I do to get my pods spread evenly across the nodes I assigned these labels to?
Here is my Deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 2          # tells deployment to run 2 pods matching the template
  template:            # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: In   # "Equals" is not a valid operator here; use "In"
                values:
                - staging
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80

There are no taints on the worker nodes:
kubectl get nodes -o json | jq '.items[].spec.taints'
[
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master"
}
]
[
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master"
}
]
[
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master"
}
]
null
null
null
null
null
null
null

Here are all the node labels:
server0201 Ready worker 80d v1.18.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0202,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0202 Ready worker 80d v1.18.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0203,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0203 Ready worker 35d v1.18.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0210,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0301 Ready worker 35d v1.18.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0301,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0302 Ready worker 35d v1.18.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0309,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0303 Ready worker 35d v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0310,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0304 Ready worker 65d v1.18.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0602,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker

Posted 2020-07-17 00:10:40
After playing with this for a while, I realized there is nothing wrong with nodeSelector or nodeAffinity. In fact, I can even achieve what my question asked for by using the node-selector annotation scoped to my namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: gab
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: env=production
spec: {}
status: {}
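One caveat worth noting (this is an assumption about the cluster setup, not something from my original test): the `scheduler.alpha.kubernetes.io/node-selector` annotation is only honored when the PodNodeSelector admission plugin is enabled on the kube-apiserver. A sketch of the relevant flag, assuming a kubeadm-style static pod manifest:

```yaml
# Fragment of the kube-apiserver manifest (the path
# /etc/kubernetes/manifests/kube-apiserver.yaml is a kubeadm default
# and may differ on your cluster).
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodNodeSelector
    # ...keep all other existing flags unchanged
```

If the plugin is not enabled, the annotation is silently ignored and pods schedule as if the namespace had no selector.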
As long as my Deployment is in that namespace, the node selector takes effect:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 10         # tells deployment to run 10 pods matching the template
  template:            # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80

As for why it behaved that way in the first place: the second of my staging-labeled nodes had slightly higher utilization than the node my pods kept landing on.
Resource           Requests      Limits
--------           --------      ------
cpu                3370m (14%)   8600m (35%)
memory             5350Mi (4%)   8600Mi (6%)
ephemeral-storage  0 (0%)        0 (0%)

The node my pods kept landing on:
Resource           Requests      Limits
--------           --------      ------
cpu                1170m (4%)    500100m (2083%)
memory             164Mi (0%)    100Mi (0%)
ephemeral-storage  0 (0%)        0 (0%)

When I tested with production, the pods did get distributed across several nodes, simply because there are more of them.
So my conclusion is that the scheduler balances pods based on node load (I may be wrong about this) rather than trying to distribute them evenly.
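If the goal really is an even spread across the matching nodes, rather than whatever the scheduler's scoring happens to pick, a pod topology spread constraint expresses that directly. A sketch, assuming topologySpreadConstraints is available (it went beta, and therefore on by default, in v1.18) and reusing the helloworld labels from above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 4
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: In
                values:
                - staging
      topologySpreadConstraints:
      - maxSkew: 1                             # pod counts per node may differ by at most 1
        topologyKey: kubernetes.io/hostname    # treat each node as its own topology domain
        whenUnsatisfiable: DoNotSchedule       # hard requirement, not best-effort
        labelSelector:
          matchLabels:
            app: helloworld
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
```

A softer alternative is preferred pod anti-affinity on `kubernetes.io/hostname`, which nudges replicas onto different nodes without blocking scheduling once every node already holds one.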
https://stackoverflow.com/questions/62937188