Cluster: 1 master, 2 workers.
I am deploying a StatefulSet with 3 replicas using local volumes backed by PVs (kubernetes.io/no-provisioner storageClass). Two PVs were created, one for each of the two worker nodes.
Expected: the pods are scheduled across both workers and share the same volume.
Result: all 3 stateful pods were created on a single worker node. yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-1
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-2
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node2
---
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - name: test-headless
    port: 8000
  clusterIP: None
  selector:
    app: test
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app: test
spec:
  ports:
  - name: test
    port: 8000
    protocol: TCP
    nodePort: 30063
  type: NodePort
  selector:
    app: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-stateful
spec:
  selector:
    matchLabels:
      app: test
  serviceName: stateful-service
  replicas: 6
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: container-1
        image: <Image-name>
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8000
        volumeMounts:
        - name: localvolume
          mountPath: /tmp/
      volumes:
      - name: localvolume
        persistentVolumeClaim:
          claimName: example-local-claim

Posted on 2018-06-12 19:53:05
This happens because Kubernetes does not spread pods across nodes by itself. It does have a mechanism for controlling placement, called pod affinity. To distribute the pods across all workers, you can use pod anti-affinity. You can also use its soft form (differences explained here), which is not strict and allows all of your pods to be scheduled even when spreading is impossible. For example, the StatefulSet would look like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: app-name
        image: k8s.gcr.io/super-app:0.8
        ports:
        - containerPort: 21
          name: web

This StatefulSet will try to place each pod on a different worker. Note that with the hard rule requiredDuringSchedulingIgnoredDuringExecution shown above, any pod that cannot find a free worker will stay Pending; if you want the scheduler to fall back to a node where a pod already exists, use the soft form, preferredDuringSchedulingIgnoredDuringExecution.
https://stackoverflow.com/questions/50812843