K8s newbie here.
StatefulSets let you create pods with (a) predefined names and (b) a fixed ordering. In my case I do not need the ordering (b), and that is what is causing me trouble. (a) is useful in my case because I need to keep state when a container dies.
As an example: I have pod-0, pod-1, pod-2 and want only pod-0 to die, but this is what happens.
What I expected:
1. [ pod-0:Running, pod-1:Running, pod-2:Running ]
2. My app needs to scale to 2 replicas by killing pod-0, so "k delete pod/pod-0" and set "replicas: 2"
3. [ pod-0:Terminating, pod-1:Running, pod-2:Running ], and I want to keep this state!
4. [ pod-1:Running, pod-2:Running ] is what I want, but I cannot stop K8s from doing:
5. [ pod-0:Starting, pod-1:Running, pod-2:Running ] (K8s shifts the pods!)
6. [ pod-0:Running, pod-1:Running, pod-2:Terminating ] (K8s shifts the pods!)
7. [ pod-0:Running, pod-1:Running ] (K8s shifts the pods!)
How can I achieve the desired behavior with K8s, i.e. keep a set of non-sequentially-named pods?
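To make the steps above concrete: with a vanilla StatefulSet the controller always reconciles toward `replicas` contiguous ordinals starting at 0, so the two operations in step 2 can only behave as sketched below. The commands assume the contextcf StatefulSet from my manifest further down and need a live cluster, so treat this as illustration, not a fix:

```shell
# Deleting a specific pod does NOT scale anything down: ordinal 0 is still
# within replicas, so the controller immediately recreates contextcf-0.
kubectl delete pod contextcf-0

# Lowering replicas always removes the highest ordinal first, so this
# terminates contextcf-2, never contextcf-0.
kubectl scale statefulset contextcf --replicas=2
```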
I have found the promising "Advanced StatefulSet" (https://openkruise.io/en-us/docs/advanced_statefulset.html), which allows this, but the product is not production-ready yet. At the very least, it did not work for me on minikube (minikube 1.16.0, Docker 19.03.13, OpenKruise 0.7.0).
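For reference, the Advanced StatefulSet feature that matches this use case is, as far as I can tell from the OpenKruise docs, the reserveOrdinals field: it tells the controller to skip the listed ordinals, so pod-0 can stay gone while pod-1 and pod-2 survive. A sketch follows; the apiVersion and whether a given release supports the field are assumptions, so check the docs for your version:

```yaml
# Hypothetical sketch: an OpenKruise Advanced StatefulSet scaled to 2
# replicas while skipping ordinal 0, leaving contextcf-1 and contextcf-2.
apiVersion: apps.kruise.io/v1beta1   # CRD shipped by OpenKruise, not apps/v1
kind: StatefulSet
metadata:
  name: contextcf
spec:
  replicas: 2
  reserveOrdinals:   # ordinals the controller must NOT create
    - 0
  serviceName: contextcf
  selector:
    matchLabels:
      name: contextcf
  template:
    metadata:
      labels:
        name: contextcf
    spec:
      containers:
        - name: contextcf
          image: (my-registry)/contextcf:1.0.0
```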
As requested, my deployment file is as follows:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: contextcf
  labels:
    name: contextcf
spec:
  serviceName: contextcf
  selector:
    matchLabels:
      name: contextcf
  replicas: 3
  template:
    metadata:
      labels:
        name: contextcf
    spec:
      containers:
        - name: contextcf
          image: (my-registry)/contextcf:1.0.0
          ports:
            - name: web
              containerPort: 80
      # Volume sections removed, no issues there. The application is as simple as this.
Posted on 2021-01-21 10:22:52
Can you attach your YAML file?
> I have pod-0, pod-1, pod-2 and want only pod-0 to die, but this is what happens.
I cannot reproduce this with the simplest StatefulSet:
$ cat sts.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
Newly created pods controlled by the StatefulSet:
$ k -n test2 get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 19s 10.8.252.144 k8s-vm04 <none> <none>
web-1 1/1 Running 0 12s 10.8.252.76 k8s-vm03 <none> <none>
web-2 1/1 Running 0 6s 10.8.253.8 k8s-vm02 <none> <none>
Try to delete web-0:
$ k -n test2 delete pod web-0
pod "web-0" deleted
web-0 is in Terminating state:
$ k -n test2 get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 0/1 Terminating 0 47s 10.8.252.144 k8s-vm04 <none> <none>
web-1 1/1 Running 0 40s 10.8.252.76 k8s-vm03 <none> <none>
web-2 1/1 Running 0 34s 10.8.253.8 k8s-vm02 <none> <none>
web-0 is in ContainerCreating state:
$ k -n test2 get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 0/1 ContainerCreating 0 1s <none> k8s-vm04 <none> <none>
web-1 1/1 Running 0 45s 10.8.252.76 k8s-vm03 <none> <none>
web-2 1/1 Running 0 39s 10.8.253.8 k8s-vm02 <none> <none>
All pods are Running:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 1m21s 10.8.252.145 k8s-vm04 <none> <none>
web-1 1/1 Running 0 2m59s 10.8.252.76 k8s-vm03 <none> <none>
web-2 1/1 Running 0 2m5s 10.8.253.8 k8s-vm02 <none> <none>
The other pods kept running; none of them went to Terminating.
If what you mean is scaling the StatefulSet, statefulset.spec.podManagementPolicy may help you:
$ k explain statefulset.spec.podManagementPolicy
KIND: StatefulSet
VERSION: apps/v1
FIELD: podManagementPolicy <string>
DESCRIPTION:
podManagementPolicy controls how pods are created during initial scale up,
when replacing pods on nodes, or when scaling down. The default policy is
`OrderedReady`, where pods are created in increasing order (pod-0, then
pod-1, etc) and the controller will wait until each pod is ready before
continuing. When scaling down, the pods are removed in the opposite order.
The alternative policy is `Parallel` which will create pods in parallel to
match the desired scale without waiting, and on scale down will delete all
pods at once.
https://stackoverflow.com/questions/65823002
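Worth noting: podManagementPolicy only changes the ordering and parallelism of creation and deletion; it does not let you choose which ordinal survives a scale-down (it is still the highest ordinals that get removed). A sketch of where the field sits, based on the sts.yaml above:

```yaml
# Fragment only: selector/template as in sts.yaml above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  podManagementPolicy: Parallel  # default is OrderedReady
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
```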