
Restarting a Kubernetes PetSet clears the persistent volume

Stack Overflow user
Asked on 2017-03-02 03:10:17
1 answer · 97 views · 0 followers · 0 votes

I am running a ZooKeeper PetSet with 3 replicas, and all of their volumes use GlusterFS persistent volumes. When the PetSet is started for the first time, everything works fine.

One of my requirements is that if the PetSet is terminated, the pods must still use the same persistent volumes after I restart it.

The problem I am facing is that after restarting the PetSet, the original data in the persistent volumes gets wiped. How can I solve this without manually copying files out of the volumes? I have tried both the Retain and Delete reclaim policies, and the volumes get cleaned either way. Thanks.

The configuration files are below.

PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-0
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-0
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-1
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-2
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-2
    namespace: default

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi

PetSet

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: zookeeper
spec:
  serviceName: "zookeeper"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: zookeeper
        securityContext:
          privileged: true
          capabilities:
            add:
              - IPC_LOCK
        image: kuanghaochina/zookeeper-3.5.2-alpine-jdk:latest
        imagePullPolicy: Always
        ports:
          - containerPort: 2888
            name: peer
          - containerPort: 3888
            name: leader-election
          - containerPort: 2181
            name: client
        env:
        - name: ZOOKEEPER_LOG_LEVEL
          value: INFO
        volumeMounts:
        - name: glusterfsvol
          mountPath: /opt/zookeeper/data
          subPath: data
        - name: glusterfsvol
          mountPath: /opt/zookeeper/dataLog
          subPath: dataLog
  volumeClaimTemplates:
  - metadata:
      name: glusterfsvol
    spec:
      accessModes: 
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi

Edit: the cause turned out to be that I use zkServer-initialize.sh to force ZooKeeper to use a server id, but that script clears the dataDir.


1 Answer

Stack Overflow user
Accepted answer
Answered on 2017-03-03 06:06:23

The cause turned out to be that I use zkServer-initialize.sh to force ZooKeeper to use a server id, but that script clears the dataDir.
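A minimal sketch of a workaround, assuming the container entrypoint can be adjusted: write the myid file directly instead of running zkServer-initialize.sh, which recreates (and thereby empties) the dataDir on every run. The `ZOO_DATA_DIR` and `ZK_SERVER_ID` variable names below are illustrative, not from the original post or any specific image.

```shell
#!/bin/sh
# Sketch: set the ZooKeeper server id without zkServer-initialize.sh.
# ZOO_DATA_DIR / ZK_SERVER_ID are hypothetical env var names; the
# defaults below are illustrative.
DATA_DIR="${ZOO_DATA_DIR:-${TMPDIR:-/tmp}/zookeeper-data}"
MYID="${ZK_SERVER_ID:-1}"

mkdir -p "$DATA_DIR"
# Create myid only if it is absent; never touch existing snapshots or
# transaction logs, so data on the persistent volume survives restarts.
if [ ! -f "$DATA_DIR/myid" ]; then
  echo "$MYID" > "$DATA_DIR/myid"
fi
```

Because the script never removes anything under the data directory, restarting the pod reuses whatever state is already on the mounted volume.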

Votes: 0
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/42540048
