
The operator deletes S3 backup files (.md5 and sst_info files) during namespace deletion

Stack Overflow user
Asked on 2022-07-08 05:06:58 · 1 answer · 91 views · 0 followers · 0 votes

When I run the operator everything works fine; even restoring a backup works at first.

The problem: backups are uploaded to AWS S3 and consist of two folders (full, sst_info) and one .md5 file.

All of these files are required to restore the database. However, when the namespace is deleted from the Kubernetes cluster, the sst_info folder and the .md5 file are also deleted from the S3 bucket. As a result, the backup can no longer be restored.

Click here to see an S3 screenshot of the backup folders and files before the namespace deletion

Click here to see an S3 screenshot of the backup folders and files after the namespace deletion
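
Since the screenshots are only reachable through those links, the same before/after state can be captured from the command line. A minimal sketch, assuming the AWS CLI is configured with access to the backup bucket named in the cr.yaml below:

# List every object under the backup bucket, before and after the
# namespace deletion, to see exactly which keys disappear.
aws s3 ls s3://abc-percona-mysql-test/ --recursive

# Expected layout per the question: two folders (full, sst_info)
# plus one .md5 file; all three are needed for a restore.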

Any help would be appreciated.

Percona operator: https://github.com/percona/percona-xtradb-cluster-operator

My cr.yaml file:

apiVersion: pxc.percona.com/v1-11-0
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  finalizers:
    - delete-pxc-pods-in-order
#    - delete-proxysql-pvc
#    - delete-pxc-pvc
#  annotations:
#    percona.com/issue-vault-token: "true"
spec:
  crVersion: 1.11.0
#  secretsName: my-cluster-secrets
#  vaultSecretName: keyring-secret-vault
#  sslSecretName: my-cluster-ssl
#  sslInternalSecretName: my-cluster-ssl-internal
#  logCollectorSecretName: my-log-collector-secrets
#  initImage: percona/percona-xtradb-cluster-operator:1.11.0
#  enableCRValidationWebhook: true
#  tls:
#    SANs:
#      - pxc-1.example.com
#      - pxc-2.example.com
#      - pxc-3.example.com
#    issuerConf:
#      name: special-selfsigned-issuer
#      kind: ClusterIssuer
#      group: cert-manager.io
  allowUnsafeConfigurations: false
#  pause: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 8.0-recommended
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.27-18.1
    autoRecovery: true
#    expose:
#      enabled: true
#      type: LoadBalancer
#      trafficPolicy: Local
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      annotations:
#        networking.gke.io/load-balancer-type: "Internal"
#    replicationChannels:
#    - name: pxc1_to_pxc2
#      isSource: true
#    - name: pxc2_to_pxc1
#      isSource: false
#      configuration:
#        sourceRetryCount: 3
#        sourceConnectRetry: 60
#      sourcesList:
#      - host: 10.95.251.101
#        port: 3306
#        weight: 100
#    schedulerName: mycustom-scheduler
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    configuration: |
#      [mysqld]
#      wsrep_debug=CLIENT
#      wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
#      [sst]
#      xbstream-opts=--decompress
#      [xtrabackup]
#      compress=lz4
#      for PXC 5.7
#      [xtrabackup]
#      compress
#    imagePullSecrets:
#      - name: private-registry-credentials
#    priorityClassName: high-priority
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    readinessProbes:
#      initialDelaySeconds: 15
#      timeoutSeconds: 15
#      periodSeconds: 30
#      successThreshold: 1
#      failureThreshold: 5
#    livenessProbes:
#      initialDelaySeconds: 300
#      timeoutSeconds: 5
#      periodSeconds: 10
#      successThreshold: 1
#      failureThreshold: 3
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
#    imagePullPolicy: Always
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          #memory: 100M
#          cpu: 100m
#        limits:
#          #memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    #resources:
      #requests:
        #memory: 1G
        #cpu: 600m
#        ephemeral-storage: 1G
#      limits:
#        #memory: 1G
#        cpu: "1"
#        ephemeral-storage: 1G
#    nodeSelector:
#      disktype: ssd
   # affinity:
   #   antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
  haproxy:
    enabled: true
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
#    replicasServiceEnabled: false
#    imagePullPolicy: Always
#    schedulerName: mycustom-scheduler
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    configuration: |
#
#    the actual default configuration file can be found here https://github.com/percona/percona-docker/blob/main/haproxy/dockerdir/etc/haproxy/haproxy-global.cfg
#
#      global
#        maxconn 2048
#        external-check
#        insecure-fork-wanted
#        stats socket /etc/haproxy/pxc/haproxy.sock mode 600 expose-fd listeners level admin
#
#      defaults
#        default-server init-addr last,libc,none
#        log global
#        mode tcp
#        retries 10
#        timeout client 28800s
#        timeout connect 100500
#        timeout server 28800s
#
#      frontend galera-in
#        bind *:3309 accept-proxy
#        bind *:3306
#        mode tcp
#        option clitcpka
#        default_backend galera-nodes
#
#      frontend galera-admin-in
#        bind *:33062
#        mode tcp
#        option clitcpka
#        default_backend galera-admin-nodes
#
#      frontend galera-replica-in
#        bind *:3307
#        mode tcp
#        option clitcpka
#        default_backend galera-replica-nodes
#
#      frontend galera-mysqlx-in
#        bind *:33060
#        mode tcp
#        option clitcpka
#        default_backend galera-mysqlx-nodes
#
#      frontend stats
#        bind *:8404
#        mode http
#        option http-use-htx
#        http-request use-service prometheus-exporter if { path /metrics }
#    imagePullSecrets:
#      - name: private-registry-credentials
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    readinessProbes:
#      initialDelaySeconds: 15
#      timeoutSeconds: 1
#      periodSeconds: 5
#      successThreshold: 1
#      failureThreshold: 3
#    livenessProbes:
#      initialDelaySeconds: 60
#      timeoutSeconds: 5
#      periodSeconds: 30
#      successThreshold: 1
#      failureThreshold: 4
#    serviceType: ClusterIP
#    externalTrafficPolicy: Cluster
#    replicasServiceType: ClusterIP
#    replicasExternalTrafficPolicy: Cluster
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          #memory: 100M
#          cpu: 100m
#        limits:
#          #memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    #resources:
      #requests:
        #memory: 1G
        #cpu: 600m
#      limits:
#        #memory: 1G
#        cpu: 700m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        #memory: 1G
#        cpu: 500m
#      limits:
#        #memory: 2G
#        cpu: 600m
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#    loadBalancerSourceRanges:
#      - 10.0.0.0/8
#    serviceAnnotations:
#      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
#    serviceLabels:
#      rack: rack-23
  proxysql:
    enabled: false
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.11.0-proxysql
#    imagePullPolicy: Always
#    configuration: |
#      datadir="/var/lib/proxysql"
#
#      admin_variables =
#      {
#        admin_credentials="proxyadmin:admin_password"
#        mysql_ifaces="0.0.0.0:6032"
#        refresh_interval=2000
#
#        cluster_username="proxyadmin"
#        cluster_password="admin_password"
#        checksum_admin_variables=false
#        checksum_ldap_variables=false
#        checksum_mysql_variables=false
#        cluster_check_interval_ms=200
#        cluster_check_status_frequency=100
#        cluster_mysql_query_rules_save_to_disk=true
#        cluster_mysql_servers_save_to_disk=true
#        cluster_mysql_users_save_to_disk=true
#        cluster_proxysql_servers_save_to_disk=true
#        cluster_mysql_query_rules_diffs_before_sync=1
#        cluster_mysql_servers_diffs_before_sync=1
#        cluster_mysql_users_diffs_before_sync=1
#        cluster_proxysql_servers_diffs_before_sync=1
#      }
#
#      mysql_variables=
#      {
#        monitor_password="monitor"
#        monitor_galera_healthcheck_interval=1000
#        threads=2
#        max_connections=2048
#        default_query_delay=0
#        default_query_timeout=10000
#        poll_timeout=2000
#        interfaces="0.0.0.0:3306"
#        default_schema="information_schema"
#        stacksize=1048576
#        connect_timeout_server=10000
#        monitor_history=60000
#        monitor_connect_interval=20000
#        monitor_ping_interval=10000
#        ping_timeout_server=200
#        commands_stats=true
#        sessions_sort=true
#        have_ssl=true
#        ssl_p2s_ca="/etc/proxysql/ssl-internal/ca.crt"
#        ssl_p2s_cert="/etc/proxysql/ssl-internal/tls.crt"
#        ssl_p2s_key="/etc/proxysql/ssl-internal/tls.key"
#        ssl_p2s_cipher="ECDHE-RSA-AES128-GCM-SHA256"
#      }
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    schedulerName: mycustom-scheduler
#    imagePullSecrets:
#      - name: private-registry-credentials
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    serviceType: ClusterIP
#    externalTrafficPolicy: Cluster
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          #memory: 100M
#          cpu: 100m
#        limits:
#          #memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    #resources:
      #requests:
        #memory: 1G
        #cpu: 600m
#      limits:
#        #memory: 1G
#        cpu: 700m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        #memory: 1G
#        cpu: 500m
#      limits:
#        #memory: 2G
#        cpu: 600m
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#   loadBalancerSourceRanges:
#     - 10.0.0.0/8
#   serviceAnnotations:
#     service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
#   serviceLabels:
#     rack: rack-23
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.11.0-logcollector
#    configuration: |
#      [OUTPUT]
#           Name  es
#           Match *
#           Host  192.168.2.3
#           Port  9200
#           Index my_index
#           Type  my_type
    #resources:
      #requests:
        #memory: 100M
        #cpu: 200m
  pmm:
    enabled: false
    image: percona/pmm-client:2.28.0
    serverHost: monitoring-service
#    serverUser: admin
#    pxcParams: "--disable-tablestats-limit=2000"
#    proxysqlParams: "--custom-labels=CUSTOM-LABELS"
    #resources:
      #requests:
        #memory: 150M
        #cpu: 300m
  backup:
    image: percona/percona-xtradb-cluster-operator:1.11.0-pxc8.0-backup
    backoffLimit: 1
#    serviceAccountName: percona-xtradb-cluster-operator
#    imagePullSecrets:
#      - name: private-registry-credentials
    pitr:
      enabled: true
      storageName: s3-us-west-binlog
      timeBetweenUploads: 60
#      resources:
#        requests:
#          #memory: 0.1G
#          cpu: 100m
#        limits:
#          #memory: 1G
#          cpu: 700m
    storages:
      s3-us-west:
        type: s3
        verifyTLS: true
#        nodeSelector:
#          storage: tape
#          backupWorker: 'True'
#        resources:
#          requests:
#            #memory: 1G
#            cpu: 600m
#        affinity:
#          nodeAffinity:
#            requiredDuringSchedulingIgnoredDuringExecution:
#              nodeSelectorTerms:
#              - matchExpressions:
#                - key: backupWorker
#                  operator: In
#                  values:
#                  - 'True'
#        tolerations:
#          - key: "backupWorker"
#            operator: "Equal"
#            value: "True"
#            effect: "NoSchedule"
#        annotations:
#          testName: scheduled-backup
#        labels:
#          backupWorker: 'True'
#        schedulerName: 'default-scheduler'
#        priorityClassName: 'high-priority'
#        containerSecurityContext:
#          privileged: true
#        podSecurityContext:
#          fsGroup: 1001
#          supplementalGroups: [1001, 1002, 1003]
        s3:
          bucket: abc-percona-mysql-test
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-1
      s3-us-west-binlog:
        type: s3
        verifyTLS: true
        s3:
          bucket: abc-percona-mysql-test-binlogs
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-1
      fs-pvc:
        type: filesystem
#        nodeSelector:
#          storage: tape
#          backupWorker: 'True'
#        resources:
#          requests:
#            #memory: 1G
#            cpu: 600m
#        affinity:
#          nodeAffinity:
#            requiredDuringSchedulingIgnoredDuringExecution:
#              nodeSelectorTerms:
#              - matchExpressions:
#                - key: backupWorker
#                  operator: In
#                  values:
#                  - 'True'
#        tolerations:
#          - key: "backupWorker"
#            operator: "Equal"
#            value: "True"
#            effect: "NoSchedule"
#        annotations:
#          testName: scheduled-backup
#        labels:
#          backupWorker: 'True'
#        schedulerName: 'default-scheduler'
#        priorityClassName: 'high-priority'
#        containerSecurityContext:
#          privileged: true
#        podSecurityContext:
#          fsGroup: 1001
#          supplementalGroups: [1001, 1002, 1003]
        volume:
          persistentVolumeClaim:
#            storageClassName: standard
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 6G
    schedule:
      - name: "daily-backup-s3"
        schedule: "0/5 * * * *"
        keep: 10
        storageName: s3-us-west
 #     - name: "daily-backup"
 #       schedule: "0 0 * * *"
 #       keep: 5
 #       storageName: fs-pvc
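
For context, each run of the schedule above produces a PerconaXtraDBClusterBackup object in the namespace. A short sketch of how to list them and read the S3 destination they point to (the short name and status field are assumptions based on the operator's CRDs):

# List backup objects created by the scheduler
kubectl get pxc-backup

# Read the recorded S3 destination of one backup
# (<backup-name> is a placeholder for a name from the listing above)
kubectl get pxc-backup <backup-name> -o jsonpath='{.status.destination}'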

My restore.yaml file:

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1
  #backupName: backup1
  #backupName: cron-cluster1-s3-us-west-20226305240-scrvc
  backupSource: #
    destination: s3://osos-percona-mysql-test/cluster1-2022-07-05-11:06:00-full
    s3:
      bucket: osos-percona-mysql-test
      credentialsSecret: my-cluster-name-backup-s3
      region: us-west-1
  pitr:
    type: latest
    date: "2022-07-05 11:10:30"
    backupSource: #
      s3:
        bucket: osos-percona-mysql-test-binlogs
        credentialsSecret: my-cluster-name-backup-s3
        region: us-west-1
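
For completeness, a sketch of applying this restore spec and watching its progress; pxc-restore is the short name the operator's CRDs register for PerconaXtraDBClusterRestore:

kubectl apply -f restore.yaml

# Watch the restore object until it reports success
kubectl get pxc-restore restore1 -w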

My operator.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-xtradb-cluster-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: operator
      app.kubernetes.io/instance: percona-xtradb-cluster-operator
      app.kubernetes.io/name: percona-xtradb-cluster-operator
      app.kubernetes.io/part-of: percona-xtradb-cluster-operator
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: operator
        app.kubernetes.io/instance: percona-xtradb-cluster-operator
        app.kubernetes.io/name: percona-xtradb-cluster-operator
        app.kubernetes.io/part-of: percona-xtradb-cluster-operator
    spec:
      terminationGracePeriodSeconds: 600
      containers:
      - command:
        - percona-xtradb-cluster-operator
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: percona-xtradb-cluster-operator
        image: percona/percona-xtradb-cluster-operator:1.11.0
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /metrics
            port: metrics
            scheme: HTTP
        resources:
          limits:
            cpu: 200m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 20Mi
        name: percona-xtradb-cluster-operator
        ports:
        - containerPort: 8080
          name: metrics
          protocol: TCP
      serviceAccountName: percona-xtradb-cluster-operator

1 Answer

Stack Overflow user
Answered on 2022-07-09 12:21:51 · 0 votes

Why are you deleting the namespace? What are you trying to achieve? Are you trying to remove an existing cluster created by the operator?

In that case I would recommend deleting the cluster with the kubectl delete command, like this:

kubectl delete perconaxtradbcluster/cluster1

This ensures that, by design, all related objects are cleaned up. There is no need to delete the namespace: that deletion is invisible to the operator, which is exactly the problem you are seeing.
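
For reference, a hedged sketch of inspecting what the operator owns and deleting only its objects, so cleanup runs through the operator's finalizers instead of an implicit namespace teardown (short resource names assumed from the CRD definitions):

# Custom resources managed by the operator in the current namespace
kubectl get pxc,pxc-backup,pxc-restore

# Same as the command above, via the CRD short name
kubectl delete pxc/cluster1

# If the backup objects should go too, delete them explicitly; whether
# the S3 data is removed as well depends on the finalizers set on them
kubectl delete pxc-backup --all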

That is the point of interacting with Kubernetes through operators: no manual steps should be needed, and the operator should provide ready-made maintenance utilities driven by its own objects :)

Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/72906877