
Error while resizing a persistent volume in Kubernetes

Asked by a Stack Overflow user on 2019-10-17 10:16:39

I have a Prometheus server pod that uses an 8Gi persistent block volume. The volume provisioner is rook-ceph.

The pod is in CrashLoopBackOff state because no space is left on the volume:

[root@node4 ~]# df -h | grep rbd

/dev/rbd0   8.0G  8.0G   36K 100% /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph/mounts/pvc-80f98193-deae-11e9-a240-0025b50a01df

The pod needs more space, so I decided to resize the volume to 20Gi.

Following this documentation: https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/

I edited resources.requests.storage: 20Gi in the PersistentVolumeClaim and upgraded the Helm release.
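The PVC edit described above could equally be done with a patch; a minimal sketch, assuming the PVC is named prometheus-server in the prometheus namespace as in the listings below:

```shell
# Bump the requested storage on the PVC. This only takes effect if the
# StorageClass has allowVolumeExpansion: true (it does here, see below).
kubectl patch pvc prometheus-server -n prometheus \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```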

Now I can see that the PV has been resized to 20Gi, but the PVC still claims a capacity of 8Gi.

$ kubectl get pvc -n prometheus

NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
prometheus-alertmanager   Bound    pvc-80f5eb1a-deae-11e9-a240-0025b50a01df   2Gi        RWO            rook-ceph-block   22d
prometheus-server         Bound    pvc-80f98193-deae-11e9-a240-0025b50a01df   8Gi        RWO            rook-ceph-block   22d

$ kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS      REASON   AGE
pvc-80f5eb1a-deae-11e9-a240-0025b50a01df   2Gi        RWO            Delete           Bound    prometheus/prometheus-alertmanager   rook-ceph-block            22d
pvc-80f98193-deae-11e9-a240-0025b50a01df   20Gi       RWO            Delete           Bound    prometheus/prometheus-server         rook-ceph-block            22d
pvc-fb73b383-deb2-11e9-a240-0025b50a01df   10Gi       RWO            Delete           Bound    grafana/grafana                      rook-ceph-block            22d

The PVC description says:

Conditions:
  Type                      Status  LastProbeTime                     LastTransitionTime                Reason  Message
  ----                      ------  -----------------                 ------------------                ------  -------
  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000   Thu, 17 Oct 2019 15:49:05 +0530           Waiting for user to (re-)start a pod to finish file system resize of volume on node.

I then deleted the pod so it would restart.

But the pod is still in CrashLoopBackOff state. The pod description says:

 Warning  FailedMount  2m17s (x2 over 2m17s)  kubelet, node4     MountVolume.SetUp failed for volume "pvc-80f98193-deae-11e9-a240-0025b50a01df" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume pvc-80f98193-deae-11e9-a240-0025b50a01df for pod prometheus/prometheus-server-756c8495ff-wtx84. Volume is already attached by pod prometheus/prometheus-server-756c8495ff-hcd85. Status Running

When listing pods, I can only see the new pod prometheus-server-756c8495ff-wtx84 (and not the old pod prometheus-server-756c8495ff-hcd85):

$ kubectl get pods -n prometheus

NAME                                            READY   STATUS             RESTARTS   AGE
prometheus-alertmanager-6f756695d5-wvgr7        2/2     Running            0          22d
prometheus-kube-state-metrics-67cfbbd9d-bwx4w   1/1     Running            0          22d
prometheus-node-exporter-444bz                  1/1     Running            0          22d
prometheus-node-exporter-4hjr9                  1/1     Running            0          22d
prometheus-node-exporter-8plk7                  1/1     Running            0          22d
prometheus-node-exporter-pftf6                  1/1     Running            0          22d
prometheus-node-exporter-prndk                  1/1     Running            0          22d
prometheus-node-exporter-rchtg                  1/1     Running            0          22d
prometheus-node-exporter-xgmzs                  1/1     Running            0          22d
prometheus-pushgateway-77744d999c-5ndlm         1/1     Running            0          22d
prometheus-server-756c8495ff-wtx84              1/2     CrashLoopBackOff   5          4m31s

How can I fix this?

Edit:

The deployment strategy is:

StrategyType:           RollingUpdate
RollingUpdateStrategy:  1 max unavailable, 1 max surge
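This strategy is a plausible cause of the "Volume is already attached" error: with max surge 1, the replacement pod is created before the old pod releases the RWO volume. A sketch of switching the Deployment to Recreate, assuming the Deployment is named prometheus-server (not confirmed in the question):

```shell
# Recreate terminates the old pod before starting the new one, so the
# RWO rbd volume is detached before the new pod tries to attach it.
# rollingUpdate must be nulled out when changing the strategy type.
kubectl patch deployment prometheus-server -n prometheus \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
```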

I can see that even though kubectl get pv shows the PV with a capacity of 20Gi, the actual rbd block image in rook is still only 8Gi in size:

[root@rook-ceph-operator-775cf575c5-dfpql /]# rbd info replicated-metadata-pool/pvc-80f98193-deae-11e9-a240-0025b50a01df

rbd image 'pvc-80f98193-deae-11e9-a240-0025b50a01df':
        size 8 GiB in 2048 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 434b1922b4b40a
        data_pool: ec-data-pool
        block_name_prefix: rbd_data.1.434b1922b4b40a
        format: 2
        features: layering, data-pool
        op_features:
        flags:
        create_timestamp: Tue Sep 24 09:34:28 2019
        access_timestamp: Tue Sep 24 09:34:28 2019
        modify_timestamp: Tue Sep 24 09:34:28 2019
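Since the rbd image itself is still 8 GiB, one hedged workaround is to grow it manually from the rook operator or toolbox pod; a sketch, using the pool and image names from the rbd info output above:

```shell
# Grow the rbd image to match the 20Gi the PV reports.
# Run inside the rook operator/toolbox pod, as with the rbd info command above.
rbd resize replicated-metadata-pool/pvc-80f98193-deae-11e9-a240-0025b50a01df --size 20G
```

The filesystem on the device must then still be grown separately (see the answer below for the filesystem step).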

The StorageClass is:

$ kubectl get sc -n prometheus -o yaml

apiVersion: v1
items:
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2019-08-01T11:27:31Z"
    name: rook-ceph-block
    resourceVersion: "15172025"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/rook-ceph-block
    uid: 59e3b081-b44f-11e9-a240-0025b50a01df
  parameters:
    blockPool: replicated-metadata-pool
    clusterNamespace: rook-ceph
    dataBlockPool: ec-data-pool
    fstype: xfs
  provisioner: ceph.rook.io/block
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

1 Answer

Stack Overflow user

Answered on 2019-10-17 10:36:04

You can try to manually resize the filesystem (ext4 resize). This is an open issue: https://github.com/rook/rook/issues/3133
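A manual filesystem resize, run on the node where the volume is mapped, might look like the following sketch. Note that the StorageClass in the question sets fstype: xfs, so xfs_growfs would apply there rather than the ext4 tooling this answer mentions; the device and mount path are taken from the df output in the question:

```shell
# xfs filesystems are grown via the mount point while mounted:
xfs_growfs /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph/mounts/pvc-80f98193-deae-11e9-a240-0025b50a01df

# For an ext4 volume, as this answer suggests, grow via the block device instead:
# resize2fs /dev/rbd0
```

Either command only makes the filesystem fill the device; the underlying rbd image must already have been grown to the new size.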

Original page content provided by Stack Overflow. Original link:
https://stackoverflow.com/questions/58430198
