
Rook Ceph configuration problem

Stack Overflow user
Asked on 2021-09-25 03:09:36
1 answer · 540 views · 0 followers · Score 0

I'm running into a problem when trying to create a PVC. The provisioner doesn't seem to be able to provision the volume.

k describe pvc avl-vam-pvc-media-ceph
Name:          avl-vam-pvc-media-ceph
Namespace:     default
StorageClass:  rook-ceph-block
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                From                                                                                                        Message
  ----     ------                ----               ----                                                                                                        -------
  Normal   ExternalProvisioning  10s (x5 over 67s)  persistentvolume-controller                                                                                 waiting for a volume to be created, either by external provisioner "rook-ceph.rbd.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          5s (x8 over 67s)   rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-6799bd4cb7-sv4gz_73756eff-f42e-4d8f-8448-d5dedd94d1f2  External provisioner is provisioning volume for claim "default/avl-vam-pvc-media-ceph"
  Warning  ProvisioningFailed    5s (x8 over 67s)   rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-6799bd4cb7-sv4gz_73756eff-f42e-4d8f-8448-d5dedd94d1f2  failed to provision volume with StorageClass "rook-ceph-block": rpc error: code = InvalidArgument desc = multi node access modes are only supported on rbd `block` type volumes

Here is my PVC YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: avl-vam-pvc-media-ceph
spec:
  storageClassName: "rook-ceph-block"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

I used ./rook/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml to create my storage class. I can't figure out what's going wrong here.

Another odd thing I noticed in my Ceph cluster is that my PGs seem to be stuck undersized:

ceph health detail
HEALTH_WARN Degraded data redundancy: 33 pgs undersized
[WRN] PG_DEGRADED: Degraded data redundancy: 33 pgs undersized
    pg 1.0 is stuck undersized for 51m, current state active+undersized, last acting [1,0]
    pg 2.0 is stuck undersized for 44m, current state active+undersized, last acting [3,0]
    pg 2.1 is stuck undersized for 44m, current state active+undersized, last acting [2,5]
    pg 2.2 is stuck undersized for 44m, current state active+undersized, last acting [5,4]
    pg 2.3 is stuck undersized for 44m, current state active+undersized, last acting [5,4]
    pg 2.4 is stuck undersized for 44m, current state active+undersized, last acting [2,1]
    pg 2.5 is stuck undersized for 44m, current state active+undersized, last acting [3,4]
    pg 2.6 is stuck undersized for 44m, current state active+undersized, last acting [2,3]
    pg 2.7 is stuck undersized for 44m, current state active+undersized, last acting [3,2]
    pg 2.8 is stuck undersized for 44m, current state active+undersized, last acting [3,0]
    pg 2.9 is stuck undersized for 44m, current state active+undersized, last acting [4,1]
    pg 2.a is stuck undersized for 44m, current state active+undersized, last acting [2,3]
    pg 2.b is stuck undersized for 44m, current state active+undersized, last acting [3,4]
    pg 2.c is stuck undersized for 44m, current state active+undersized, last acting [2,3]
    pg 2.d is stuck undersized for 44m, current state active+undersized, last acting [0,1]
    pg 2.e is stuck undersized for 44m, current state active+undersized, last acting [2,3]
    pg 2.f is stuck undersized for 44m, current state active+undersized, last acting [1,0]
    pg 2.10 is stuck undersized for 44m, current state active+undersized, last acting [2,1]
    pg 2.11 is stuck undersized for 44m, current state active+undersized, last acting [3,4]
    pg 2.12 is stuck undersized for 44m, current state active+undersized, last acting [3,2]
    pg 2.13 is stuck undersized for 44m, current state active+undersized, last acting [0,5]
    pg 2.14 is stuck undersized for 44m, current state active+undersized, last acting [3,4]
    pg 2.15 is stuck undersized for 44m, current state active+undersized, last acting [4,3]
    pg 2.16 is stuck undersized for 44m, current state active+undersized, last acting [5,2]
    pg 2.17 is stuck undersized for 44m, current state active+undersized, last acting [5,2]
    pg 2.18 is stuck undersized for 44m, current state active+undersized, last acting [5,2]
    pg 2.19 is stuck undersized for 44m, current state active+undersized, last acting [0,3]
    pg 2.1a is stuck undersized for 44m, current state active+undersized, last acting [3,2]
    pg 2.1b is stuck undersized for 44m, current state active+undersized, last acting [2,5]
    pg 2.1c is stuck undersized for 44m, current state active+undersized, last acting [5,4]
    pg 2.1d is stuck undersized for 44m, current state active+undersized, last acting [3,0]
    pg 2.1e is stuck undersized for 44m, current state active+undersized, last acting [2,5]
    pg 2.1f is stuck undersized for 44m, current state active+undersized, last acting [4,3]

I do have OSDs:

 ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME                  STATUS  REWEIGHT  PRI-AFF
-1         10.47958  root default
-3          5.23979      host hostname1
 0    ssd   1.74660          osd.0                  up   1.00000  1.00000
 2    ssd   1.74660          osd.2                  up   1.00000  1.00000
 4    ssd   1.74660          osd.4                  up   1.00000  1.00000
-5          5.23979      host hostname2
 1    ssd   1.74660          osd.1                  up   1.00000  1.00000
 3    ssd   1.74660          osd.3                  up   1.00000  1.00000
 5    ssd   1.74660          osd.5                  up   1.00000  1.00000

1 Answer

Stack Overflow user

Answered on 2021-09-25 12:39:46

When using rbd you should set accessModes to ReadWriteOnce; ReadWriteMany is supported by cephfs. Also, since your replica count is 3 and your failure domain (the level at which Ceph decides to place each copy of the data) is host, you should add nodes so that you have three or more hosts to resolve the stuck PGs.
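Concretely, the fix described above can be sketched as a corrected version of the questioner's PVC (same name, storage class, and request; only `accessModes` changes):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: avl-vam-pvc-media-ceph
spec:
  storageClassName: "rook-ceph-block"
  accessModes:
    - ReadWriteOnce   # RBD with volumeMode: Filesystem is single-node only
  resources:
    requests:
      storage: 10Gi
```

If multi-node (`ReadWriteMany`) access is genuinely needed, a CephFS-backed storage class is the usual route.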

Score 1
Page content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link: https://stackoverflow.com/questions/69322849
