I created a small Ceph cluster following the quick start guide, with one exception: I used a separate disk for each OSD rather than a directory. That is, instead of
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
I issued
ceph-deploy osd prepare node2:/dev/sdb node3:/dev/sdb
ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1
In the same environment the directory approach works fine and the cluster reaches the active+clean state.
I checked that both OSDs show as up and tried to follow the troubleshooting guide, but none of the approaches described there seemed to help.
Below is the output of ceph osd tree, ceph -s, and ceph osd dump:
# id weight type name up/down reweight
-1 0 root default
-2 0 host node2
0 0 osd.0 up 1
-3 0 host node3
1 0 osd.1 up 1
cluster 5d7d7a6f-63c9-43c5-aebb-5458fd3ae43e
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {node1=10.10.10.12:6789/0}, election epoch 1, quorum 0 node1
osdmap e8: 2 osds: 2 up, 2 in
pgmap v15: 192 pgs, 3 pools, 0 bytes data, 0 objects
68476 kB used, 6055 MB / 6121 MB avail
192 active+degraded
epoch 8
fsid 5d7d7a6f-63c9-43c5-aebb-5458fd3ae43e
created 2015-04-04 21:45:58.089596
modified 2015-04-04 23:26:06.840590
flags
pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
max_osd 2
osd.0 up in weight 1 up_from 4 up_thru 4 down_at 0 last_clean_interval [0,0) 10.10.10.13:6800/1749 10.10.10.13:6801/1749 10.10.10.13:6802/1749 10.10.10.13:6803/1749 exists,up 42d5622d-8907-4991-a6b6-869190c21678
osd.1 up in weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 10.10.10.14:6800/1750 10.10.10.14:6801/1750 10.10.10.14:6802/1750 10.10.10.14:6803/1750 exists,up b0a515d3-5f24-4e69-a5b3-1e094617b5b4
Posted 2015-04-05 13:26:59
After more research, the clue turned out to be in the osd tree output: all the weights were set to 0. This appears to be a problem with the Ceph scripts, since it is 100% reproducible. Resetting the OSD weights in the CRUSH map resolves the issue. All I had to do was issue the following commands:
ceph osd crush reweight osd.0 6
ceph osd crush reweight osd.1 6
https://serverfault.com/questions/680492
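As a side note, a plausible explanation for why the weights came out as 0 (an assumption, not stated in the answer above): ceph-disk derives the initial CRUSH weight from the OSD's capacity expressed in TiB, and with the tiny ~6 GB test disks reported by ceph -s above that value rounds down far enough that the osd tree output of this Ceph release displays it as 0. A minimal sketch of the arithmetic:

```shell
# CRUSH initial weight is roughly the OSD capacity in TiB.
# With a ~6 GB disk this is well below 0.01, which the osd tree
# output above renders as a weight of 0.
disk_mb=6121   # total size reported by "ceph -s" above
weight=$(awk -v mb="$disk_mb" 'BEGIN { printf "%.5f", mb / (1024 * 1024) }')
echo "$weight" # ~0.00584 TiB
```

This would also explain why any non-zero reweight value works: with weight 0 CRUSH cannot place replicas on the OSDs, so every PG stays active+degraded; the arbitrary 6 used above (or the real size in TiB) lets the PGs peer and go active+clean.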