If the output is "HEALTH_OK": if an error log exists, send the current status via DingTalk and delete the error log.
If the output is not "HEALTH_OK": if an error log exists, an alert has already been sent, so exit without alerting again; if there is no error log, record the error message, send the current status via DingTalk, and create the error log.

if [ "$cephHealth" != HEALTH_OK ]; then
    problem=$(echo "$cephHealth" | awk '{$1=""; print $0}')
    if [ -e "$LOG" ]; then
        exit 0    # already alerted
    fi
fi
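The branching above can be sketched as a small decision helper. This is a minimal sketch, not the original script: `decide_action` and `LOG` are illustrative names, and the DingTalk webhook call is only indicated in comments.

```shell
#!/usr/bin/env bash
# Sketch of the alert-dedup logic described above (assumed helper names).
# The error log's existence marks "an alert has already been sent"; a real
# script would also post to a DingTalk webhook where the comments say "notify".

# Decide what to do from the health string and whether the error log exists.
# Prints one of: ok / recover / silent / alert
decide_action() {
    local health="$1" log_exists="$2"
    if [ "$health" = "HEALTH_OK" ]; then
        if [ "$log_exists" = "yes" ]; then
            echo recover   # was failing, now OK: notify and delete the log
        else
            echo ok        # healthy and no pending alert: nothing to do
        fi
    else
        if [ "$log_exists" = "yes" ]; then
            echo silent    # already alerted once: exit without re-alerting
        else
            echo alert     # first failure: record it, notify, create the log
        fi
    fi
}
```

In real use the two inputs would come from `ceph health` and a `[ -e "$LOG" ]` test.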
stopping w/out rebalancing. The commands are:

# ceph osd unset noout
# ceph -w

Use ceph -w to watch the cluster's recovery output. Once synchronization finishes, the cluster health should be HEALTH_OK:
ceph -s reports HEALTH_OK
ceph osd tree shows every OSD as UP

4.3 Recovery commands and what they mean
ceph -s : confirm the ceph cluster status
ceph
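The "watch until the cluster is HEALTH_OK again" step can be sketched as a polling loop. `wait_for_health_ok` and the injectable status command are my own names for testability; in real use you would pass `ceph health`.

```shell
# Sketch: poll a status command until it reports HEALTH_OK.
# The command is passed as an argument so it can be stubbed out in tests;
# in real use it would be "ceph health". POLL_INTERVAL is an assumed knob.
wait_for_health_ok() {
    local status_cmd="$1" max_tries="${2:-30}"
    local i health
    for ((i = 1; i <= max_tries; i++)); do
        health=$($status_cmd)
        if [ "$health" = "HEALTH_OK" ]; then
            echo "healthy after $i check(s)"
            return 0
        fi
        sleep "${POLL_INTERVAL:-10}"
    done
    return 1   # never became healthy within max_tries
}
```

For example, `wait_for_health_ok "ceph health" 60` would poll for up to 60 intervals before giving up.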
[root@ceph1 ceph]# ceph -s
  cluster:
    id:     fcb2fa5e-481a-4494-9a27-374048f37113
    health: HEALTH_OK
9764da52395923e0b32908d83a9f7304401fee43)
root@demo:/home/demouser# ceph -s    # the package version has now been updated
    cluster 23d6f3f9-0b86-432c-bb18-1722f73e93e0
     health HEALTH_OK
0.94.10"}
root@demo:/home/demouser# ceph -s    # back to normal after adjusting the crushmap compatibility parameter
    cluster 23d6f3f9-0b86-432c-bb18-1722f73e93e0
     health HEALTH_OK
ceph osd pool set cephfs_metadata pg_num 64
ceph osd pool set cephfs_metadata pgp_num 64

After the change, the Ceph cluster's status returns to HEALTH_OK.
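Values like the 64 above follow the usual convention of sizing pg_num as a power of two. A tiny helper (the name `next_pow2` is mine, not a Ceph command) rounds a target PG count up accordingly:

```shell
# Round n up to the nearest power of two -- the usual convention when
# picking pg_num/pgp_num values such as the 64 used above.
# (Helper name is illustrative, not part of the Ceph CLI.)
next_pow2() {
    local n="$1" p=1
    while [ "$p" -lt "$n" ]; do
        p=$((p * 2))
    done
    echo "$p"
}
```

For example, `next_pow2 50` prints 64, so a pool wanting roughly 50 PGs would be created with pg_num 64.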
demohost cephuser]# ceph -s
  cluster:
    id:     21cc0dcd-06f3-4d5d-82c2-dbd411ef0ed9
    health: HEALTH_OK
demohost-40 supdev]# ceph -s
  cluster:
    id:     21cc0dcd-06f3-4d5d-82c2-dbd411ef0ed9
    health: HEALTH_OK
[root@node1 ~]# ceph -w
  cluster:
    id:     97e5619b-a208-46aa-903b-a69cfd57cdab
    health: HEALTH_OK
  services
[root@node1 ~]# ceph -s
  cluster:
    id:     97e5619b-a208-46aa-903b-a69cfd57cdab
    health: HEALTH_OK
  services
Check the cluster status:

# docker exec b79a ceph -s
    cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
     health HEALTH_OK
     212 MB used, 598 GB / 599 GB avail
           64 active+clean

You can see that the mon and osd are both configured correctly, and the cluster status is HEALTH_OK. Final state of the cluster:

# docker exec b79a02 ceph -s
    cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
     health HEALTH_OK
At this point, wait for data recovery to finish and the cluster to return to the HEALTH_OK state before proceeding.
Remove the corresponding OSD entry from the CRUSH map, after which it will no longer receive data.
Wait for the data redistribution to finish; the whole cluster will return to the HEALTH_OK state.
Delete the OSD's authentication key: ceph auth del osd.{osd-num}
Delete the OSD.
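The removal sequence above can be sketched as a function that prints the commands for a given OSD id (the function name is mine; the three ceph subcommands are the standard ones):

```shell
# Print the removal commands for one OSD, mirroring the steps above:
# drop it from the CRUSH map, delete its auth key, then remove the OSD.
# Dry-run style: review the output first, then pipe it to sh to execute.
osd_removal_cmds() {
    local id="$1"
    echo "ceph osd crush remove osd.$id"
    echo "ceph auth del osd.$id"
    echo "ceph osd rm osd.$id"
}
```

In real use you would wait for the cluster to return to HEALTH_OK after the CRUSH removal before running the remaining two commands, as the text describes.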
node4
ceph-deploy osd create --data /dev/vdb node5
Check the OSD status:
[admin@node3 my-cluster]$ sudo ceph health
HEALTH_OK
[admin@node3 my-cluster]$ sudo ceph -s
  cluster:
    id:     3a2a06c7-124f-4703-b798-88eb2950361e
    health: HEALTH_OK
root@mon:~# ceph pg repair 9.14
instructing pg 9.14 on osd.1 to repair
4. Check whether the Ceph cluster has returned to the HEALTH_OK state:
root@mon:~# ceph -s
    cluster 614e77b4-c997-490a-a3f9-e89aa0274da3
     health HEALTH_OK
     monmap
7f628edeb700 -1 log_channel(cluster) log [ERR] : 9.14 repair 1 errors, 1 fixed
If Ceph still has not reached HEALTH_OK after the preceding steps, run:
ceph pg repair {pg_id}
6. Check again whether the Ceph cluster has returned to the HEALTH_OK state.
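Repairing each remaining inconsistent PG can be scripted by scraping `ceph health detail`, whose inconsistent-PG lines look like `pg 9.14 is active+clean+inconsistent, acting [1,2,0]` (line format assumed from that release; the function name is mine):

```shell
# Read `ceph health detail` output on stdin and print one
# `ceph pg repair` command per inconsistent PG. Assumes lines like:
#   pg 9.14 is active+clean+inconsistent, acting [1,2,0]
list_repair_cmds() {
    awk '$1 == "pg" && /inconsistent/ { print "ceph pg repair " $2 }'
}
```

Usage (dry-run first, then pipe to sh): `ceph health detail | list_repair_cmds`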
@localhost cluter]# ceph -s
  cluster:
    id:     25d59c28-01b8-435a-a28a-1215d6989376
    health: HEALTH_OK
root@devin-ceph1 ~]# ceph -s
  cluster:
    id:     7533cea8-7109-439a-81aa-6d3de31ab1cc
    health: HEALTH_OK
ceph-deploy osd create --data /dev/sdc node3
If this reports an error, remember to run it as root.
Check the OSD status:
[admin@node1 ~]$ sudo ceph health
HEALTH_OK
[admin@node1 ~]$ sudo ceph -s
  cluster:
    id:     af6bf549-45be-419c-92a4-8797c9a36ee8
    health: HEALTH_OK
~]# docker exec mon ceph -s
  cluster:
    id:     da8f7f5b-b767-4420-a510-287f4ced25de
    health: HEALTH_OK
summer133-112 in ~ ♥ 10:37 > ceph -s
    cluster 0be48747-efac-4ece-8cbe-9a5d06baccab
     health HEALTH_OK
current/3.f1_head ♥ 10:48 > ceph -s
    cluster 0be48747-efac-4ece-8cbe-9a5d06baccab
     health HEALTH_OK
6306de945a6c940439ab584aba9b622f2aa6222947d3d4cde75a4b82649a47ff
  cluster:
    id:     2ae6d05a-229a-11ec-925e-52540000fa0c
    health: HEALTH_OK
18M - 16.2.0-117.el8cp 2142b60d7974 e5447c052636
[ceph: root@clienta /]# ceph health
HEALTH_OK
root@lab8106 ceph]# ceph -s
  cluster:
    id:     49ee8a7f-fb7c-4239-a4b7-acf0bc37430d
    health: HEALTH_OK
[root@ceph1 ceph]# ceph -s
  cluster:
    id:     cde3244e-89e0-4630-84d5-bf08c0e33b24
    health: HEALTH_OK