I want to create a Ceph cluster and then connect to it through the S3 RESTful API. So I deployed a Ceph cluster (Mimic 13.2.4) on "Ubuntu 16.04.5 LTS" with 3 OSDs (one per 10 GB disk).
I used these tutorials:

1) http://docs.ceph.com/docs/mimic/start/quick-start-preflight/#ceph-deploy-setup
2) http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/

At this point, the Ceph status is OK:
root@ubuntu-srv:/home/slavik/my-cluster# ceph -s
  cluster:
    id:     d7459118-8c16-451d-9774-d09f7a926d0e
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ubuntu-srv
    mgr: ubuntu-srv(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:

3) "To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance of RGW:"
root@ubuntu-srv:/home/slavik/my-cluster# ceph-deploy rgw create ubuntu-srv
....
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host ubuntu-srv and default port 7480
root@ubuntu-srv:/home/slavik/my-cluster# ceph -s
  cluster:
    id:     d7459118-8c16-451d-9774-d09f7a926d0e
    health: HEALTH_WARN
            too few PGs per OSD (2 < min 30)

  services:
    mon: 1 daemons, quorum ubuntu-srv
    mgr: ubuntu-srv(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 8 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     37.500% pgs unknown
             62.500% pgs not active
             5 creating+peering
             3 unknown

The Ceph status changed to HEALTH_WARN. Why, and how do I fix it?
Posted on 2019-01-17 23:34:16
Your problem is:

health: HEALTH_WARN
too few PGs per OSD (2 < min 30)

Look at your current pg configuration by running:
ceph osd dump | grep pool
See what pg count each pool is configured with, then go to https://ceph.com/pgcalc/ to calculate what your pools should be configured with.
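The pgcalc rule of thumb (roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two) can be sketched for this cluster. The numbers below are this question's cluster (3 OSDs) plus an assumed default replicated size of 3; pgcalc itself lets you tune the per-OSD target and per-pool weights:

```shell
# pgcalc rule of thumb: target total PGs ~= (OSDs * 100) / replicas,
# rounded up to the next power of two.
osds=3        # this cluster has 3 OSDs
replicas=3    # assuming the default replicated pool size of 3

target=$(( osds * 100 / replicas ))   # 100

# Round up to the next power of two.
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done

echo "$pg_num"
```

This yields 128 total PGs for the cluster, to be divided among the pools in proportion to their expected data share.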
The warning says there are too few PGs per OSD: right now each OSD holds 2 PGs, while the minimum should be 30.
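Where the "2" comes from can be read off the `ceph -s` output above: the single pool that `ceph-deploy rgw create` made has 8 PGs spread over 3 OSDs. This is a rough sketch of the accounting; Ceph's exact calculation can also factor in replication, but the integer division matches the warning here:

```shell
# 1 pool, 8 PGs, 3 OSDs -> 8 / 3 = 2 PGs per OSD (integer division),
# far below the mon_pg_warn_min_per_osd default of 30.
pgs=8
osds=3
per_osd=$(( pgs / osds ))
echo "$per_osd"
```

Raising the pool's pg_num (e.g. `ceph osd pool set <pool> pg_num 64`, where `<pool>` is a placeholder to be taken from the `ceph osd dump | grep pool` output) clears the warning once the new PGs finish peering.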
https://stackoverflow.com/questions/54219547