I configured Ceph with the recommended values (using the formula from the documentation). I have 3 OSDs, and my config (which I have put on the monitor node and all 3 OSDs) includes this:
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 150
osd pool default pgp num = 150
When I run ceph status I get:
health HEALTH_WARN
too many PGs per OSD (1042 > max 300)
This is confusing for two reasons. First, because the recommended formula did not satisfy Ceph. Second, and most puzzling, because it says I have 1042 PGs per OSD, while my configuration says 150.
What am I doing wrong?
Posted on 2017-01-02 03:42:26
Before setting the PG count, you need to know 3 things.
1. Number of OSDs
ceph osd ls
Sample Output:
0
1
2
Here the total number of OSDs is three.
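You can also count them directly instead of eyeballing the list (a trivial sketch; ceph osd ls prints one id per line):
ceph osd ls | wc -l    # prints 3 on this cluster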
2. Number of pools
ceph osd pool ls or rados lspools
Sample Output:
rbd
images
vms
volumes
backups
Here the total number of pools is five.
3. Replication count
ceph osd dump | grep repli
Sample Output:
pool 0 'rbd' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 38 flags hashpspool stripe_width 0
pool 1 'images' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 40 flags hashpspool stripe_width 0
pool 2 'vms' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 42 flags hashpspool stripe_width 0
pool 3 'volumes' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 36 flags hashpspool stripe_width 0
pool 4 'backups' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 44 flags hashpspool stripe_width 0
You can see each pool has a replication count of two.
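If you want to pull the highest replication size out of that dump programmatically, something like this works (a sketch assuming GNU grep for the -P flag):
ceph osd dump | grep -oP 'replicated size \K\d+' | sort -n | tail -1    # prints 2 here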
Now let's get to the calculation.
Calculation:
Total PGs calculation:
Total PGs = (Total_number_of_OSD * 100) / max_replication_count
This result must be rounded up to the nearest power of 2.
Example:
Number of OSDs: 3
Replication count: 2
Total PGs = (3 * 100) / 2 = 150. Rounded up to the nearest power of 2, 150 becomes 256.
So the maximum recommended number of PGs is 256.
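As a sanity check, here is a minimal bash sketch of the same arithmetic; the OSD and replication counts come from the example above, and the round-up loop is my own illustration:
osds=3    # from: ceph osd ls
repl=2    # from: ceph osd dump | grep repli
raw=$(( osds * 100 / repl ))    # 150
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done    # round up to a power of 2
echo "recommended max total PGs: $pg"    # prints 256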
You can set the PG count for each pool.
Total PGs per pool calculation:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool count
This result must be rounded up to the nearest power of 2.
Example:
Number of OSDs: 3
Replication count: 2
Number of pools: 5
Total PGs = ((3 * 100) / 2) / 5 = 150 / 5 = 30. Rounded up to the nearest power of 2, 30 becomes 32.
So the total number of PGs per pool is 32.
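A sketch of applying that per-pool value with the commands listed under "Useful commands" below; the pool names are the ones from the sample output, and note that before Nautilus pg_num can only be increased, so this only helps pools still below the target:
for pool in rbd images vms volumes backups; do
    ceph osd pool set "$pool" pg_num 32
    ceph osd pool set "$pool" pgp_num 32
done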
Power-of-2 table:
2^0 1
2^1 2
2^2 4
2^3 8
2^4 16
2^5 32
2^6 64
2^7 128
2^8 256
2^9 512
2^10 1024
Useful commands
ceph osd pool create <pool-name> <pg-number> <pgp-number> - to create a new pool
ceph osd pool get <pool-name> pg_num - to get the number of PGs in a pool
ceph osd pool get <pool-name> pgp_num - to get the number of PGPs in a pool
ceph osd pool set <pool-name> pg_num <number> - to increase the number of PGs in a pool
ceph osd pool set <pool-name> pgp_num <number> - to increase the number of PGPs in a pool
* Usually the pg and pgp numbers are the same.
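For example, to grow the rbd pool from the sample output above (128 is an arbitrary target picked for illustration):
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128    # keep pgp_num in step with pg_num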
Posted on 2018-04-04 05:36:01
How I fixed it on 12.2.4 Luminous:
Too many PGs per OSD (380 > max 200) may lead to many blocked requests.
First, you need to set:
[global]
mon_max_pg_per_osd = 800  # < depends on your amount of PGs
osd max pg per osd hard ratio = 10  # < default is 2; try to set at least 5. The hard limit is mon_max_pg_per_osd * this ratio.
mon allow pool delete = true  # without it you can't remove a pool
Then restart all MONs and OSDs, one by one.
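If you want the new limits picked up without waiting for the rolling restart, injecting them at runtime is an option (a sketch; injected values do not survive a restart, so keep them in ceph.conf as above):
ceph tell mon.\* injectargs '--mon_max_pg_per_osd=800'
ceph tell osd.\* injectargs '--osd_max_pg_per_osd_hard_ratio=10'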
Check the values:
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok config get mon_max_pg_per_osd
ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config get osd_max_pg_per_osd_hard_ratio
Now look here:
rados lspools
ceph osd pool get .users.email pg_num
In my case the default pg_num was around 128 (my cluster is 4 years old and has been through many upgrades and many changes). You can reduce it like this.
Caution:
ceph osd pool create .users.email.new 8
rados cppool .users.email .users.email.new
ceph osd pool delete .users.email .users.email --yes-i-really-really-mean-it
ceph osd pool rename .users.email.new .users.email
ceph osd pool application enable .users.email rgw
If that is not enough, try to find another pool you can cut down.
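To confirm the shrink took effect (pool name as in the example above):
ceph osd pool get .users.email pg_num    # should now report 8
ceph status    # the per-OSD PG warning should shrink or clear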
https://stackoverflow.com/questions/40771273