
Ceph CRUSH map replication

Stack Overflow user
Asked on 2015-07-09 22:20:11
1 answer · 968 views · 0 followers · score 1

I'm still somewhat confused about how the Ceph CRUSH map works and am hoping someone can shed some light. Here is my osd tree:

core@store101 ~ $ ceph osd tree
ID  WEIGHT  TYPE NAME                                UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 -1 6.00000 root default                                                               
 -2 3.00000     datacenter dc1                                                         
 -4 3.00000         rack rack_dc1                                                      
-10 1.00000             host store101                                   
  4 1.00000                 osd.4                         up  1.00000          1.00000 
 -7 1.00000             host store102                                   
  1 1.00000                 osd.1                         up  1.00000          1.00000 
 -9 1.00000             host store103                                   
  3 1.00000                 osd.3                         up  1.00000          1.00000 
 -3 3.00000     datacenter dc2                                                         
 -5 3.00000         rack rack_dc2                                                      
 -6 1.00000             host store104                                   
  0 1.00000                 osd.0                         up  1.00000          1.00000 
 -8 1.00000             host store105                                   
  2 1.00000                 osd.2                         up  1.00000          1.00000 
-11 1.00000             host store106                                   
  5 1.00000                 osd.5                         up  1.00000          1.00000 

I simply want to make sure that, with a replication size of 2 or more, no two copies of an object end up in the same datacenter. My rule (taken from the internet) is:

rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step choose firstn 2 type rack
        step chooseleaf firstn 0 type host
        step emit
}
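A rule like this can be checked offline before touching the live cluster by running it through `crushtool --test` against the compiled map. A minimal sketch, assuming the map is first pulled from the cluster (the file name is just a placeholder) and that `--rule 0` matches the `ruleset 0` above:

```shell
# Pull the cluster's compiled CRUSH map, then simulate the rule
# for a pool of size 2 and print the resulting OSD mappings.
ceph osd getcrushmap -o crushmap.bin
crushtool --test -i crushmap.bin --rule 0 --num-rep 2 --show-mappings
```

If the mappings printed here already pair OSDs from the same datacenter, the rule itself is at fault rather than the pool configuration.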

However, when I dump the placement groups I immediately see PGs mapped to two OSDs from the same datacenter, osd.5 and osd.0:

core@store101 ~ $ ceph pg dump | grep 5,0
1.73    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.939197  0'0 96:113  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854945  0'0 2015-07-09 12:05:01.854945
1.70    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.947403  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854941  0'0 2015-07-09 12:05:01.854941
1.6f    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.947056  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854940  0'0 2015-07-09 12:05:01.854940
1.6c    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.938591  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854939  0'0 2015-07-09 12:05:01.854939
1.66    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.937803  0'0 96:107  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854936  0'0 2015-07-09 12:05:01.854936
1.67    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.929323  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854937  0'0 2015-07-09 12:05:01.854937
1.65    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.928200  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854936  0'0 2015-07-09 12:05:01.854936
1.63    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.927642  0'0 96:107  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854935  0'0 2015-07-09 12:05:01.854935
1.3f    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.924738  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854920  0'0 2015-07-09 12:05:01.854920
1.36    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.917833  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854916  0'0 2015-07-09 12:05:01.854916
1.33    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.911484  0'0 96:104  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854915  0'0 2015-07-09 12:05:01.854915
1.2b    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.878280  0'0 96:58   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854911  0'0 2015-07-09 12:05:01.854911
1.5 0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.942620  0'0 96:98   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854892  0'0 2015-07-09 12:05:01.854892
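The dump above can also be checked mechanically. This is a small sketch that flags acting sets confined to one datacenter, assuming the osd-to-datacenter assignment from the osd tree above; the `printf` lines stand in for live `ceph pg dump` output:

```shell
# osd -> datacenter table is copied from the `ceph osd tree` output above;
# the printf lines stand in for real `ceph pg dump` output.
printf '1.73 [5,0]\n1.10 [4,2]\n' | awk '
BEGIN { dc[0]=2; dc[1]=1; dc[2]=2; dc[3]=1; dc[4]=1; dc[5]=2 }
{
    if (match($0, /\[[0-9,]+\]/)) {
        set = substr($0, RSTART + 1, RLENGTH - 2)   # e.g. "5,0"
        n = split(set, osds, ",")
        split("", seen)                             # distinct datacenters hit
        for (i = 1; i <= n; i++) seen[dc[osds[i]]] = 1
        k = 0; for (d in seen) k++
        if (k == 1) print $1, "[" set "]", "single-dc"
    }
}'
# prints: 1.73 [5,0] single-dc
```

Piping the real dump through the same awk script lists every PG that violates the intended placement.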

How can I make sure that at least one replica lands in the other datacenter?


1 Answer

Stack Overflow user

Answered on 2015-12-01 01:26:01

I changed my ceph crush map yesterday:

ID  WEIGHT    TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 181.99979 root default
-12  90.99989     rack rack1
 -2  15.46999         host ceph0
  1   3.64000             osd.1       up  1.00000          1.00000
  0   3.64000             osd.0       up  1.00000          1.00000
  8   2.73000             osd.8       up  1.00000          1.00000
  9   2.73000             osd.9       up  1.00000          1.00000
 19   2.73000             osd.19      up  1.00000          1.00000
...
-13  90.99989     rack rack2
 -3  15.46999         host ceph2
  2   3.64000             osd.2       up  1.00000          1.00000
  3   3.64000             osd.3       up  1.00000          1.00000
 10   2.73000             osd.10      up  1.00000          1.00000
 11   2.73000             osd.11      up  1.00000          1.00000
 18   2.73000             osd.18      up  1.00000          1.00000
...

The relevant sections of the decompiled map:

rack rack1 {
        id -12  # do not change unnecessarily
        # weight 91.000
        alg straw
        hash 0  # rjenkins1
        item ceph0 weight 15.470
        ...
}
rack rack2 {
        id -13  # do not change unnecessarily
        # weight 91.000
        alg straw
        hash 0  # rjenkins1
        item ceph2 weight 15.470
        ...
}
root default {
        id -1   # do not change unnecessarily
        # weight 182.000
        alg straw
        hash 0  # rjenkins1
        item rack1 weight 91.000
        item rack2 weight 91.000
}
rule racky {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}

Please show your `root default` section.

Then try this:

rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type datacenter
        step emit
}
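Applying a rule change like this follows the usual pull, decompile, edit, recompile, inject cycle, sketched here with placeholder file names and the default `rbd` pool as an example:

```shell
# Placeholder file and pool names; adjust to your cluster.
ceph osd getcrushmap -o crushmap.bin        # pull the current map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
# ... edit crushmap.txt: add or replace the rule shown above ...
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject the new map
ceph osd pool set rbd crush_ruleset 0       # point the pool at ruleset 0
```

Note that `crush_ruleset` was the pool setting name in Ceph releases of this era; injecting the new map triggers rebalancing as PGs remap to satisfy the new rule.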

Score 0
Original content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/31320280
