In 3.15 seconds, 1018 MB of data was read (physical reads), a rate of about 323.27 MB/sec.

# A quick test of disk write speed with dd + time
root in summer in ceph/osd/ceph-2 ➜ time dd if=/dev/zero of=/var/lib/ceph/osd/ceph-2/xsw4 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.102372 s, 1.0 GB/s

real    0m0.148s
user    0m0.000s
sys     0m0.081s

root in summer in ceph/osd/ceph-2 ➜ du -sh *
100M    xsw4

# A quick test of disk read speed
root in summer in ceph/osd/ceph-2 ➜ time dd if=/var/lib/ceph/osd/ceph-2/xsw4 of=/dev/null bs=1M
100+0 records in
100+0 records out
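Note that 1.0 GB/s above is mostly the page cache, not the disk: dd returned before the data was flushed. A minimal sketch of a more honest measurement, reusing the same test file, is to bypass or drop the cache first:

# Write through the cache with O_DIRECT (assumes the filesystem supports it)
time dd if=/dev/zero of=/var/lib/ceph/osd/ceph-2/xsw4 bs=1M count=100 oflag=direct
# Drop cached pages so the read test really hits the disk
sync && echo 3 > /proc/sys/vm/drop_caches
time dd if=/var/lib/ceph/osd/ceph-2/xsw4 of=/dev/null bs=1M iflag=direct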
[root@ceph-1 ~]# mkdir cluster
[root@ceph-1 ~]# cd cluster/
[root@ceph-1 cluster]# ceph-deploy new ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy new ceph-1 ceph-2 ceph-3
ceph-2's password:
[ceph-2][DEBUG ] connected to host: ceph-2
...
... ceph-2:/dev/sdc ceph-2:/dev/sdd ceph-3:/dev/sdb ceph-3:/dev/sdc ceph-3:/dev/sdd --zap-disk
ceph-deploy --overwrite-conf osd activate ceph-1:/dev/sdb1 ceph-1:/dev/sdc1 ceph-1:/dev/sdd1 ceph-2:/dev/sdb1 ceph-2:...
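For orientation, the full ceph-deploy 1.5.x sequence on the admin node usually looks like the sketch below; the host names match this cluster, while the example device /dev/sdb is an assumption:

# Typical ceph-deploy flow (sketch)
ceph-deploy new ceph-1 ceph-2 ceph-3       # generate ceph.conf and the initial mon map
ceph-deploy install ceph-1 ceph-2 ceph-3   # install ceph packages on every node
ceph-deploy mon create-initial             # bootstrap the monitors and gather keys
ceph-deploy osd prepare ceph-1:/dev/sdb --zap-disk   # partition and prepare one OSD disk
ceph-deploy osd activate ceph-1:/dev/sdb1            # activate the prepared data partition
ceph-deploy admin ceph-1 ceph-2 ceph-3     # push ceph.conf and the admin keyring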
8.8.8.8"; nmcli con up ens18" 配置基础环境 # 配置主机名 hostnamectl set-hostname ceph-1 hostnamectl set-hostname ceph localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.1.25 ceph-1 192.168.1.26 ceph [root@ceph-1 ~]# [root@ceph-1 ~]# ceph orch apply mgr --placement="3 ceph-1 ceph-2 ceph-3" Scheduled -2;ceph-3;count:3 mon 3/3 4m ago 118s ceph-1;ceph-2;ceph-3;count: -2:/dev/sdb Created osd(s) 1 on host 'ceph-2' [root@ceph-1 ~]# ceph orch daemon add osd ceph-3:/dev/sdb
... a host running CentOS 7, used as the NTP server and as a local ceph package repository. The details and cluster layout are as follows:

Host    IP              Role
ceph-1  192.168.56.100  deploy, mon1, osd3
ceph-2  ...

On the three nodes ceph-1/ceph-2/ceph-3: edit /etc/ntp.conf, comment out the four default server lines, and add a single server line pointing at ceph-admin (a sketch follows this step). Then restart the ntp service and check that each client connects to the server correctly.

... ceph-2:/dev/sdb:/dev/sde1 ceph-2:/dev/sdc:/dev/sde2 ceph-2:/dev/sdd:/dev/sde3 ceph-3:/dev/sdb:/dev/sde1 ceph-3:...
ceph-deploy --overwrite-conf osd activate ceph-1:/dev/sdb1 ceph-1:/dev/sdc1 ceph-1:/dev/sdd1 ceph-2:/dev/sdb1 ceph-2:/dev/sdc1 ceph-2:/dev/sdd1 ceph-3:/dev/sdb1 ceph-3:/dev/sdc1 ceph-3:/dev/sdd1

I ran into a small problem during deployment.
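The /etc/ntp.conf change described above would look like this sketch (the commented pool entries are the stock CentOS defaults, an assumption here):

# /etc/ntp.conf on ceph-1/ceph-2/ceph-3 (sketch)
#server 0.centos.pool.ntp.org iburst   # the four default server lines, commented out
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph-admin iburst

systemctl restart ntpd
ntpq -p    # the remote column should show ceph-admin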
Deploying Ceph (hands-on)

My environment:

Hostname  IP              NIC mode  Memory  System disk  Data disk
ceph-1    192.168.200.43  NAT       2G      100G         20G
ceph-2    192.168.200.44  ...

[root@localhost ~]# hostnamectl set-hostname ceph-1
[root@localhost ~]# bash
[root@ceph-1 ~]#
[root@localhost ~]# hostnamectl set-hostname ceph-2
...
[root@ceph-1 ~]# ssh-keygen
[root@ceph-1 ~]# ssh-copy-id ceph-1
[root@ceph-1 ~]# ssh-copy-id ceph-2
...
[root@ceph-1 ~]# ssh-copy-id ceph-client
[root@ceph-1 ~]# for i in 1 2 3 client; do ssh ceph-$i hostname; done
ceph-1
ceph-2
...

$ ceph-deploy admin ceph-1 ceph-2 ceph-3
# Add read permission on the keyring on every node
$ chmod +r /etc/ceph/ceph.client.admin.keyring
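At this point every node should be able to talk to the cluster; a quick sketch to verify that the conf and admin keyring really landed on each host:

# Check the pushed files, then run a status query from another node (sketch)
for i in 1 2 3; do ssh ceph-$i "ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring"; done
ssh ceph-2 ceph -s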
... three servers for the cluster. By default Ceph keeps three replicas of each object, so data is unlikely to be lost. The server IP addresses are as follows (PS: the systems here run CentOS 7):

192.168.3.101 ceph-1
192.168.3.102 ceph-2
...

  cluster:
    id:     da8f7f5b-b767-4420-a510-287f4ced25de
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3
...

  cluster:
    id:     da8f7f5b-b767-4420-a510-287f4ced25de
    health: HEALTH_WARN
            no active mgr
  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3
...

  cluster:
    id:     da8f7f5b-b767-4420-a510-287f4ced25de
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3
...
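The HEALTH_WARN above appears on Luminous and later when the monitors are up but no manager daemon is running; deploying mgr daemons clears it. A sketch, assuming a ceph-deploy recent enough to know the mgr subcommand (1.5.38+):

# Create a mgr on each node to clear "no active mgr" (sketch)
ceph-deploy mgr create ceph-1 ceph-2 ceph-3
ceph -s    # health should return to HEALTH_OK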
... type xfs (rw,noatime,nodiratime,attr2,inode64,logbsize=256k,noquota)
/dev/sdb4 on /var/lib/ceph/osd/ceph-...
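These are typical mount options for a FileStore OSD data partition; mounting one by hand with the same flags would look like this sketch (the device and OSD id are assumptions):

# Manual mount of an OSD data partition (sketch)
mount -t xfs -o rw,noatime,nodiratime,attr2,inode64,logbsize=256k,noquota /dev/sdb4 /var/lib/ceph/osd/ceph-2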
[root@centos7 ceph]# cd osd/ceph-2/current/
[root@centos7 ceph]# cd 5.1c_head/

In this example we simply follow the trail: enter osd/ceph-2/current, and since the PG is 5.1c, step into the 5.1c directory, where we find the file rbd\udirectory__head_30A98C1C__5. The naming rule is straightforward: {object_name...
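Rather than walking the OSD filesystem, the same mapping can be asked of the cluster directly; a sketch, where the pool name rbd and object name rbd_directory are read off the file name above (the \u in the on-disk name escapes an underscore):

# Ask which PG and OSDs hold an object (sketch)
ceph osd map rbd rbd_directory
# osdmap e... pool 'rbd' (5) object 'rbd_directory' -> pg 5.30a98c1c (5.1c) -> up [...] acting [...]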
... fsid. Then add the auth entry and the CRUSH map entry, and restart the OSD:

ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-...
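The CRUSH and restart half of that step is cut off above; under the standard procedure it would look like this sketch (the weight 1.0 and the host bucket name are assumptions):

# Place the OSD in the CRUSH map under its host, then restart it (sketch)
ceph osd crush add osd.2 1.0 host=ceph-2
systemctl restart ceph-osd@2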
...                          1.00000  1.00000
 2  1.98999      osd.2   up  1.00000  1.00000
-4  5.96997  host ceph-...

ceph pg dump_stuck | egrep ^0.44
0.44  active+undersized+degraded  [0,7]  0  [0,7]  0

Here we go over to ceph-...
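A PG stuck in active+undersized+degraded has fewer replicas than the pool wants; querying it shows the acting set and what recovery is waiting on. A sketch:

# Inspect the stuck PG's state and peering info (sketch)
ceph pg 0.44 query | head -40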
...         275G  833M  274G  1%  /var/lib/ceph/osd/ceph-1
/dev/sdd1   275G  833M  274G  1%  /var/lib/ceph/osd/ceph-...
| ... | BUILD | spawning | NOSTATE | private=172.16.1.49 |
| aa666bd9-e370-4c53-8af3-f1bf7ba77900 | ceph...
[ceph3][INFO  ] ...: /usr/sbin/ceph-disk list
[ceph3][INFO  ] ----------------------------------------
[ceph3][INFO  ] ceph-...
[ceph3][INFO  ] ----------------------------------------
[ceph3][INFO  ] Path  /var/lib/ceph/osd/ceph-...
... one replica is still running, then such a RULE takes the form:

take(root)       ============>  [default]          # note: this is the name of the root node
choose(3, host)  ========>      [ceph-1, ceph-...

type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-... {
        # do not change unnecessarily
        # weight 17.910
        alg straw
        hash 0  # rjenkins1
        item ceph-...
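To inspect or edit these sections yourself, the CRUSH map can be round-tripped through crushtool; a sketch of the standard workflow:

# Export, decompile, edit, recompile, and inject the CRUSH map (sketch)
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
vi crush.txt                    # adjust buckets or rules
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new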
vdb       252:16   0   20G  0 disk
├─vdb1    252:17   0   15G  0 part /var/lib/ceph/osd/ceph-...