I am trying to set up a RAID1 mirror on an already deployed system (CentOS 6.4). I have two disks: /dev/sda, the source disk holding the OS, and /dev/sdb for the mirror. The disks are attached as virtual disks from VMware ESXi and have identical size and thick/thin provisioning.
I am following this tutorial: http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-lvm-system-incl-grub-configuration-centos-5.3
/dev/sda size and partitions:
Disk /dev/sda: 96.6 GB, 96636764160 bytes
255 heads, 63 sectors/track, 11748 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029e34
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 11749 94166016 8e Linux LVM
/dev/sdb is completely empty, so after copying the partition table with
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
and changing the type of partitions /dev/sdb1 and /dev/sdb2 to Linux raid autodetect,
the /dev/sdb size and partitions look like this:
Disk /dev/sdb: 96.6 GB, 96636764160 bytes
255 heads, 63 sectors/track, 11748 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c1935
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 26 204800 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2 26 11749 94166016 fd Linux raid autodetect
To make sure there were no remnants of any previous array, I zeroed the superblocks:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
Creating the arrays: I then created the arrays with the following commands:
[root@testmachine test]# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
and
[root@testmachine test]# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
The arrays are visible:
[root@testmachine test]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1]
94100352 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sdb1[1]
204608 blocks super 1.2 [2/1] [_U]
unused devices: <none>
The problem: the /dev/md2 array has 94100352 blocks, but /dev/sda2 is slightly larger (by about 50 MB?), so when I create a physical volume with "pvcreate /dev/md2" the volume sizes differ and "pvmove" cannot be used; as a result I cannot finish mirroring the LVM volumes.
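As a sanity check on those numbers, the gap can be computed directly (a sketch; the figures below are taken from the fdisk and /proc/mdstat output above, all in 1 KiB blocks):

```shell
# Partition sizes (fdisk) vs. array sizes (/proc/mdstat), in 1 KiB blocks.
sda1_kib=204800;   md1_kib=204608
sda2_kib=94166016; md2_kib=94100352
# Each md device is smaller than the partition underneath it.
echo "md1 gap: $((sda1_kib - md1_kib)) KiB"
echo "md2 gap: $((sda2_kib - md2_kib)) KiB ($(( (sda2_kib - md2_kib) / 1024 )) MiB)"
```

The small array loses 192 KiB, while the large one loses 65664 KiB (about 64 MiB), which matches the ~0.06 GiB gap visible in the pvdisplay output.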
--- Physical volume ---
PV Name /dev/sda2
VG Name vg_testmachine
PV Size 89.80 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 22989
Free PE 0
Allocated PE 22989
PV UUID KSqdKU-9ckP-gZ1r-JwYo-QPSE-RFrZ-lAfRBi
"/dev/md2" is a new physical volume of "89.74 GiB"
--- NEW Physical volume ---
PV Name /dev/md2
VG Name
PV Size 89.74 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID LqNUb7-5zsr-kZ7T-L96R-xKjD-OReg-k6BqDV
(note the difference in size)
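The pvdisplay figures can be cross-checked with a little arithmetic (a sketch; all numbers come from the output above):

```shell
# 22989 physical extents of 4 MiB each should account for the
# 89.80 GiB reported for /dev/sda2; /dev/md2 reports 89.74 GiB.
awk 'BEGIN {
    pe_mib = 4; total_pe = 22989
    sda2_gib = pe_mib * total_pe / 1024
    printf "PV size of /dev/sda2: %.2f GiB\n", sda2_gib
    printf "gap vs /dev/md2:      %.2f GiB\n", sda2_gib - 89.74
}'
```

The ~0.06 GiB gap is the same space that went missing between the partition and the md device.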
I am no expert when it comes to disk blocks and the like. Does anyone know what might be causing this?
Posted on 2013-07-17 12:58:29
The reason /dev/md2 is smaller than /dev/sda2 is that there is a RAID superblock at the beginning of partition /dev/sdb2. The superblock contains a unique identifier and information about the other disks/partitions that make up the array, so the Linux kernel can assemble the array automatically at boot, even if you change the order of the disks or copy the contents to a completely new disk. It is a small overhead that buys you a lot of flexibility.
Of course, it prevents you from simply mirroring /dev/sda2 onto /dev/sdb2, since the sizes differ. If you continue with the linked article, you have to create filesystems in the (degraded) RAID arrays, copy the files over, change the boot loader to boot from /dev/md1 and mount /dev/md2, and only then can you add /dev/sda* as the second disk to the RAID setup. It is doable, but not for the faint of heart. Backing up and reinstalling with RAID from the start may well be faster, safer, and easier.
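To put a number on that "small overhead" (a sketch using the block counts quoted in the question):

```shell
# Space lost to md metadata on /dev/sdb2, as a fraction of the partition.
awk 'BEGIN {
    part_kib  = 94166016   # /dev/sdb2 size per fdisk
    array_kib = 94100352   # /dev/md2 size per /proc/mdstat
    printf "overhead: %d KiB, %.3f%% of the partition\n",
           part_kib - array_kib, (part_kib - array_kib) / part_kib * 100
}'
```

So the flexibility of v1.2 metadata costs well under a tenth of a percent of the disk; the real cost here is the awkward migration, not the space.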
https://serverfault.com/questions/524008