
mdadm RAID doesn't mount

Unix & Linux user
Asked on 2014-08-02 18:05:52
1 answer · 66.9K views · 0 followers · 4 votes

I have raid arrays defined in /etc/mdadm.conf as follows:

ARRAY /dev/md0 devices=/dev/sdb6,/dev/sdc6
ARRAY /dev/md1 devices=/dev/sdb7,/dev/sdc7
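Side note: identifying arrays by member device names is fragile, because device names can change between boots. mdadm.conf can also key arrays on their UUID, which is stable; a sketch using the UUIDs that show up later in Edit 5:

ARRAY /dev/md0 UUID=91e560f1:4e51d8eb:cd707cc0:bc3f8165
ARRAY /dev/md1 UUID=0abe503f:401d8d09:cd707cc0:bc3f8165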

But when I try to mount them, this is what I get:

# mount /dev/md0 /mnt/media/
mount: special device /dev/md0 does not exist
# mount /dev/md1 /mnt/data
mount: special device /dev/md1 does not exist

Meanwhile, /proc/mdstat says:

# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md125 : inactive dm-6[0](S)
      238340224 blocks

md126 : inactive dm-5[0](S)
      244139648 blocks

md127 : inactive dm-3[0](S)
      390628416 blocks

unused devices: <none>

So I tried:

# mount /dev/md126 /mnt/data
mount: /dev/md126: can't read superblock
# mount /dev/md125 /mnt/media
mount: /dev/md125: can't read superblock

The filesystem in question is ext3, and when I specify the fs with -t I get:

mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

How can I mount the raid arrays? This worked before.

Edit 1

# mdadm --detail --scan
mdadm: cannot open /dev/md/127_0: No such file or directory
mdadm: cannot open /dev/md/0_0: No such file or directory
mdadm: cannot open /dev/md/1_0: No such file or directory

Edit 2

# dmsetup ls
isw_cabciecjfi_Raid7    (252:6)
isw_cabciecjfi_Raid6    (252:5)
isw_cabciecjfi_Raid5    (252:4)
isw_cabciecjfi_Raid3    (252:3)
isw_cabciecjfi_Raid2    (252:2)
isw_cabciecjfi_Raid1    (252:1)
isw_cabciecjfi_Raid     (252:0)
# dmsetup table
isw_cabciecjfi_Raid7: 0 476680617 linear 252:0 1464854958
isw_cabciecjfi_Raid6: 0 488279484 linear 252:0 976575411
isw_cabciecjfi_Raid5: 0 11968362 linear 252:0 1941535638
isw_cabciecjfi_Raid3: 0 781257015 linear 252:0 195318270
isw_cabciecjfi_Raid2: 0 976928715 linear 252:0 976575285
isw_cabciecjfi_Raid1: 0 195318207 linear 252:0 63
isw_cabciecjfi_Raid: 0 1953519616 mirror core 2 131072 nosync 2 8:32 0 8:16 0 1 handle_errors
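For context, the isw_* names are device-mapper targets that dmraid builds from Intel Software RAID (ISW) firmware metadata, which means the raw disks are claimed by dmraid rather than exposed as plain /dev/sdb*/sdc* partitions. If the dmraid tool is installed, it can list what it sees (a sketch, not from the original transcript):

# dmraid -s    # show discovered firmware RAID sets and their status
# dmraid -r    # list the raw disks that are members of each set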

Edit 3

# file -s -L /dev/mapper/*
/dev/mapper/control:              ERROR: cannot read `/dev/mapper/control' (Invalid argument)
/dev/mapper/isw_cabciecjfi_Raid:  x86 boot sector
/dev/mapper/isw_cabciecjfi_Raid1: Linux rev 1.0 ext4 filesystem data, UUID=a8d48d53-fd68-40d8-8dd5-3cecabad6e7a (needs journal recovery) (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid3: Linux rev 1.0 ext4 filesystem data, UUID=3cb24366-b9c8-4e68-ad7b-22449668f047 (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid5: Linux/i386 swap file (new style), version 1 (4K pages), size 1496044 pages, no label, UUID=f07e031f-368a-443e-a21c-77fa27adf795
/dev/mapper/isw_cabciecjfi_Raid6: Linux rev 1.0 ext3 filesystem data, UUID=0f0b401a-f238-4b20-9b2a-79cba56dd9d0 (large files)
/dev/mapper/isw_cabciecjfi_Raid7: Linux rev 1.0 ext3 filesystem data, UUID=b2d66029-eeb9-4e4a-952c-0a3bd0696159 (large files)
# 

Also, given that /dev/mapper/isw_cabciecjfi_Raid shows up as an additional disk in my system, I tried to mount one of its partitions, but got:

# mount /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
mount: unknown filesystem type 'linux_raid_member'
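The 'linux_raid_member' type means mount (via libblkid) found an md raid superblock on the device, so it refuses to mount it directly; the array has to be assembled first. This does not contradict Edit 3: md version 0.90 metadata lives near the end of the device, while file -s reads the start, so file still sees the ext3 superblock. A way to double-check the detected signature (blkid is an assumption here; it is not in the original transcript):

# blkid /dev/mapper/isw_cabciecjfi_Raid6    # expected to report TYPE="linux_raid_member"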

I rebooted and verified that RAID is switched on in my BIOS.

I tried to force a mount, which seems to succeed, but the content of the partition is inaccessible, so it still doesn't work as expected:

# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
# ls -l /mnt/media/
total 0
# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid /mnt/data
# ls -l /mnt/data
total 0
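For reference, in util-linux mount the -f flag means "fake", not "force": it does everything except the actual mount(2) system call, so the empty listings above are exactly what -f produces. A real mount attempt with a forced filesystem type would simply drop the -f:

# mount -t ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media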

Edit 4

After executing the suggested command, all I get is:

$ sudo mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory

Edit 5

I now have /dev/md127 mounted, but /dev/md0 and /dev/md1 remain inaccessible:

# mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory



root@regDesktopHome:~# mdadm --stop /dev/md12[567]
mdadm: stopped /dev/md127
root@regDesktopHome:~# mdadm --assemble --scan
mdadm: /dev/md127 has been started with 1 drive (out of 2).
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
      238340224 blocks

md0 : inactive dm-5[0](S)
      244139648 blocks

unused devices: <none>
root@regDesktopHome:~# ls -l /dev/mapper
total 0
crw------- 1 root root  10, 236 Aug 13 22:43 control
brw-rw---- 1 root disk 252,   0 Aug 13 22:43 isw_cabciecjfi_Raid
brw------- 1 root root 252,   1 Aug 13 22:43 isw_cabciecjfi_Raid1
brw------- 1 root root 252,   2 Aug 13 22:43 isw_cabciecjfi_Raid2
brw------- 1 root root 252,   3 Aug 13 22:43 isw_cabciecjfi_Raid3
brw------- 1 root root 252,   4 Aug 13 22:43 isw_cabciecjfi_Raid5
brw------- 1 root root 252,   5 Aug 13 22:43 isw_cabciecjfi_Raid6
brw------- 1 root root 252,   6 Aug 13 22:43 isw_cabciecjfi_Raid7
root@regDesktopHome:~# mdadm --examine
mdadm: No devices to examine
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
      238340224 blocks

md0 : inactive dm-5[0](S)
      244139648 blocks

unused devices: <none>
root@regDesktopHome:~# mdadm --examine /dev/dm-[356]
/dev/dm-3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 124cd4a5:2965955f:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 18:50:36 2009
     Raid Level : raid1
  Used Dev Size : 390628416 (372.53 GiB 400.00 GB)
     Array Size : 390628416 (372.53 GiB 400.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127

    Update Time : Sat May 31 18:52:12 2014
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 23fe942e - correct
         Events : 167


      Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync

   0     0       8       35        0      active sync
   1     1       8       19        1      active sync
/dev/dm-5:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 91e560f1:4e51d8eb:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 19:15:33 2009
     Raid Level : raid1
  Used Dev Size : 244139648 (232.83 GiB 250.00 GB)
     Array Size : 244139648 (232.83 GiB 250.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0

    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : bfad9d61 - correct
         Events : 75007


      Number   Major   Minor   RaidDevice State
this     0       8       38        0      active sync

   0     0       8       38        0      active sync
   1     1       8       22        1      active sync
/dev/dm-6:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0abe503f:401d8d09:cd707cc0:bc3f8165
  Creation Time : Tue Sep  8 21:19:15 2009
     Raid Level : raid1
  Used Dev Size : 238340224 (227.30 GiB 244.06 GB)
     Array Size : 238340224 (227.30 GiB 244.06 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2a7a125f - correct
         Events : 3973383


      Number   Major   Minor   RaidDevice State
this     0       8       39        0      active sync

   0     0       8       39        0      active sync
   1     1       8       23        1      active sync
root@regDesktopHome:~# 

Edit 6

I stopped them with mdadm --stop /dev/md[01], confirmed that /proc/mdstat no longer shows them, and then ran mdadm --assemble --scan:

# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 1 drives.
mdadm: /dev/md1 has been started with 2 drives.

However, if I try to mount either of the arrays, I still get:

root@regDesktopHome:~# mount /dev/md1 /mnt/data
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

At the same time, I noticed that my superblocks appear to be damaged (I have verified with tune2fs and fdisk that I am dealing with an ext3 partition):

root@regDesktopHome:~# e2fsck /dev/md1
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 59585077 blocks
The physical size of the device is 59585056 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
root@regDesktopHome:~# e2fsck /dev/md0
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 61034935 blocks
The physical size of the device is 61034912 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
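The mismatch is small in absolute terms: the filesystem thinks it is a few dozen 4 KiB blocks larger than the md device now reports, which is consistent with the md superblock reserving a little space at the end of what used to be the bare device. A quick sanity check of the gap (shell arithmetic, not from the original transcript):

# echo $(( (59585077 - 59585056) * 4096 ))    # /dev/md1: 86016 bytes, about 84 KiB
# echo $(( (61034935 - 61034912) * 4096 ))    # /dev/md0: 94208 bytes, about 92 KiB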

However, both partitions do have backup superblocks:

root@regDesktopHome:~# mke2fs -n /dev/md0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
15261696 inodes, 61034912 blocks
3051745 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1863 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872

root@regDesktopHome:~# mke2fs -n /dev/md1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
14901248 inodes, 59585056 blocks
2979252 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1819 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872

What do you think: should I try restoring the backup at 23887872 on both arrays? I suppose I could do that with e2fsck -b 23887872 /dev/md[01]. Would you suggest giving it a try?

I don't necessarily want to try something I don't fully understand and potentially destroy the data on my disks. man e2fsck doesn't exactly call this dangerous, but perhaps there is another, more conservative way to repair the superblocks?
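One conservative option is e2fsck's read-only mode: -n opens the filesystem read-only and assumes "no" to every question, so a repair from a backup superblock can be previewed without writing anything. A sketch (the -B 4096 block size matches the mke2fs output above):

# e2fsck -n -b 32768 -B 4096 /dev/md0    # dry run against the first backup superblock
# e2fsck -n -b 32768 -B 4096 /dev/md1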

Final update for the community

I straightened out my superblocks with resize2fs and my drives are mounting again! (resize2fs /dev/md0 and resize2fs /dev/md1 got me back in business!) Long story, but it finally works. I learned a lot about mdadm along the way. Thank you @IanMacintosh!
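For anyone landing here later: resize2fs called without an explicit size resizes the filesystem to match the size of the underlying device, which is exactly the 21-to-23-block shrink these arrays needed, and it normally insists on a fresh forced check first. A sketch of roughly that sequence (the e2fsck -f steps are an assumption; resize2fs typically demands them):

# e2fsck -f /dev/md0     # resize2fs refuses to run without a fresh forced check
# resize2fs /dev/md0     # no size argument: resize to the device's actual size
# e2fsck -f /dev/md1
# resize2fs /dev/md1
# mount /dev/md0 /mnt/media && mount /dev/md1 /mnt/data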


1 Answer

Unix & Linux user

Accepted answer

Answered on 2014-08-08 08:32:29

The arrays have not been started correctly. Remove them from the running configuration as follows:

mdadm --stop /dev/md12[567]

Now try the autoscan and assemble feature:

mdadm --assemble --scan
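Assuming that works, it is worth confirming the arrays came up active and with all members before relying on them, for example:

cat /proc/mdstat           # arrays should show as active, e.g. [2/2] [UU]
mdadm --detail /dev/md0    # per-array state, member devices, degraded flags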

Assuming that works, save your configuration (assuming a Debian derivative) using the following (this will overwrite your config, so we make a backup first):

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.old
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
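On Debian-derived systems the initramfs keeps its own copy of mdadm.conf, so after regenerating the file you may also need to refresh the initramfs (a sketch, assuming the standard Debian tooling):

update-initramfs -u    # embed the updated mdadm.conf in the initramfs used at boot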

Your reboots should now be fixed: the arrays will be assembled and started automatically every time.

If not, provide the output of:

mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7

It will be a bit long, but it shows everything you need to know about the arrays and their member disks, their state, and so on.

Incidentally, things usually work better if you don't create multiple raid arrays on the same disks (i.e. /dev/sd[bc]6 and /dev/sd[bc]7). Instead, create just one array; if you must, you can then create partitions on the array. In most cases, LVM is a better way to partition an array.
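A minimal sketch of that layout, with hypothetical device and volume names (sdb1, sdc1, vg0, media and data are placeholders):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0               # the whole array becomes one LVM physical volume
vgcreate vg0 /dev/md0           # a volume group on top of the single array
lvcreate -L 200G -n media vg0   # logical volumes take the place of partitions
lvcreate -L 200G -n data vg0
mkfs.ext3 /dev/vg0/media
mkfs.ext3 /dev/vg0/data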

11 votes
The original content of this page is from Unix & Linux; translation was provided by Tencent Cloud's IT-domain engine.
Original link:

https://unix.stackexchange.com/questions/148062
