
How do I bring my mdadm RAID-5 array back up?

Ask Ubuntu user
Asked on 2021-11-04 12:58:34
1 answer · 1.4K views · 0 followers · 0 votes
  1. How do I bring my mdadm RAID-5 array back up?
  2. How do I make these changes stick?

Last night I rebooted the server and found that the RAID array I created about eight months ago did not come back up, and I can't access my data. I ran a bunch of commands:

A few months ago I added a new disk, /dev/sdh, to the RAID-5 array, which is mounted at /srv/share. Everything seemed fine; we had the extra space and had been using it. In fact, I'm not sure we rebooted at all until last night. The RAID-5 was originally created under Ubuntu 18.04 and is now running under Ubuntu 20.04.

$ cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdf[3](S) sdb[1](S) sda[0](S)
      23441691144 blocks super 1.2
       
unused devices: <none>


$ lsblk | grep -v loop
NAME   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda      8:0    0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdb      8:16   0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdc      8:32   0   4.6T  0 disk  
└─sdc1   8:33   0   4.6T  0 part  /srv/datasets
sdd      8:48   0 298.1G  0 disk  
├─sdd1   8:49   0   190M  0 part  /boot/efi
└─sdd2   8:50   0 297.9G  0 part  /
sde      8:64   0   3.7T  0 disk  
└─sde1   8:65   0   3.7T  0 part  /srv
sdf      8:80   0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdg      8:96   0   1.8T  0 disk  
├─sdg1   8:97   0   1.8T  0 part  /home
└─sdg2   8:98   0    47G  0 part  [SWAP]
sdh      8:112  0   7.3T  0 disk  
└─sdh1   8:113  0   7.3T  0 part  


$ sudo fdisk -l | grep sdh
Disk /dev/sdh: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
/dev/sdh1   2048 15628050431 15628048384  7.3T Linux filesystem



$ sudo mdadm -Db /dev/md0
INACTIVE-ARRAY /dev/md0 metadata=1.2 name=perception:0 UUID=c8004245:4e163594:65e30346:68ed2791
$ sudo mdadm -Db /dev/md/0
mdadm: cannot open /dev/md/0: No such file or directory



From /etc/mdadm/mdadm.conf:
ARRAY /dev/md/0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0



$ sudo mdadm --detail /dev/md0 
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 3
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 3

              Name : perception:0
              UUID : c8004245:4e163594:65e30346:68ed2791
            Events : 91689

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       80        -        /dev/sdf
       -       8       16        -        /dev/sdb


sudo mdadm --detail /dev/md/0 
mdadm: cannot open /dev/md/0: No such file or directory



mdadm --assemble --scan
  [does nothing]

$ blkid /dev/md0 [nothing]
$ blkid /dev/md/0 [nothing]

$ blkid | grep raid
/dev/sdb: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="3fefdb86-4c6b-fb76-a35e-3a846075eb54" LABEL="perception:0" TYPE="linux_raid_member"
/dev/sdf: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="d4a58f2c-bc8b-8fd0-6b22-63b047e09c13" LABEL="perception:0" TYPE="linux_raid_member"
/dev/sda: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="afaea924-a15a-c5cf-f9a8-d73075201ff7" LABEL="perception:0" TYPE="linux_raid_member"

The relevant line in /etc/fstab is:

UUID=f495abb3-36e6-4782-8f5e-83c6d3fc78eb /srv/share     ext4    defaults        0       2


$ sudo mount -a
mount: /srv/share: can't find UUID=f495abb3-36e6-4782-8f5e-83c6d3fc78eb.

I tried changing the UUID in fstab to c8004245:4e163594:65e30346:68ed2791 and remounting:

$ sudo mount -a
mount: /srv/share: can't find UUID=c8004245:4e163594:65e30346:68ed2791.

Then I changed it to c8004245-4e16-3594-65e3-034668ed2791 and remounted:

$ sudo mount -a
mount: /srv/share: /dev/sdb already mounted or mount point busy.

I then rebooted with the new fstab entry, c8004245-4e16-3594-65e3-034668ed2791.

However, none of the commands above made any difference.
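For what it's worth, the two spellings that failed are the same value: mdadm prints the array UUID as four colon-separated groups of eight hex digits, while blkid prints it with 8-4-4-4-12 dashes. Neither is a filesystem UUID, so neither can ever work in fstab; the original f495abb3-... entry was correct and only failed because the array wasn't assembled. A small shell sketch (using the UUID from the output above) showing the two spellings are the same 128-bit value:

```shell
# mdadm's colon-separated array UUID and blkid's dashed raid-member
# UUID are the same 128 bits, just grouped differently.
md_uuid="c8004245:4e163594:65e30346:68ed2791"   # as printed by mdadm -Db
hex=$(printf '%s' "$md_uuid" | tr -d ':')       # strip to 32 hex digits
# regroup as 8-4-4-4-12, the way blkid prints it
printf '%s\n' "$hex" | sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/'
```

This prints c8004245-4e16-3594-65e3-034668ed2791, matching the blkid output above: the RAID member UUID, not something mount can use.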

I tried changing mdadm.conf from:

ARRAY /dev/md/0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0

to:

ARRAY /dev/md0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0

=> No difference.

I tried stopping the array and re-assembling with -v:

$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0

$ sudo mdadm --assemble --scan -v                                   
[ excluding all the random loop drive stuff ]
mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 1.
mdadm: /dev/sdf is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sda is identified as a member of /dev/md/0, slot 0.
mdadm: added /dev/sdb to /dev/md/0 as 1
mdadm: added /dev/sdf to /dev/md/0 as 2
mdadm: no uptodate device for slot 3 of /dev/md/0
mdadm: added /dev/sda to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 3 drives (out of 4).


$ dmesg
[  988.616710] md/raid:md0: device sda operational as raid disk 0
[  988.616718] md/raid:md0: device sdf operational as raid disk 2
[  988.616721] md/raid:md0: device sdb operational as raid disk 1
[  988.618892] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
[  988.639345] md0: detected capacity change from 0 to 46883371008

cat /proc/mdstat now says the RAID is active:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sda[0] sdf[3] sdb[1]
      23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 0/59 pages [0KB], 65536KB chunk
unused devices: <none>
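Note the `[4/3] [UUU_]` above: the array came up degraded, with three of four member slots in sync (the /dev/sdh member is the missing one). A minimal sketch of how a script might pick that up from an mdstat line — the sample line is copied from the output above; on a real system you would read /proc/mdstat itself:

```shell
# Extract the "[total/working]" counter from an mdstat status line and
# report degradation. Sample line taken from the output above.
line='23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]'
counts=$(printf '%s' "$line" | grep -oE '\[[0-9]+/[0-9]+\]' | head -n 1)
total=${counts#[};    total=${total%%/*}
working=${counts#*/}; working=${working%]}
if [ "$working" -lt "$total" ]; then
  echo "degraded: $working of $total devices"
fi
```

Here it prints `degraded: 3 of 4 devices`. Re-adding the dropped disk (presumably something like `sudo mdadm --manage /dev/md0 --add /dev/sdh1` — an assumption, since the question never shows how sdh was originally added) would start a rebuild back toward `[4/4] [UUUU]`.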

mount -a then successfully mounted /srv/share:

sudo mount -a -v
/                        : ignored
/boot/efi                : already mounted
none                     : ignored
/home                    : already mounted
/srv                     : already mounted
/srv/share               : successfully mounted
/srv/datasets            : already mounted

But /srv/share still doesn't show up in df -h, and I still can't see my data there:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  2.5M  6.3G   1% /run
/dev/sdd2       293G   33G  245G  12% /
tmpfs            32G   96K   32G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/sde1       3.6T  455G  3.0T  14% /srv
/dev/sdd1       188M  5.2M  182M   3% /boot/efi
/dev/sdc1       4.6T  3.6T  768G  83% /srv/datasets
/dev/sdg1       1.8T  1.5T  164G  91% /home

1 Answer

Ask Ubuntu user

Answered on 2021-11-04 14:03:10

This answer was helpful: https://unix.stackexchange.com/questions/210416/new-raid-array-will-not-auto-assemble-leads-to-boot-problems

dpkg-reconfigure mdadm    # Choose "all" disks to start at boot
update-initramfs -u       # Updates the existing initramfs
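As an aside not in the original answer: `dpkg-reconfigure mdadm` updates the mdadm package configuration and `update-initramfs -u` bakes the current /etc/mdadm/mdadm.conf into the initramfs, so the array definition is available at early boot. Before rebuilding, it's worth confirming that the UUID reported by a live scan matches the ARRAY line on disk — a sketch using the values from this question:

```shell
# Sketch: check that the array UUID from a live scan appears in the
# recorded ARRAY line (both values copied from the question above).
scanned="UUID=c8004245:4e163594:65e30346:68ed2791"   # as from: mdadm --detail --scan
conf_line='ARRAY /dev/md/0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0'
case "$conf_line" in
  *"$scanned"*) echo "mdadm.conf matches; safe to run update-initramfs -u" ;;
  *)            echo "UUID mismatch: fix /etc/mdadm/mdadm.conf first" ;;
esac
```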
Votes: 0
Original page content provided by Ask Ubuntu; translated with support from Tencent Cloud's IT-domain engine.
Original link:

https://askubuntu.com/questions/1373550