After a DSM update, problems with my Synology-branded cache drives crashed my pool and left it unusable. The three-dot menu at the bottom only offered the option to delete the pool; there was no option to enable read/write or repair the volume. I contacted Synology support, and after a week they were able to bring the pool back in read-only mode so I could back up my data.
Having remote access enabled had long made me uneasy. After I started backing up my data, the missing cache, and the speedup it used to provide, began to wear on me. I re-added the cache, and that is where my real problems started. After a full day of troubleshooting, I want to document how to recover a pool on DSM 7, in case you ever find yourself in a similar bind.
Following advice from some RAID forums, I marked the drive as failed and removed it from the array, then re-added it, which only made my problem worse! The result:
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sda5[6](S) sdb5[1](E) sdf5[5](E) sde5[4](E) sdd5[3](E) sdc5[2](E)
87837992640 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_EEEEE]
md1 : active raid1 sdg2[1]
2097088 blocks [12/1] [_U__________]
md0 : active raid1 sdg1[1]
2490176 blocks [12/1] [_U__________]
That looked bad, but mdadm -D /dev/md2 reported a much healthier picture:
# mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Fri Aug 5 22:03:13 2022
Raid Level : raid5
Array Size : 87837992640 (83768.84 GiB 89946.10 GB)
Used Dev Size : 17567598528 (16753.77 GiB 17989.22 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Wed May 17 21:46:40 2023
State : clean
Active Devices : 5
Working Devices : 6
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Name : File01:2
UUID : 9ef80d24:68ea4c4f:3b281ebe:790302f5
Events : 1454
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 21 1 faulty active sync /dev/sdb5
2 8 37 2 faulty active sync /dev/sdc5
3 8 53 3 faulty active sync /dev/sdd5
4 8 69 4 faulty active sync /dev/sde5
5 8 85 5 faulty active sync /dev/sdf5
6 8 5 - spare /dev/sda5
To make matters worse, DSM 7 changed the underlying tooling, so commands like syno_poweroff_task -d no longer exist! It took a great deal of time to arrive at a solution; hopefully this helps someone in dire need.
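As a quick sanity check on that output: for RAID5, usable capacity should equal (raid-devices - 1) times the per-device size, and the numbers above line up (a throwaway arithmetic check; runs anywhere awk is available):

```shell
# RAID5 usable capacity = (N - 1) * per-device size. With 6 raid devices
# and a Used Dev Size of 17567598528 blocks, this reproduces the reported
# Array Size of 87837992640 blocks.
awk 'BEGIN { printf "%.0f\n", (6 - 1) * 17567598528 }'
```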
Here is how to get your array rebuilding again:
My array had two volumes on it, which prevented me from stopping the array; they had to be unmounted with the new Synology storage commands:
# synostgvolume --unmount -p /volume1
# synostgvolume --unmount -p /syno_vg_reserved_area
# synovspace -all-unload
This leaves the array looking like the following. Inactive and "NOT available" is what you are looking for:
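An aside: before stopping the array, you can double-check that nothing from the volume is still mounted (a minimal check, assuming your volume mounts under /volume1 as mine does):

```shell
# Look for any surviving mount under /volume1; grep exits non-zero when
# nothing matches, so "not mounted" means it is safe to proceed.
grep ' /volume1' /proc/mounts || echo "not mounted"
```

If this prints a mount entry, unmount it first.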
# lvm
lvm> lvscan
inactive '/dev/vg2/syno_vg_reserved_area' [12.00 MiB] inherit
inactive '/dev/vg2/volume_1' [81.80 TiB] inherit
lvm> lvdisplay
--- Logical volume ---
LV Path /dev/vg2/syno_vg_reserved_area
LV Name syno_vg_reserved_area
VG Name vg2
LV UUID 2E1szd-mdDP-4kkJ-YIcF-zh1B-t3t3-1hq1Ct
LV Write Access read/write
LV Creation host, time ,
LV Status NOT available
LV Size 12.00 MiB
Current LE 3
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/vg2/volume_1
LV Name volume_1
VG Name vg2
LV UUID scFIlA-VoSt-KhC1-WP0u-DYBl-3IfY-Nrc4Cj
LV Write Access read/write
LV Creation host, time ,
LV Status NOT available
LV Size 81.80 TiB
Current LE 21444608
Segments 1
Allocation inherit
Read ahead sectors auto
Now you can issue the commands to bring the array back, and if all goes well your array will start rebuilding. Even though I messed up and did not add the last drive :( my array came back immediately.
# mdadm --stop /dev/md2
# mdadm --verbose --create /dev/md2 --chunk=64 --level=5 --raid-devices=6 missing /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
# mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Wed May 17 22:36:29 2023
Raid Level : raid5
Array Size : 87837992640 (83768.84 GiB 89946.10 GB)
Used Dev Size : 17567598528 (16753.77 GiB 17989.22 GB)
Raid Devices : 6
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Wed May 17 22:36:29 2023
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : Morpheous:2 (local to host Morpheous)
UUID : 96f5e08a:d64e6b15:97240cc1:54309926
Events : 1
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 21 1 active sync /dev/sdb5
2 8 37 2 active sync /dev/sdc5
3 8 53 3 active sync /dev/sdd5
4 8 69 4 active sync /dev/sde5
5 8 85 5 active sync /dev/sdf5
Finally, add the spare back in, wait, and voila!
# mdadm --manage /dev/md2 --add /dev/sda5
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sda5[6] sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1]
87837992640 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
[>....................] recovery = 0.0% (4341888/17567598528) finish=3980.8min speed=73531K/sec
md1 : active raid1 sdg2[1]
2097088 blocks [12/1] [_U__________]
md0 : active raid1 sdg1[1]
2490176 blocks [12/1] [_U__________]
Hope this helps someone in the future.
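A footnote on that mdstat output: the finish estimate is just remaining blocks divided by the reported speed, which you can reproduce by hand (using the numbers from the recovery line above; the result lands within a tenth of a minute of mdstat's finish=3980.8min):

```shell
# ETA in minutes = (total - completed) blocks / (speed in K/sec) / 60,
# using the figures from the recovery line above.
awk 'BEGIN { printf "%.1f\n", (17567598528 - 4341888) / 73531 / 60 }'
```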
Posted on 2023-05-19 13:22:01
On DSM 7, here is how to unmount the file systems:
# synostgvolume --unmount -p /volume1
# synostgvolume --unmount -p /syno_vg_reserved_area
# synovspace -all-unload
Stop the array & re-create it:
# mdadm --stop /dev/md2
# mdadm --verbose --create /dev/md2 --chunk=64 --level=5 --raid-devices=6 missing /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
https://serverfault.com/questions/1131394