
Disk replacement without data loss on encrypted LUKS LVM

Unix & Linux user
Asked on 2021-06-14 12:20:10
2 answers · 286 views · 0 followers · 0 votes

This is the setup of my off-site rsync backup server.

Ubuntu 20.10 with 9 hard drives.

Disks /dev/sda-h belong to the backup volume group.

The system is on /dev/sdi.

The server is:

  • powered via a network-controlled switch (otherwise it is disconnected from mains power)
  • configured with Wake-on-LAN
  • configured with dropbear, which can be used to enter the unlock passphrase over the network and allow the system to boot.

Initial LVM LUKS setup:

cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sda
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdb
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdc
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdd
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sde
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdf
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdg
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdh
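The eight identical luksFormat invocations can be expressed as a loop. A sketch that only prints the commands for review, so nothing gets formatted by accident:

```shell
# Build the eight luksFormat commands in a loop and print them for review;
# drop the echo indirection only after double-checking the device list.
cmds=$(for d in a b c d e f g h; do
  echo "cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sd$d"
done)
printf '%s\n' "$cmds"
```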
cryptsetup luksOpen /dev/sda luks_sda
cryptsetup luksOpen /dev/sdb luks_sdb
cryptsetup luksOpen /dev/sdc luks_sdc
cryptsetup luksOpen /dev/sdd luks_sdd
cryptsetup luksOpen /dev/sde luks_sde
cryptsetup luksOpen /dev/sdf luks_sdf
cryptsetup luksOpen /dev/sdg luks_sdg
cryptsetup luksOpen /dev/sdh luks_sdh
pvcreate /dev/mapper/luks_sda
pvcreate /dev/mapper/luks_sdb
pvcreate /dev/mapper/luks_sdc
pvcreate /dev/mapper/luks_sdd
pvcreate /dev/mapper/luks_sde
pvcreate /dev/mapper/luks_sdf
pvcreate /dev/mapper/luks_sdg
pvcreate /dev/mapper/luks_sdh
vgcreate tiburon_backup_vg /dev/mapper/luks_sda

Added the remaining /dev/mapper/luks_sd* devices to the newly created VG, then created an LV in it with a mount point.

Updated /etc/crypttab for each luks_sd*:

luks_sd[a-h] /dev/sd[a-h] /etc/luks-keys/luks_sd[a-h] luks
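The keyfiles referenced in crypttab (/etc/luks-keys/luks_sd*) must have been created and enrolled into a LUKS key slot at some point. A hedged sketch of one way to do that — the path and 512-byte size are assumptions, a temp file stands in for the real keyfile, and the luksAddKey step is only echoed rather than run:

```shell
# Create a random 512-byte keyfile (temp file as a stand-in for
# /etc/luks-keys/luks_sda), lock down its permissions, and show the
# enrollment command that would be run on the real system.
key=$(mktemp)
dd if=/dev/urandom of="$key" bs=512 count=1 2>/dev/null
chmod 0400 "$key"
echo cryptsetup luksAddKey /dev/sda "$key"
```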

Then updated the initramfs:

update-initramfs -uv
reboot

Everything worked fine for 7 years, until now: I need to replace /dev/sdf because it has a growing number of bad sectors.

I am not sure how to proceed without copying the 5 TB of data elsewhere and without losing any data.
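The core constraint here is extent accounting: pvmove needs as many free extents elsewhere in the VG as are allocated on the failing PV. A small sketch using the numbers from the pvmove error further down (476931 extents needed, only 1 available; 4 MiB extent size assumed):

```shell
# Check whether free extents elsewhere in the VG can absorb the failing PV.
pe_size_mib=4        # PE Size from vgdisplay/pvdisplay
pv_alloc=476931      # extents allocated on /dev/mapper/luks_sdf
vg_free=1            # free extents on the other PVs in the VG
if [ "$vg_free" -ge "$pv_alloc" ]; then
  echo "pvmove can relocate everything"
else
  echo "short by $(( (pv_alloc - vg_free) * pe_size_mib / 1024 )) GiB; shrink the LV first"
fi
```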

Here is what I have found so far (in order not to lose data):

cryptsetup status

cryptswap1
luks_sde
Tiburon2--vg-root
luks_sda
luks_sdf                 #problematic luks disk
Tiburon2--vg-swap_1
luks_sdb
luks_sdg
tiburon_backup_vg-tiburon_backup          #problematic vg-lv
luks_sdc
luks_sdh
luks_sdd
sdb5_crypt
cryptsetup status luks_sdf

/dev/mapper/luks_sdf is active and is in use.
  type:    LUKS1
  cipher:  aes-xts-plain64
  keysize: 512 bits
  key location: dm-crypt
  device:  /dev/sdf
  sector size:  512
  offset:  4096 sectors
  size:    3907025072 sectors
  mode:    read/write
umount /tiburon_backup

vgchange -a n tiburon_backup_vg

  0 logical volume(s) in volume group "tiburon_backup_vg" now active


pvmove /dev/mapper/luks_sdf

  Insufficient free space: 476931 extents needed, but only 1 available
  Unable to allocate mirror extents for tiburon_backup_vg/pvmove0.
  Failed to convert pvmove LV to mirrored.
#Therefore:
e2fsck -f /dev/mapper/tiburon_backup_vg-tiburon_backup


#FS/VG has 8TB, and 4TB is in use, therefore shrinking it to 5TB:

resize2fs -p /dev/mapper/tiburon_backup_vg-tiburon_backup  5T

Begin pass 2 (max = 262145)
Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 40960)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
lvreduce -L 5T /dev/mapper/tiburon_backup_vg-tiburon_backup

  WARNING: Reducing active logical volume to 5,00 TiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce tiburon_backup_vg/tiburon_backup? [y/n]: y
  Size of logical volume tiburon_backup_vg/tiburon_backup changed from <7,80 TiB (2043653 extents) to 5,00 TiB.
  Logical volume tiburon_backup_vg/tiburon_backup successfully resized.
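For the record, the manual resize2fs-then-lvreduce pair can also be done in one step: lvreduce's -r/--resizefs option calls fsadm to shrink the filesystem before reducing the LV, which removes the risk of getting the order or the sizes wrong. Echoed only, as a sketch:

```shell
# One-step alternative to the manual resize2fs + lvreduce sequence above.
cmd="lvreduce -r -L 5T /dev/mapper/tiburon_backup_vg-tiburon_backup"
echo "$cmd"
```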



e2fsck -f /dev/mapper/tiburon_backup_vg-tiburon_backup

e2fsck 1.45.6 (20-Mar-2020)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/tiburon_backup_vg-tiburon_backup: 11/176128 files (0.0% non-contiguous), 281453/1408000 blocks

Now pvscan shows /dev/mapper/luks_sdf as empty:

PV /dev/mapper/luks_sdf   VG tiburon_backup_vg   lvm2 [<1,82 TiB / 1,82 TiB free]

So if I now run:

pvmove /dev/mapper/luks_sdf

it should remap the remaining blocks from this PV to other free space in the VG, right? (Or not?)

After that, I plan to do:

vgchange -a n tiburon_backup_vg

cryptsetup close luks_sdf

vgreduce tiburon_backup_vg /dev/mapper/luks_sdf

pvremove /dev/sdf


#remove luks_sdf from /etc/crypttab

Will this work, or is there a better way to evict the failing disk from the VG on top of LUKS?

Many thanks for any ideas you may have.


2 Answers

Unix & Linux user

Answered on 2021-06-14 12:51:52

Your sequence of operations needs one small correction.

Yes, that is what pvmove will do, if there are any allocated blocks left. If /dev/mapper/luks_sdf is in fact already completely free of LVM data, running it will do no harm.

If it succeeds, pvdisplay /dev/mapper/luks_sdf should show exactly the same value in the Total PE and Free PE fields, and Allocated PE should be 0.

At that point you don't need to run vgchange -a n tiburon_backup_vg; just run vgreduce tiburon_backup_vg /dev/mapper/luks_sdf to remove the PV from the VG (since it no longer holds any LVM data, this can be done online).

Because the LVM sits on top of LUKS, it is important to do this before cryptsetup close luks_sdf: after that, the system will only see the encrypted contents of /dev/sdf, and if you then try pvremove /dev/sdf it will tell you there is no LVM header to remove (as it will only see meaningless encrypted data).

In this case pvremove is not actually required: once the disk has been removed from the VG, LVM no longer needs it to exist and won't mind even if you hot-unplug it. (If your hardware is not hot-swap capable, don't pull the disk while powered on.)

Before shutting down, remember to remove or comment out the /dev/sdf entry in /etc/crypttab and update the initramfs; otherwise the system will drop you into emergency mode at boot, because it will try to activate LUKS on /dev/sdf and either no longer find that disk, or find a new disk without an existing LUKS header.
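Commenting out the crypttab entry can be scripted. A sketch against a throwaway copy (the entries are illustrative; on the real system you would edit /etc/crypttab itself and then run update-initramfs -u):

```shell
# Comment out the luks_sdf line in a scratch copy of crypttab.
crypttab=$(mktemp)
cat > "$crypttab" <<'EOF'
luks_sde /dev/sde /etc/luks-keys/luks_sde luks
luks_sdf /dev/sdf /etc/luks-keys/luks_sdf luks
luks_sdg /dev/sdg /etc/luks-keys/luks_sdg luks
EOF
sed -i 's|^luks_sdf|#luks_sdf|' "$crypttab"
cat "$crypttab"
```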

Votes: 1

Unix & Linux user

Answered on 2021-06-14 21:40:52

I hope this test helps someone in the future.

root@Tiburon3:~# pvscan
  PV /dev/mapper/luks_sda     VG bck_vg          lvm2 [<232.87 GiB / 0    free]
  PV /dev/mapper/luks_sdb     VG bck_vg          lvm2 [<931.50 GiB / 0    free]
  PV /dev/mapper/luks_sdc     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdd     VG bck_vg          lvm2 [149.03 GiB / 0    free]
  PV /dev/mapper/luks_sde     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdf     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdg     VG bck_vg          lvm2 [<931.50 GiB / 0    free]
  PV /dev/mapper/luks_sdh     VG bck_vg          lvm2 [149.03 GiB / 79.83 GiB free]
  PV /dev/mapper/sdb5_crypt   VG Tiburon3-vg     lvm2 [<148.53 GiB / <111.28 GiB free]
  Total: 9 [7.94 TiB] / in use: 9 [7.94 TiB] / in no VG: 0 [0   ]
root@Tiburon3:~# df -hP /tiburon_backup/
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/bck_vg-lv_tib_bck  7.7T   93M  7.3T   1% /tiburon_backup

#Fill the FS with dummy data to reach the end of the VG:

cd /tiburon_backup/

FROMHERE=848
for ((i=FROMHERE; i>=1; i--))
do
    fallocate -l 10GB gentoo_root$i.img
done


root@Tiburon3:/tiburon_backup# df -hP /tiburon_backup/
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/bck_vg-lv_tib_bck  7.7T  7.3T   20G 100% /tiburon_backup

fallocate -l 10G gentoo_root000.img
root@Tiburon3:/tiburon_backup# pvscan
  PV /dev/mapper/luks_sda     VG bck_vg          lvm2 [<232.87 GiB / 0    free]
  PV /dev/mapper/luks_sdb     VG bck_vg          lvm2 [<931.50 GiB / 0    free]
  PV /dev/mapper/luks_sdc     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdd     VG bck_vg          lvm2 [149.03 GiB / 0    free]
  PV /dev/mapper/luks_sde     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdf     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdg     VG bck_vg          lvm2 [<931.50 GiB / 0    free]
  PV /dev/mapper/luks_sdh     VG bck_vg          lvm2 [149.03 GiB / 79.83 GiB free]
  rm -f gentoo_root[1-9]*


root@Tiburon3:/tiburon_backup# df -hP .
  Filesystem                     Size  Used Avail Use% Mounted on
  /dev/mapper/bck_vg-lv_tib_bck  7.7T   10G  7.3T   1% /tiburon_backup


root@Tiburon3:/tiburon_backup# du -sh *
  10G   gentoo_root000.img
root@Tiburon3:/# btrfs filesystem resize -1T /tiburon_backup
Resize '/tiburon_backup' of '-1T'

Note that this took considerably longer than resizing an empty fs.

I wonder whether some --progress or --verbose option could be used to see more output.

root@Tiburon3:/tiburon_backup# df -hP .
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/bck_vg-lv_tib_bck  6.8T   12G  6.8T   1% /tiburon_backup


umount /tiburon_backup


root@Tiburon3:/# pvscan
  PV /dev/mapper/luks_sda     VG bck_vg          lvm2 [<232.87 GiB / <232.87 GiB free]
  PV /dev/mapper/luks_sdb     VG bck_vg          lvm2 [<931.50 GiB / <931.50 GiB free]
  PV /dev/mapper/luks_sdc     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdd     VG bck_vg          lvm2 [149.03 GiB / 149.03 GiB free]
  PV /dev/mapper/luks_sde     VG bck_vg          lvm2 [<1.82 TiB / 0    free]
  PV /dev/mapper/luks_sdf     VG bck_vg          lvm2 [<1.82 TiB / 469.00 GiB free]
  PV /dev/mapper/luks_sdg     VG bck_vg          lvm2 [<931.50 GiB / <931.50 GiB free]

  PV /dev/mapper/luks_sdh   VG bck_vg          lvm2 [149.03 GiB / 149.03 GiB free] #Let's remove this one for testing purposes (smallest PV)

  PV /dev/mapper/sdb5_crypt   VG Tiburon3-vg     lvm2 [<148.53 GiB / <111.28 GiB free]
  Total: 9 [7.94 TiB] / in use: 9 [7.94 TiB] / in no VG: 0 [0   ]
root@Tiburon3:/# pvdisplay /dev/mapper/luks_sdh
  --- Physical volume ---
  PV Name               /dev/mapper/luks_sdh
  VG Name               bck_vg
  PV Size               149.03 GiB / not usable <3.84 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              38152
  Free PE               20437

  Allocated PE          17715   #Alright, this is the scenario I wanted to test: data migration before removing the disk.

  PV UUID               VRhdHD-5aam-9Wha-Qzwg-f8Iz-hosM-Wz0Q3q
root@Tiburon3:/# pvmove /dev/mapper/luks_sdh
    No extents available for allocation.

To be on the safe side I won't reduce the LV by the full 1 TB, but 800 GiB should be enough to relocate the remaining allocated PEs off luks_sdh.

root@Tiburon3:/# lvreduce -L -800G /dev/mapper/bck_vg-lv_tib_bck
      WARNING: Reducing active logical volume to <6.94 TiB.
      THIS MAY DESTROY YOUR DATA (filesystem etc.)
    Do you really want to reduce bck_vg/lv_tib_bck? [y/n]: y
      Size of logical volume bck_vg/lv_tib_bck changed from <7.72 TiB (2023191 extents) to <6.94 TiB (1818391 extents).
      Logical volume bck_vg/lv_tib_bck successfully resized.


root@Tiburon3:/# pvmove /dev/mapper/luks_sdh  --alloc anywhere
  /dev/mapper/luks_sdh: Moved: 0.06%
[...]
  /dev/mapper/luks_sdh: Moved: 82.00%
[...]
  /dev/mapper/luks_sdh: Moved: 99.99%
Done!
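The lvreduce output above can be sanity-checked against the extent counts: the LV went from 2023191 to 1818391 extents, and with 4 MiB extents that difference is exactly the requested 800 GiB:

```shell
# Verify the extent delta reported by lvreduce matches -L -800G.
before=2023191; after=1818391; pe_mib=4
delta_gib=$(( (before - after) * pe_mib / 1024 ))
echo "reduced by ${delta_gib} GiB"
```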
root@Tiburon3:/# pvdisplay /dev/mapper/luks_sdh
  --- Physical volume ---
  PV Name               /dev/mapper/luks_sdh
  VG Name               bck_vg
  PV Size               149.03 GiB / not usable <3.84 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              38152
  Free PE               38152
  
  Allocated PE          0       #DATA WAS MOVED OUT OF THIS PV!
  
  PV UUID               VRhdHD-5aam-9Wha-Qzwg-f8Iz-hosM-Wz0Q3q
root@Tiburon3:/# vgreduce bck_vg /dev/mapper/luks_sdh
    Removed "/dev/mapper/luks_sdh" from volume group "bck_vg"


root@Tiburon3:/# mount -a

root@Tiburon3:/# cd /tiburon_backup/

root@Tiburon3:/tiburon_backup# du -sh *
    10G gentoo_root000.img    #DATA IS INTACT

Now:

cryptsetup close luks_sdh

And now, as telcoM advised above, it is wise to remove (or comment out) luks_sdh from /etc/crypttab and update the initramfs:

update-initramfs -uv

and reboot, to test that everything works properly.

Now I'll run this procedure on my private prod env ;)

@telcoM, thank you very much for your advice!

Votes: 0
Original content provided by Unix & Linux; translation supported by Tencent Cloud Xiaowei's IT-domain engine.
Original link:

https://unix.stackexchange.com/questions/654200
