
DRBD comes up in a Connected Diskless/Diskless state after every reboot
Stack Overflow user

Asked on 2019-03-12 17:32:06
Answers: 2 · Views: 2.8K · Followers: 0 · Score: 4

After an unattended power outage I am facing a major problem: after every reboot, DRBD comes up in a Connected Diskless/Diskless state.

Main problems:

  • dump-md responds: Found meta data is "unclean".
  • The apply-al command terminates with exit code 20 and the message open(/dev/nvme0n1p1) failed: Device or resource busy.
  • The device in the DRBD resource configuration cannot be opened exclusively.

About the environment

This DRBD resource is used as block storage for LVM, configured as (shared LVM) storage for a Proxmox VE 5.3-8 cluster. LVM sits on top of the DRBD block device, and in the hosts' LVM configuration the device underneath DRBD (/dev/nvme0n1p1) is filtered out (/etc/lvm/lvm.conf, shown below).

The device under DRBD is a PCIe NVMe device.

It has some extra attributes shown by systemctl:

root@pmx0:~# systemctl list-units | grep nvme
sys-devices-pci0000:00-0000:00:01.1-0000:0c:00.0-nvme-nvme0-nvme0n1-nvme0n1p1.device             loaded active     plugged   /sys/devices/pci0000:00/0000:00:01.1/0000:0c:00.0/nvme/nvme0/nvme0n1/nvme0n1p1
sys-devices-pci0000:00-0000:00:01.1-0000:0c:00.0-nvme-nvme0-nvme0n1.device                       loaded active     plugged   /sys/devices/pci0000:00/0000:00:01.1/0000:0c:00.0/nvme/nvme0/nvme0n1

Other storage devices (plain SAS disks) listed by systemctl look a bit different:

root@pmx0:~# systemctl list-units | grep sdb
sys-devices-pci0000:00-0000:00:01.0-0000:0b:00.0-host0-target0:2:1-0:2:1:0-block-sdb-sdb1.device loaded active     plugged   PERC_H710 1
sys-devices-pci0000:00-0000:00:01.0-0000:0b:00.0-host0-target0:2:1-0:2:1:0-block-sdb-sdb2.device loaded active     plugged   PERC_H710 2
sys-devices-pci0000:00-0000:00:01.0-0000:0b:00.0-host0-target0:2:1-0:2:1:0-block-sdb.device      loaded active     plugged   PERC_H710

Listing the NVMe device directory under /sys/devices with ls:

root@pmx0:~# ls /sys/devices/pci0000:00/0000:00:01.1/0000:0c:00.0/nvme/nvme0/nvme0n1/nvme0n1p1
alignment_offset  dev  discard_alignment  holders  inflight  partition  power  ro  size  start  stat  subsystem  trace  uevent

Things that did not help:

  • Rebooting did not help.
  • Restarting the DRBD service did not help.
  • drbdadm disconnect / detach / attach and service restarts did not help.
  • The nfs-kernel-server service does not depend on these DRBD nodes (so nfs-server cannot simply be deconfigured).

After investigation

dump-md responds: Found meta data is "unclean", please apply-al first. The apply-al command terminates with exit code 20 and the message open(/dev/nvme0n1p1) failed: Device or resource busy. So the problem seems to be that the device used by my DRBD resource configuration (/dev/nvme0n1p1) cannot be opened exclusively.
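As the output below shows, lsof and fuser come back empty here; they only report opens by userspace processes, while an in-kernel claim on a block device (by device-mapper, md, or another DRBD minor) shows up under the device's holders/ directory in sysfs instead. A small check along those lines (a sketch; the device name is the one from the resource configuration):

```shell
# Sketch: look for in-kernel holders of the DRBD backing device.
DEV=nvme0n1p1                        # backing device from the r0 config
HOLDERS="/sys/class/block/$DEV/holders"

if [ -d "$HOLDERS" ]; then
    ls -l "$HOLDERS"                 # an entry like dm-2 means a dm mapping
                                     # has claimed the device
    # Resolve any dm-* holder back to its device-mapper name:
    cat "$HOLDERS"/*/dm/name 2>/dev/null
else
    echo "no such block device: $DEV"
fi
```

An empty holders/ directory means nothing in the kernel has claimed the device; any dm-* entry points at a device-mapper mapping that would keep an exclusive open from succeeding.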

The failing DRBD commands:

root@pmx0:~# drbdadm attach r0
open(/dev/nvme0n1p1) failed: Device or resource busy
Operation canceled.
Command 'drbdmeta 0 v08 /dev/nvme0n1p1 internal apply-al' terminated with exit code 20
root@pmx0:~# drbdadm apply-al r0
open(/dev/nvme0n1p1) failed: Device or resource busy
Operation canceled.
Command 'drbdmeta 0 v08 /dev/nvme0n1p1 internal apply-al' terminated with exit code 20

root@pmx0:~# drbdadm dump-md r0
open(/dev/nvme0n1p1) failed: Device or resource busy

Exclusive open failed. Do it anyways?
[need to type 'yes' to confirm] yes

Found meta data is "unclean", please apply-al first
Command 'drbdmeta 0 v08 /dev/nvme0n1p1 internal dump-md' terminated with exit code 255

DRBD service status/commands:

root@pmx0:~# drbd-overview
 0:r0/0  Connected Secondary/Secondary Diskless/Diskless
root@pmx0:~# drbdadm dstate r0
Diskless/Diskless
root@pmx0:~# drbdadm disconnect r0
root@pmx0:~# drbd-overview
 0:r0/0  . . .
root@pmx0:~# drbdadm detach r0
root@pmx0:~# drbd-overview
 0:r0/0  . . .

Trying to re-attach resource r0:

root@pmx0:~# drbdadm attach r0
open(/dev/nvme0n1p1) failed: Device or resource busy
Operation canceled.
Command 'drbdmeta 0 v08 /dev/nvme0n1p1 internal apply-al' terminated with exit code 20
root@pmx0:~# drbdadm apply-al r0
open(/dev/nvme0n1p1) failed: Device or resource busy
Operation canceled.
Command 'drbdmeta 0 v08 /dev/nvme0n1p1 internal apply-al' terminated with exit code 20

lsof and fuser show zero output:

root@pmx0:~# lsof /dev/nvme0n1p1
root@pmx0:~# fuser /dev/nvme0n1p1
root@pmx0:~# fuser /dev/nvme0n1
root@pmx0:~# lsof /dev/nvme0n1

Resource disk partition and LVM configuration:

root@pmx0:~# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 1.9 TiB, 2048408248320 bytes, 4000797360 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x59762e31

Device         Boot Start        End    Sectors  Size Id Type
/dev/nvme0n1p1       2048 3825207295 3825205248  1.8T 83 Linux
root@pmx0:~# pvs
  PV             VG           Fmt  Attr PSize   PFree
  /dev/sdb2      pve          lvm2 a--  135.62g  16.00g
root@pmx0:~# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  pve            1   3   0 wz--n- 135.62g  16.00g
root@pmx0:~# lvs
  LV            VG           Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve          twi-a-tz--  75.87g             0.00   0.04
  root          pve          -wi-ao----  33.75g
  swap          pve          -wi-ao----   8.00g
root@pmx0:~# vi /etc/lvm/lvm.conf
root@pmx0:~# cat /etc/lvm/lvm.conf | grep nvm
        filter = [ "r|/dev/nvme0n1p1|", "a|/dev/sdb|", "a|sd.*|", "a|drbd.*|", "r|.*|" ]

The DRBD resource configuration:

root@pmx0:~# cat /etc/drbd.d/r0.res
resource r0 {
        protocol C;
        startup {
                wfc-timeout  0;     # non-zero wfc-timeout can be dangerous (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
                degr-wfc-timeout 300;
        become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "*********";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
                #data-integrity-alg crc32c;     # has to be enabled only for test and disabled for production use (check man drbd.conf, section "NOTES ON DATA INTEGRITY")
        }
        on pmx0 {
                device /dev/drbd0;
                disk /dev/nvme0n1p1;
                address 10.0.20.15:7788;
                meta-disk internal;
        }
        on pmx1 {
                device /dev/drbd0;
                disk /dev/nvme0n1p1;
                address 10.0.20.16:7788;
                meta-disk internal;
        }
        disk {
                # no-disk-barrier and no-disk-flushes should be applied only to systems with non-volatile (battery backed) controller caches.
                # Follow links for more information:
                # http://www.drbd.org/users-guide-8.3/s-throughput-tuning.html#s-tune-disable-barriers
                # http://www.drbd.org/users-guide/s-throughput-tuning.html#s-tune-disable-barriers
                no-disk-barrier;
                no-disk-flushes;
        }
}

The other node:

root@pmx1:~# drbd-overview
 0:r0/0  Connected Secondary/Secondary Diskless/Diskless

…and so on: every command response and configuration there looks the same as shown above for node pmx0.

Debian and DRBD versions:

root@pmx0:~# uname -a
Linux pmx0 4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64 GNU/Linux
root@pmx0:~# cat /etc/debian_version
9.8
root@pmx0:~# dpkg --list| grep drbd
ii  drbd-utils                           8.9.10-2                       amd64        RAID 1 over TCP/IP for Linux (user utilities)
root@pmx0:~# lsmod | grep drbd
drbd                  364544  1
lru_cache              16384  1 drbd
libcrc32c              16384  2 dm_persistent_data,drbd
root@pmx0:~# modinfo drbd
filename:       /lib/modules/4.15.18-10-pve/kernel/drivers/block/drbd/drbd.ko
alias:          block-major-147-*
license:        GPL
version:        8.4.10
description:    drbd - Distributed Replicated Block Device v8.4.10
author:         Philipp Reisner <phil@linbit.com>, Lars Ellenberg <lars@linbit.com>
srcversion:     9A7FB947BDAB6A2C83BA0D4
depends:        lru_cache,libcrc32c
retpoline:      Y
intree:         Y
name:           drbd
vermagic:       4.15.18-10-pve SMP mod_unload modversions
parm:           allow_oos:DONT USE! (bool)
parm:           disable_sendpage:bool
parm:           proc_details:int
parm:           minor_count:Approximate number of drbd devices (1-255) (uint)
parm:           usermode_helper:string

Mounts:

root@pmx0:~# cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=24679656k,nr_inodes=6169914,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=4940140k,mode=755 0 0
/dev/mapper/pve-root / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=39,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=20879 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
sunrpc /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/sda1 /mnt/intelSSD700G ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
10.0.0.15:/samba/shp /mnt/pve/bckNFS nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.15,mountvers=3,mountport=42772,mountproto=udp,local_lock=none,addr=10.0.0.15 0 0

2 Answers

Stack Overflow user

Answered on 2020-05-25 04:03:15

Finally, I found the root of this problem!

At boot, the device mapper creates the logical volume mappings directly on the DRBD backing device, and this happens before the DRBD service starts. After that, DRBD cannot open the backing device exclusively. If the LV mapping is removed manually, the DRBD resource can be brought up.

dmsetup remove <volume name> removes the mapping, as shown below:

root@pmx0:~# lsblk
NAME                                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                       8:0    0 893.8G  0 disk
└─sda1                                    8:1    0 744.6G  0 part
  └─drbd--intel800--vg-vm--103--disk--0 253:2    0    80G  0 lvm
sdb                                       8:16   0 136.1G  0 disk
├─sdb1                                    8:17   0   512M  0 part
└─sdb2                                    8:18   0 135.6G  0 part
  ├─pve-swap                            253:0    0     8G  0 lvm  [SWAP]
  └─pve-root                            253:1    0  33.8G  0 lvm  /
sdc                                       8:32   0   1.8T  0 disk
└─sdc1                                    8:33   0   1.8T  0 part
  └─drbd1                               147:1    0   1.8T  1 disk
sr0                                      11:0    1  1024M  0 rom
root@pmx0:~# dmsetup info -c
Name                                Maj Min Stat Open Targ Event  UUID
pve-swap                            253   0 L--w    2    1      0 LVM-upAG64GGzE9OLCOcDKvIwuNVzCg238v0xxrfApwyCQdQN3HBHnpPOhCSJe0eMQP3
pve-root                            253   1 L--w    1    1      0 LVM-upAG64GGzE9OLCOcDKvIwuNVzCg238v0kYEDRlWWy5IXJYWqB2Fzc117JT9w2004
drbd--intel800--vg-vm--103--disk--0 253   2 L--w    0    1      0 LVM-849ik4y1F5s9tZbA21R2TaiI9uK42SPp4waMZVBqzudKY3vXBxAV3IULRlEthcGW
root@pmx0:~# drbdadm up r0
open(/dev/sda1) failed: Device or resource busy
Operation canceled.
Command 'drbdmeta 0 v08 /dev/sda1 internal apply-al' terminated with exit code 20
root@pmx0:~# dmsetup remove drbd--intel800--vg-vm--103--disk--0
root@pmx0:~# dmsetup info -c
Name                           Maj Min Stat Open Targ Event  UUID
pve-swap                       253   0 L--w    2    1      0 LVM-upAG64GGzE9OLCOcDKvIwuNVzCg238v0xxrfApwyCQdQN3HBHnpPOhCSJe0eMQP3
pve-root                       253   1 L--w    1    1      0 LVM-upAG64GGzE9OLCOcDKvIwuNVzCg238v0kYEDRlWWy5IXJYWqB2Fzc117JT9w2004
root@pmx0:~# drbdadm attach r0
Marked additional 4948 MB as out-of-sync based on AL.
root@pmx0:~# drbd-overview
 0:r0/0  Connected Secondary/Secondary UpToDate/Diskless

After doing the same on the other node, the DRBD disk state goes to UpToDate/UpToDate, and the resource can be promoted to Primary/Primary with the command drbdadm primary r0:

root@pmx1:~# drbdadm attach r0
root@pmx1:~# drbd-overview
0:r0/0 SyncTarget Secondary/Secondary Inconsistent/UpToDate
[==>.................] sync'ed: 17.3% (4100/4948)M
root@pmx0:~# drbd-overview
0:r0/0 Connected Secondary/Secondary UpToDate/UpToDate

Unfortunately, the problem reappears on every reboot: the device mapper creates the mappings again and the same thing happens.

So DRBD cannot bring the resource up at boot, and I have found no real fix for this. The LVM filter in lvm.conf does not affect this problem: it only excludes PVs/VGs/LVs from scanning, while the device mapper still creates the mappings at boot. Unfortunately, I could not find a way to prevent that.
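One thing that might be worth testing (an assumption on my part, not verified on this setup) is LVM's global_filter instead of, or in addition to, filter: plain filter is not applied by the event-driven autoactivation (pvscan --cache via udev/lvmetad) that typically creates these mappings at boot, while global_filter is honored by all LVM tools. Something like:

```
# /etc/lvm/lvm.conf -- hypothetical change, mirroring the existing filter line
devices {
    global_filter = [ "r|/dev/nvme0n1p1|", "a|/dev/sdb|", "a|sd.*|", "a|drbd.*|", "r|.*|" ]
}
```

followed by update-initramfs -u and a reboot, to see whether the mappings are still created.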

I have a workaround, but I don't like it: putting the dmsetup commands into the start section of the DRBD init script. That way the script removes the mappings before DRBD starts, but it needs the exact names of the LVs. Every time I create a new LV on the shared VG, it has to be added to the DRBD startup script.
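For what it's worth, such a hook does not have to list every LV by name. A sketch (untested; it assumes the shared VG is the drbd-intel800-vg seen in the dmsetup output above, which device-mapper renders with doubled dashes as the prefix drbd--intel800--vg-):

```shell
#!/bin/sh
# Pre-start hook sketch: remove the device-mapper mappings created directly
# on the DRBD backing device so that drbdadm can open it exclusively.
# Device-mapper doubles every '-' in VG/LV names, so VG "drbd-intel800-vg"
# shows up as names starting with "drbd--intel800--vg-".
VG_PREFIX="drbd--intel800--vg-"

# Print the names (first column of `dmsetup ls` output) that start with
# the VG prefix.
vg_mappings() {
    awk -v p="$VG_PREFIX" 'index($1, p) == 1 { print $1 }'
}

# Guarded so the sketch is harmless on machines without device-mapper tools.
if command -v dmsetup >/dev/null 2>&1; then
    dmsetup ls | vg_mappings | while read -r name; do
        echo "removing dm mapping: $name"
        dmsetup remove "$name"
    done
fi
```

Run from the start section of the DRBD init script (before drbdadm up), this removes whatever mappings exist at that moment, so new LVs on the shared VG would not need to be added by hand.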

Score: 3

Stack Overflow user

Answered on 2019-05-17 15:49:03

I ran into a similar problem and error message, although my setup is a bit different: the LVM logical volume used for DRBD storage sits on top of an MD RAID1. I don't understand what caused the problem (the system froze and I had to do a cold restart), but the commands below helped me find and fix the "busy" device.

The commands below:

dmsetup info -c  => find Major and Minor of problematic device (253 and 2 in my case)

ls -la /sys/dev/block/253\:2/holders

The holders directory contained a link to /dev/dm-9.

For all the other DRBD devices, the holders point to something like drbd3 -> ../../drbd3.

So (warning: I don't know what damage this could cause; it just worked for me):

dmsetup remove /dev/dm-9

drbdadm up RESOURCE
Score: 2
The original page content is from Stack Overflow; translation support was provided by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/55127490
