Hi everyone,
I am trying to set up a two-node cluster with these shared resources: ClusterIP, ClusterSamba, ClusterNFS, DRBD (a promotable clone resource), and a DRBDFS filesystem.
I started the project by following the Clusters from Scratch guide, and when I did everything in that guide it worked without problems. So I wanted to reuse some blocks from the guide and set up my project with a shared IP (ClusterIP) that is automatically assigned to one node; on that node (this is where the trouble starts :/ ) my /dev/drbd1 device should be mounted to /exports, and that mount then shared via SAMBA and NFS.
When I start the cluster, all resources come up as expected, but DRBD does not come up on the secondary node (it shows Primary/Unknown). If I start it there manually, it syncs and works. When I then stop the cluster (or force-restart the first node), all resources move to the other node and everything works fine (except that DRBD on the other node goes into the Unknown state again).
Now, the question: why is DRBD down on the secondary node when I start the cluster? Or in other words, why doesn't it start in the Secondary role on that node?
Sorry if my description is not very good :)
Here are the commands I used, followed by the configs and the resulting output:
# apt install -y pacemaker pcs psmisc policycoreutils-python-utils drbd-utils samba nfs-kernel-server
# systemctl start pcsd.service
# systemctl enable pcsd.service
# passwd hacluster
# pcs host auth alice bob
# pcs cluster setup myCluster alice bob --force
# pcs cluster start --all
# pcs property set stonith-enabled=false
# pcs property set no-quorum-policy=ignore
# modprobe drbd
# echo drbd >/etc/modules-load.d/drbd.conf
# drbdadm create-md r0
# drbdadm up r0
# drbdadm primary r0 --force
# mkfs.ext4 /dev/drbd1
# systemctl disable smbd
# systemctl disable nfs-kernel-server.service
# mkdir /exports
# vi /etc/samba/smb.conf
# vi /etc/exports
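The contents of the two edited files are not shown in the post; a minimal sketch of what they might contain is below. The share name, the export options, and the 10.1.1.0/24 subnet are my assumptions, inferred from the node addresses used later — not the author's actual configuration.

```
# /etc/samba/smb.conf -- hypothetical share section for the clustered mount
[export]
    path = /exports
    browseable = yes
    writable = yes
    guest ok = no

# /etc/exports -- hypothetical NFS export; subnet assumed from the 10.1.1.x addresses
/exports 10.1.1.0/24(rw,sync,no_subtree_check)
```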
# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=10.1.1.30 cidr_netmask=24 op monitor interval=30s
# pcs resource defaults resource-stickiness=100
# pcs resource op defaults timeout=240s
# pcs resource create ClusterSamba lsb:smbd op monitor interval=60s
# pcs resource create ClusterNFS ocf:heartbeat:nfsserver op monitor interval=60s
# pcs resource create DRBD ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s
# pcs resource promotable DRBD promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
# pcs resource create DRBDFS Filesystem device="/dev/drbd1" directory="/exports" fstype="ext4"
# pcs constraint order ClusterIP then ClusterNFS
# pcs constraint order ClusterNFS then ClusterSamba
# pcs constraint order promote DRBD-clone then start DRBDFS
# pcs constraint order DRBDFS then ClusterNFS
# pcs constraint order ClusterIP then DRBD-clone
# pcs constraint colocation add ClusterSamba with ClusterIP
# pcs constraint colocation add ClusterNFS with ClusterIP
# pcs constraint colocation add DRBDFS with DRBD-clone INFINITY with-rsc-role=Master
# pcs constraint colocation add DRBD-clone with ClusterIP
# pcs cluster stop --all && sleep 2 && pcs cluster start --all

Here is some info and data:

cat /etc/drbd.d/r0.res
resource r0 {
    device    /dev/drbd1;
    disk      /dev/sdb;
    meta-disk internal;
    net {
        allow-two-primaries;
    }
    on alice {
        address 10.1.1.31:7788;
    }
    on bob {
        address 10.1.1.32:7788;
    }
}

cat /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: myCluster
    transport: knet
    crypto_cipher: aes256
    crypto_hash: sha256
}

nodelist {
    node {
        ring0_addr: alice
        name: alice
        nodeid: 1
    }

    node {
        ring0_addr: bob
        name: bob
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
    timestamp: on
}

pcs status
Cluster name: myCluster
Stack: corosync
Current DC: alice (version 2.0.1-9e909a5bdd) - partition with quorum
Last updated: Fri May 15 12:28:30 2020
Last change: Fri May 15 11:04:50 2020 by root via cibadmin on bob
2 nodes configured
6 resources configured
Online: [ alice bob ]
Full list of resources:

 ClusterIP      (ocf::heartbeat:IPaddr2):       Started alice
 ClusterSamba   (lsb:smbd):     Started alice
 ClusterNFS     (ocf::heartbeat:nfsserver):     Started alice
 Clone Set: DRBD-clone [DRBD] (promotable)
     Masters: [ alice ]
     Stopped: [ bob ]
 DRBDFS (ocf::heartbeat:Filesystem):    Started alice

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

pcs constraint --full
Location Constraints:
Ordering Constraints:
  start ClusterIP then start ClusterNFS (kind:Mandatory) (id:order-ClusterIP-ClusterNFS-mandatory)
  start ClusterNFS then start ClusterSamba (kind:Mandatory) (id:order-ClusterNFS-ClusterSamba-mandatory)
  promote DRBD-clone then start DRBDFS (kind:Mandatory) (id:order-DRBD-clone-DRBDFS-mandatory)
  start DRBDFS then start ClusterNFS (kind:Mandatory) (id:order-DRBDFS-ClusterNFS-mandatory)
  start ClusterIP then start DRBD-clone (kind:Mandatory) (id:order-ClusterIP-DRBD-clone-mandatory)
  start ClusterIP then promote DRBD-clone (kind:Mandatory) (id:order-ClusterIP-DRBD-clone-mandatory-1)
Colocation Constraints:
  ClusterSamba with ClusterIP (score:INFINITY) (id:colocation-ClusterSamba-ClusterIP-INFINITY)
  ClusterNFS with ClusterIP (score:INFINITY) (id:colocation-ClusterNFS-ClusterIP-INFINITY)
  DRBDFS with DRBD-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-DRBDFS-DRBD-clone-INFINITY)
  DRBD-clone with ClusterIP (score:INFINITY) (id:colocation-DRBD-clone-ClusterIP-INFINITY)
Ticket Constraints:

cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 983FCB77F30137D4E127B83
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:4 dw:8 dr:17 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:4

PS:
I am usually the one answering questions rather than asking them, but I could not find an answer to this one.
Posted on 2020-05-26 09:39:35
Managed to solve this by starting the cluster via a script, which ensures DRBD is running on both nodes.
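The actual script is not shown in the answer; below is a minimal sketch of what such a wrapper might look like. The script name, the peer hostname bob, and passwordless SSH between the nodes are all assumptions. It defaults to a dry run that only prints the commands; set DRYRUN=0 on the real nodes.

```shell
#!/bin/sh
# start-cluster.sh (hypothetical) -- bring DRBD up on both nodes before
# Pacemaker starts managing it, so neither side is left in WFConnection
# with a Primary/Unknown peer.
set -eu

# DRYRUN defaults to 1 (print commands only); set DRYRUN=0 to really run them.
run() { if [ "${DRYRUN:-1}" = "0" ]; then "$@"; else echo "$*"; fi; }

run drbdadm up r0           # local node
run ssh bob drbdadm up r0   # peer node; hostname is an assumption
run sleep 5                 # give the DRBD connection a moment to establish
run pcs cluster start --all
```

Bringing DRBD up on both sides before `pcs cluster start` matches the observation above that DRBD syncs and works once it is started manually.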
https://unix.stackexchange.com/questions/586802