
PCS stonith (fencing) kills both nodes of a two-node cluster if the first one goes down

Unix & Linux user
Asked on 2019-06-28 07:14:45
1 answer · 2.4K views · 0 followers · 2 votes

I have configured a two-node cluster of physical servers (HP ProLiant DL560 Gen8) using pcs. I have also configured fencing on them with fence_ilo4.

A strange thing happens if one node fails (I mean a power outage): the second node dies as well. Fencing kills itself, leaving both servers offline.

How do I correct this behavior?

What I tried was adding "wait_for_all: 0" and "expected_votes: 1" to the quorum section of /etc/corosync/corosync.conf, but the surviving node still gets killed.
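For context, here is a minimal sketch of what the quorum section of /etc/corosync/corosync.conf typically looks like on a two-node cluster like this one. The two_node setting is consistent with the "2Node" flag in the quorum output further down; the rest is illustrative rather than copied from this cluster:

quorum {
    provider: corosync_votequorum
    two_node: 1       # two-node mode: quorum is retained when one of the two nodes fails; enables wait_for_all by default
    wait_for_all: 0   # the setting mentioned above: lets a lone node become quorate without first seeing its peer
}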

At some point maintenance will have to be performed on one of the servers and it will have to be shut down. If that happens, I don't want the other node to go down as well.
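As a side note on that maintenance scenario, one common approach (a sketch assuming the pcs 0.9.x syntax shipped with this Pacemaker 1.1.19 / EL7 stack) is to drain and stop the node before powering it off, so the peer has no reason to fence anything:

pcs cluster standby kvm_aquila-01     # move resources off the node (newer pcs versions use "pcs node standby")
pcs cluster stop kvm_aquila-01        # gracefully stop pacemaker and corosync on it
# ... power off, perform the maintenance, power on, then:
pcs cluster start kvm_aquila-01
pcs cluster unstandby kvm_aquila-01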

Here is some output:

[root@kvm_aquila-02 ~]# pcs quorum status
Quorum information
------------------
Date:             Fri Jun 28 09:07:18 2019
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          2
Ring ID:          1/284
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           1  
Flags:            2Node Quorate 

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1         NR kvm_aquila-01
         2          1         NR kvm_aquila-02 (local)


[root@kvm_aquila-02 ~]# pcs config show
Cluster Name: kvm_aquila
Corosync Nodes:
 kvm_aquila-01 kvm_aquila-02
Pacemaker Nodes:
 kvm_aquila-01 kvm_aquila-02

Resources:
 Clone: dlm-clone
  Meta Attrs: interleave=true ordered=true 
  Resource: dlm (class=ocf provider=pacemaker type=controld)
   Operations: monitor interval=30s on-fail=fence (dlm-monitor-interval-30s)
               start interval=0s timeout=90 (dlm-start-interval-0s)
               stop interval=0s timeout=100 (dlm-stop-interval-0s)
 Clone: clvmd-clone
  Meta Attrs: interleave=true ordered=true 
  Resource: clvmd (class=ocf provider=heartbeat type=clvm)
   Operations: monitor interval=30s on-fail=fence (clvmd-monitor-interval-30s)
               start interval=0s timeout=90s (clvmd-start-interval-0s)
               stop interval=0s timeout=90s (clvmd-stop-interval-0s)
 Group: test_VPS
  Resource: test (class=ocf provider=heartbeat type=VirtualDomain)
   Attributes: config=/shared/xml/test.xml hypervisor=qemu:///system migration_transport=ssh
   Meta Attrs: allow-migrate=true is-managed=true priority=100 target-role=Started 
   Utilization: cpu=4 hv_memory=4096
   Operations: migrate_from interval=0 timeout=120s (test-migrate_from-interval-0)
               migrate_to interval=0 timeout=120 (test-migrate_to-interval-0)
               monitor interval=10 timeout=30 (test-monitor-interval-10)
               start interval=0s timeout=300s (test-start-interval-0s)
               stop interval=0s timeout=300s (test-stop-interval-0s)

Stonith Devices:
 Resource: kvm_aquila-01 (class=stonith type=fence_ilo4)
  Attributes: ipaddr=10.0.4.39 login=fencing passwd=0ToleranciJa pcmk_host_list="kvm_aquila-01 kvm_aquila-02"
  Operations: monitor interval=60s (kvm_aquila-01-monitor-interval-60s)
 Resource: kvm_aquila-02 (class=stonith type=fence_ilo4)
  Attributes: ipaddr=10.0.4.49 login=fencing passwd=0ToleranciJa pcmk_host_list="kvm_aquila-01 kvm_aquila-02"
  Operations: monitor interval=60s (kvm_aquila-02-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
  start dlm-clone then start clvmd-clone (kind:Mandatory)
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY)
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: kvm_aquila
 dc-version: 1.1.19-8.el7_6.4-c3c624ea3d
 have-watchdog: false
 last-lrm-refresh: 1561619537
 no-quorum-policy: ignore
 stonith-enabled: true

Quorum:
  Options:
    wait_for_all: 0

[root@kvm_aquila-02 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: kvm_aquila-02 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
 Last updated: Fri Jun 28 09:14:11 2019
 Last change: Thu Jun 27 16:23:44 2019 by root via cibadmin on kvm_aquila-01
 2 nodes configured
 7 resources configured

PCSD Status:
  kvm_aquila-02: Online
  kvm_aquila-01: Online
[root@kvm_aquila-02 ~]# pcs status
Cluster name: kvm_aquila
Stack: corosync
Current DC: kvm_aquila-02 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Fri Jun 28 09:14:31 2019
Last change: Thu Jun 27 16:23:44 2019 by root via cibadmin on kvm_aquila-01

2 nodes configured
7 resources configured

Online: [ kvm_aquila-01 kvm_aquila-02 ]

Full list of resources:

 kvm_aquila-01  (stonith:fence_ilo4):   Started kvm_aquila-01
 kvm_aquila-02  (stonith:fence_ilo4):   Started kvm_aquila-02
 Clone Set: dlm-clone [dlm]
     Started: [ kvm_aquila-01 kvm_aquila-02 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ kvm_aquila-01 kvm_aquila-02 ]
 Resource Group: test_VPS
     test   (ocf::heartbeat:VirtualDomain): Started kvm_aquila-01

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

1 Answer

Unix & Linux user

Accepted answer

Posted on 2019-06-28 14:38:16

It looks like you have configured your STONITH devices so that each of them can fence both nodes. You also have no location constraints keeping the fence agent responsible for a given node from running on that same node (STONITH suicide), which is bad practice.

Try configuring the STONITH devices and location constraints like this instead:

# One STONITH device per node, each listing only the node it is responsible for fencing:
pcs stonith create kvm_aquila-01 fence_ilo4 pcmk_host_list=kvm_aquila-01 ipaddr=10.0.4.39 login=fencing passwd=0ToleranciJa op monitor interval=60s
pcs stonith create kvm_aquila-02 fence_ilo4 pcmk_host_list=kvm_aquila-02 ipaddr=10.0.4.49 login=fencing passwd=0ToleranciJa op monitor interval=60s
# Keep each fence device off the node it is meant to fence, so a node never runs its own STONITH agent:
pcs constraint location kvm_aquila-01 avoids kvm_aquila-01=INFINITY
pcs constraint location kvm_aquila-02 avoids kvm_aquila-02=INFINITY
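If it helps, the new configuration can be double-checked with something like the following (assuming pcs 0.9.x syntax): each device should list only its own node in pcmk_host_list, and each should have a constraint avoiding that node.

pcs stonith show --full
pcs constraint location show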
1 vote
Original page content provided by Unix & Linux.
Original link:

https://unix.stackexchange.com/questions/527400
