
DRBD fails to start: cannot load the drbd module

Server Fault user
Asked on 2014-11-03 02:31:41
1 answer · 10.5K views · 0 followers · 2 votes

I am trying to learn DRBD using CentOS 6.3 on VirtualBox. I set up two VMs: node1 is the original, and node2 was cloned from node1. On node1, "service drbd start" fails with the error message "Starting DRBD resources: Can not load the drbd module", while the same command works on node2. The configuration is below:

root@localhost db# cat /etc/drbd.conf

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

    #include "drbd.d/global_common.conf";
    #include "drbd.d/*.res";

    global {
        # do not participate in online usage survey
        usage-count no;
    }

    resource data {

        # write IO is reported as completed if it has reached both local
        # and remote disk
        protocol C;

        net {
            # set up peer authentication
            cram-hmac-alg sha1;
            shared-secret "s3cr3tp@ss";
            # default value 32 - increase as required
            max-buffers 512;
            # highest number of data blocks between two write barriers
            max-epoch-size 512;
            # size of the TCP socket send buffer - can tweak or set to 0 to
            # allow kernel to autotune
            sndbuf-size 0;
        }

        startup {
            # wait for connection timeout - boot process blocked
            # until DRBD resources are connected
            wfc-timeout 30;
            # WFC timeout if peer was outdated
            outdated-wfc-timeout 20;
            # WFC timeout if this node was in a degraded cluster (i.e. only had one
            # node left)
            degr-wfc-timeout 30;
        }

        disk {
            # the next two are for safety - detach on I/O error
            # and set up fencing - resource-only will attempt to
            # reach the other node and fence via the fence-peer
            # handler
            on-io-error detach;
            fencing resource-only;
            # no-disk-flushes; # if we had battery-backed RAID
            # no-md-flushes; # if we had battery-backed RAID
            # ramp up the resync rate
            # resync-rate 10M;
        }
        handlers {
            # specify the two fencing handlers
            # see: http://www.drbd.org/users-guide-8.4/s-pacemaker-fencing.html
            fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        # first node
        on node1.mycluster.org {
            # DRBD device
            device /dev/drbd0;
            # backing store device
            disk /dev/sdb;
            # IP address of node, and port to listen on
            address 192.168.1.101:7789;
            # use internal meta data (don't create a filesystem before
            # you create metadata!)
            meta-disk internal;
        }
        # second node
        on node2.mycluster.org {
            # DRBD device
            device /dev/drbd0;
            # backing store device
            disk /dev/sdb;
            # IP address of node, and port to listen on
            address 192.168.1.102:7789;
            # use internal meta data (don't create a filesystem before
            # you create metadata!)
            meta-disk internal;
        }
    }

Does anyone know what the problem is?


1 Answer

Server Fault user

Accepted answer

Posted on 2014-11-03 06:01:59

This doesn't sound like a configuration problem; it sounds like the DRBD kernel module isn't installed. You need to install the matching kmod version. (What happens if you type "modprobe drbd"?)
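The module check suggested above can be sketched as follows. This is a diagnostic sketch, assuming a Linux host; "drbd" is the module name shipped by the drbd83/drbd84 kmod packages:

```shell
# Is the drbd module currently loaded?
if lsmod | grep -q '^drbd'; then
  loaded="yes"
else
  loaded="no"
fi
echo "drbd module loaded: $loaded"

# Is a drbd module installed for the running kernel at all?
# modinfo searches /lib/modules/$(uname -r); if this fails,
# no kmod package matching the running kernel is installed.
if modinfo drbd >/dev/null 2>&1; then
  available="yes"
else
  available="no"
fi
echo "drbd module available for kernel $(uname -r): $available"
```

If the module is "available" but not "loaded", "modprobe drbd" should load it; if it is not available, no kmod package matches the running kernel.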

Try running "yum search drbd" on the command line,

then pick the right package, probably something like kmod-drbd83.

If that doesn't work, perhaps upgrade to a newer version of CentOS and the kernel.
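A common reason the module loads on only one of two otherwise identical nodes is a kernel/kmod mismatch, e.g. if a "yum update" pulled a newer kernel onto node1 after the kmod package was installed. A minimal sketch of the comparison (the version strings are hypothetical examples; on a real system take them from "uname -r" and "modinfo -F vermagic drbd"):

```shell
# Hypothetical values for illustration; on a real system use:
#   running=$(uname -r)
#   module_kernel=$(modinfo -F vermagic drbd | awk '{print $1}')
running="2.6.32-279.el6.x86_64"
module_kernel="2.6.32-358.el6.x86_64"

if [ "$running" = "$module_kernel" ]; then
  echo "module matches running kernel"
else
  echo "mismatch: module built for $module_kernel, running $running"
fi
```

When the versions differ, rebooting into the kernel the kmod was built for (or reinstalling a matching kmod package) usually fixes the "can not load the drbd module" error.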

Votes 2
Original page content provided by Server Fault.
Original link:

https://serverfault.com/questions/641520
