
CephFS mount: Can't read superblock

Stack Overflow user
Asked on 2019-01-04 21:46:30
1 answer · 719 views · 0 followers · 0 votes

Any suggestions on this problem? I have already tried countless things, but nothing works.

This command fails with the error Can't read superblock:

sudo mount -t ceph worker2:6789:/ /mnt/mycephfs -o name=admin,secret=AQAYjCpcAAAAABAAxs1mrh6nnx+0+1VUqW2p9A==
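
As a sanity check, the same mount is often attempted with the key read from a file instead of being passed inline. This is only a sketch; the path /etc/ceph/admin.secret is a hypothetical location for the extracted admin key, and the behaviour should be identical to the command above:

sudo mkdir -p /mnt/mycephfs
# assumes the client.admin key was saved to /etc/ceph/admin.secret beforehand (hypothetical path)
sudo mount -t ceph worker2:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret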

Some more info that might help:

uname -a
Linux cephfs-test-admin-1 4.14.84-coreos #1 SMP Sat Dec 15 22:39:45 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Neither dmesg nor ceph status shows any problems:

dmesg | tail
[228343.304863] libceph: resolve 'worker2' (ret=0): 10.1.96.4:0
[228343.322279] libceph: mon0 10.1.96.4:6789 session established
[228343.323622] libceph: client107238 fsid 762e6263-a95c-40da-9813-9df4fef12f53


ceph -s
  cluster:
    id:     762e6263-a95c-40da-9813-9df4fef12f53
    health: HEALTH_WARN
            too few PGs per OSD (16 < min 30)
  services:
    mon: 3 daemons, quorum worker2,worker0,worker1
    mgr: worker1(active)
    mds: cephfs-1/1/1 up  {0=mds-ceph-mds-85b4fbb478-c6jzv=up:active}
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   2 pools, 16 pgs
    objects: 21 objects, 2246 bytes
    usage:   342 MB used, 76417 MB / 76759 MB avail
    pgs:     16 active+clean

ceph osd status
+----+---------+-------+-------+--------+---------+--------+---------+-----------+
| id |   host  |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+---------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | worker2 |  114M | 24.8G |    0   |     0   |    0   |     0   | exists,up |
| 1  | worker0 |  114M | 24.8G |    0   |     0   |    0   |     0   | exists,up |
| 2  | worker1 |  114M | 24.8G |    0   |     0   |    0   |     0   | exists,up |
+----+---------+-------+-------+--------+---------+--------+---------+-----------+

ceph -v
ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949) luminous (stable)
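
With the cluster reporting healthy like this, a few extra checks on the monitor/admin node can help narrow things down. This is only a sketch; client.admin is assumed to be the key used in the mount attempt:

ceph fs ls                            # confirm the CephFS filesystem and its data/metadata pools exist
ceph mds stat                         # confirm an MDS is up:active
ceph auth get-key client.admin        # confirm the key matches the secret passed to mount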

Some output from syslog:

Jan 04 21:24:04 worker2 kernel: libceph: resolve 'worker2' (ret=0): 10.1.96.4:0
Jan 04 21:24:04 worker2 kernel: libceph: mon0 10.1.96.4:6789 session established
Jan 04 21:24:04 worker2 kernel: libceph: client159594 fsid 762e6263-a95c-40da-9813-9df4fef12f53
Jan 04 21:24:10 worker2 systemd[1]: Started OpenSSH per-connection server daemon (58.242.83.28:36729).
Jan 04 21:24:11 worker2 sshd[12315]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=58.242.83.28  us>
Jan 04 21:24:14 worker2 sshd[12315]: Failed password for root from 58.242.83.28 port 36729 ssh2
Jan 04 21:24:15 worker2 sshd[12315]: Failed password for root from 58.242.83.28 port 36729 ssh2
Jan 04 21:24:18 worker2 sshd[12315]: Failed password for root from 58.242.83.28 port 36729 ssh2
Jan 04 21:24:18 worker2 sshd[12315]: Received disconnect from 58.242.83.28 port 36729:11:  [preauth]
Jan 04 21:24:18 worker2 sshd[12315]: Disconnected from authenticating user root 58.242.83.28 port 36729 [preauth]
Jan 04 21:24:18 worker2 sshd[12315]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=58.242.83.28  user=root
Jan 04 21:24:56 worker2 systemd[1]: Started OpenSSH per-connection server daemon (24.114.79.151:58123).
Jan 04 21:24:56 worker2 sshd[12501]: Accepted publickey for core from 24.114.79.151 port 58123 ssh2: RSA SHA256:t4t9yXeR2yC7s9c37mdS/F7koUs2x>
Jan 04 21:24:56 worker2 sshd[12501]: pam_unix(sshd:session): session opened for user core by (uid=0)
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument
Jan 04 21:24:56 worker2 systemd[1]: Failed to set up mount unit: Invalid argument

1 Answer

Stack Overflow user
Answered on 2019-01-12 15:36:04

So after digging into the problem, it turned out to be an XFS partitioning issue.

No idea how I missed it in the first place.

In short: the attempt to create the partitions with XFS failed. That is, running mkfs.xfs /dev/vdb1 would simply hang. The OS would still create and label the partitions correctly, but they were broken, a fact only discovered when trying to mount them and getting the Can't read superblock error.
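
A quick, read-only way to confirm that a partition is in this state (the device name /dev/vdb1 is taken from the answer; xfs_repair -n only checks and does not modify anything):

sudo file -s /dev/vdb1         # shows whether a filesystem signature is present
sudo blkid /dev/vdb1           # prints the filesystem type/UUID if one exists
sudo xfs_repair -n /dev/vdb1   # no-modify check; fails loudly if the superblock is unreadable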

So what happened with Ceph was:
1. Run the deploy.
2. Create the XFS partitions with mkfs.xfs ...
3. The OS created those faulty partitions.
4. Since the partitions could still be read by the OS, all the status reports and logs showed no problems (mkfs.xfs reported no errors, it simply hung).
5. When you then try to mount CephFS or use block storage, the whole thing blows up because of the corrupted partitions.
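
A minimal recovery sketch, assuming the underlying disk problem has already been fixed and the data on /dev/vdb1 can be discarded, is to wipe the stale signatures and reformat before redeploying the OSD:

sudo wipefs -a /dev/vdb1       # remove any leftover filesystem signatures
sudo mkfs.xfs -f /dev/vdb1     # should complete in seconds; hanging again points back at the disk/provider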

Root cause: still unclear. However, I suspect something was done incorrectly when the SSD disks were provisioned/attached by my cloud provider. Everything works fine now.

Votes: 0
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/54046535