
Pool gone after Ubuntu 20.04 upgrade

Server Fault user
Asked on 2020-05-23 10:44:35
1 answer · 1.5K views · 0 followers · 1 vote

I upgraded my server (SuperMicro X11-SSM-F, LSISA9911-8i) from Ubuntu 18.04 to 20.04. The server has two zpools: one consisting of a single WD Red 10 TB (downloadpool), and another made of 8 WD Red 10 TB drives plus 2 Seagate IronWolf 8 TB drives, configured as 5x2 mirrors (masterpool). The pools were created using /dev/disk/by-id references so the names stay stable across reboots. The pools are scrubbed regularly; the last scrub was a few weeks ago and showed no errors.
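For reference, a pool of this layout is typically created by passing stable by-id paths to zpool create. A minimal sketch (the wwn identifiers below are hypothetical placeholders, not the actual drives):

```shell
# Hypothetical example: build a pool of mirrored pairs using stable
# /dev/disk/by-id paths instead of volatile /dev/sdX names.
zpool create masterpool \
  mirror /dev/disk/by-id/wwn-0xAAAAAAAAAAAAAAAA /dev/disk/by-id/wwn-0xBBBBBBBBBBBBBBBB \
  mirror /dev/disk/by-id/wwn-0xCCCCCCCCCCCCCCCC /dev/disk/by-id/wwn-0xDDDDDDDDDDDDDDDD
```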

When I rebooted after updating to Ubuntu 20.04, the second pool (masterpool) was gone. Running zpool import brought it back, but it imported most of the disks (the WDs, though not the Seagates) with sdX references. Moreover, the pool with the single WD was fine and still referenced its disk by-id. The zpool status output for masterpool looked like this (from memory):

    NAME                                  STATE     READ WRITE CKSUM
    masterpool                            ONLINE       0     0     0
      mirror-0                            ONLINE       0     0     0
        sdb                               ONLINE       0     0     0
        sdk                               ONLINE       0     0     0
      mirror-1                            ONLINE       0     0     0
        sdi                               ONLINE       0     0     0
        sdf                               ONLINE       0     0     0
      mirror-2                            ONLINE       0     0     0
        sdd                               ONLINE       0     0     0
        sde                               ONLINE       0     0     0
      mirror-3                            ONLINE       0     0     0
        sdh                               ONLINE       0     0     0
        sdc                               ONLINE       0     0     0
      mirror-4                            ONLINE       0     0     0
        ata-ST8000VN0022-2EL112_ZA17FZXF  ONLINE       0     0     0
        ata-ST8000VN0022-2EL112_ZA17H5D3  ONLINE       0     0     0

This is not ideal, because those identifiers are not stable, so after some searching online I exported the pool again and ran zpool import -d /dev/disk/by-id masterpool
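The export/re-import sequence described above amounts to:

```shell
# Export the pool, then re-import it, asking ZFS to resolve the vdevs
# through /dev/disk/by-id instead of the bare /dev/sdX names.
zpool export masterpool
zpool import -d /dev/disk/by-id masterpool
```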

But now zpool is telling me there are checksum errors:

    NAME                                  STATE     READ WRITE CKSUM
    masterpool                            ONLINE       0     0     0
      mirror-0                            ONLINE       0     0     0
        wwn-0x5000cca26af27d8b            ONLINE       0     0     2
        wwn-0x5000cca273ee8907            ONLINE       0     0     0
      mirror-1                            ONLINE       0     0     0
        wwn-0x5000cca26aeb9280            ONLINE       0     0     8
        wwn-0x5000cca273eeaed7            ONLINE       0     0     0
      mirror-2                            ONLINE       0     0     0
        wwn-0x5000cca273c21a05            ONLINE       0     0     0
        wwn-0x5000cca267eaa17a            ONLINE       0     0     0
      mirror-3                            ONLINE       0     0     0
        wwn-0x5000cca26af7e655            ONLINE       0     0     0
        wwn-0x5000cca273c099dd            ONLINE       0     0     0
      mirror-4                            ONLINE       0     0     0
        ata-ST8000VN0022-2EL112_ZA17FZXF  ONLINE       0     0     0
        ata-ST8000VN0022-2EL112_ZA17H5D3  ONLINE       0     0     0

So I am now running a scrub, and ZFS has found some more checksum errors:

  pool: masterpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub in progress since Fri May 22 21:47:34 2020
        27.1T scanned at 600M/s, 27.0T issued at 597M/s, 31.1T total
        112K repaired, 86.73% done, 0 days 02:00:45 to go
config:

        NAME                                  STATE     READ WRITE CKSUM
        masterpool                            DEGRADED     0     0     0
          mirror-0                            DEGRADED     0     0     0
            wwn-0x5000cca26af27d8b            DEGRADED     0     0    15  too many errors  (repairing)
            wwn-0x5000cca273ee8907            ONLINE       0     0     0
          mirror-1                            DEGRADED     0     0     0
            wwn-0x5000cca26aeb9280            DEGRADED     0     0    18  too many errors  (repairing)
            wwn-0x5000cca273eeaed7            ONLINE       0     0     0
          mirror-2                            ONLINE       0     0     0
            wwn-0x5000cca273c21a05            ONLINE       0     0     0
            wwn-0x5000cca267eaa17a            ONLINE       0     0     0
          mirror-3                            ONLINE       0     0     0
            wwn-0x5000cca26af7e655            ONLINE       0     0     0
            wwn-0x5000cca273c099dd            ONLINE       0     0     0
          mirror-4                            ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA17FZXF  ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA17H5D3  ONLINE       0     0     0
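The scrub itself is started and monitored with:

```shell
# Kick off a scrub and watch its progress; status shows the
# "scrub in progress" line with percent done and an ETA.
zpool scrub masterpool
zpool status masterpool
```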

Strangely, smartctl shows no errors in the SMART monitoring data (the output for both affected disks is similar; only one is shown):

$ sudo smartctl /dev/disk/by-id/wwn-0x5000cca26aeb9280 -a
...
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0004   129   129   054    Old_age   Offline      -       112
  3 Spin_Up_Time            0x0007   153   153   024    Pre-fail  Always       -       431 (Average 430)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       31
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000a   100   100   067    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0004   128   128   020    Old_age   Offline      -       18
  9 Power_On_Hours          0x0012   098   098   000    Old_age   Always       -       15474
 10 Spin_Retry_Count        0x0012   100   100   060    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       31
 22 Helium_Level            0x0023   100   100   025    Pre-fail  Always       -       100
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       664
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       664
194 Temperature_Celsius     0x0002   158   158   000    Old_age   Always       -       41 (Min/Max 16/41)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%        19         -
# 2  Short offline       Completed without error       00%         0         -

...

Also, I noticed that many of the aliases in /dev/disk/by-id are gone (all the ata-* entries for the WDs have disappeared, except the single one used by the downloadpool):

# ls /dev/disk/by-id/ -l
total 0
lrwxrwxrwx 1 root root  9 May 22 23:19 ata-Samsung_SSD_850_EVO_500GB_S2RANX0H608885H -> ../../sda
lrwxrwxrwx 1 root root 10 May 22 23:19 ata-Samsung_SSD_850_EVO_500GB_S2RANX0H608885H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 May 23 01:28 ata-ST8000VN0022-2EL112_ZA17FZXF -> ../../sdc
lrwxrwxrwx 1 root root 10 May 23 01:28 ata-ST8000VN0022-2EL112_ZA17FZXF-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 23 01:28 ata-ST8000VN0022-2EL112_ZA17FZXF-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 May 23 01:16 ata-ST8000VN0022-2EL112_ZA17H5D3 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 23 01:16 ata-ST8000VN0022-2EL112_ZA17H5D3-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 23 01:16 ata-ST8000VN0022-2EL112_ZA17H5D3-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 May 22 23:21 ata-WDC_WD100EFAX-68LHPN0_2YG1R7PD -> ../../sdd
lrwxrwxrwx 1 root root 10 May 22 23:21 ata-WDC_WD100EFAX-68LHPN0_2YG1R7PD-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 22 23:21 ata-WDC_WD100EFAX-68LHPN0_2YG1R7PD-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 May 22 23:19 scsi-0ATA_Samsung_SSD_850_S2RANX0H608885H -> ../../sda
lrwxrwxrwx 1 root root 10 May 22 23:19 scsi-0ATA_Samsung_SSD_850_S2RANX0H608885H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 May 23 01:28 scsi-0ATA_ST8000VN0022-2EL_ZA17FZXF -> ../../sdc
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-0ATA_ST8000VN0022-2EL_ZA17FZXF-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-0ATA_ST8000VN0022-2EL_ZA17FZXF-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 May 23 01:16 scsi-0ATA_ST8000VN0022-2EL_ZA17H5D3 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-0ATA_ST8000VN0022-2EL_ZA17H5D3-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-0ATA_ST8000VN0022-2EL_ZA17H5D3-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 May 22 23:21 scsi-0ATA_WDC_WD100EFAX-68_2YG1R7PD -> ../../sdd
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-0ATA_WDC_WD100EFAX-68_2YG1R7PD-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-0ATA_WDC_WD100EFAX-68_2YG1R7PD-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 May 22 23:19 scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RANX0H608885H -> ../../sda
lrwxrwxrwx 1 root root 10 May 22 23:19 scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RANX0H608885H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 May 23 01:28 scsi-1ATA_ST8000VN0022-2EL112_ZA17FZXF -> ../../sdc
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-1ATA_ST8000VN0022-2EL112_ZA17FZXF-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-1ATA_ST8000VN0022-2EL112_ZA17FZXF-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 May 23 01:16 scsi-1ATA_ST8000VN0022-2EL112_ZA17H5D3 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-1ATA_ST8000VN0022-2EL112_ZA17H5D3-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-1ATA_ST8000VN0022-2EL112_ZA17H5D3-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 May 22 23:21 scsi-1ATA_WDC_WD100EFAX-68LHPN0_2YG1R7PD -> ../../sdd
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-1ATA_WDC_WD100EFAX-68LHPN0_2YG1R7PD-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-1ATA_WDC_WD100EFAX-68LHPN0_2YG1R7PD-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 May 23 01:28 scsi-35000c500a2e631c6 -> ../../sdc
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-35000c500a2e631c6-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-35000c500a2e631c6-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 May 23 01:16 scsi-35000c500a2edebe0 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-35000c500a2edebe0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-35000c500a2edebe0-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 May 23 00:38 scsi-35000cca267eaa17a -> ../../sdg
lrwxrwxrwx 1 root root 10 May 23 00:38 scsi-35000cca267eaa17a-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 May 23 00:38 scsi-35000cca267eaa17a-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 May 23 01:20 scsi-35000cca26aeb9280 -> ../../sdl
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-35000cca26aeb9280-part1 -> ../../sdl1
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-35000cca26aeb9280-part9 -> ../../sdl9
lrwxrwxrwx 1 root root  9 May 23 01:20 scsi-35000cca26af27d8b -> ../../sdk
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-35000cca26af27d8b-part1 -> ../../sdk1
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-35000cca26af27d8b-part9 -> ../../sdk9
lrwxrwxrwx 1 root root  9 May 23 02:35 scsi-35000cca26af7e655 -> ../../sdi
lrwxrwxrwx 1 root root 10 May 23 02:35 scsi-35000cca26af7e655-part1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 May 23 02:35 scsi-35000cca26af7e655-part9 -> ../../sdi9
lrwxrwxrwx 1 root root  9 May 23 00:35 scsi-35000cca273c099dd -> ../../sdf
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-35000cca273c099dd-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-35000cca273c099dd-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 May 22 23:21 scsi-35000cca273c0c7e3 -> ../../sdd
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-35000cca273c0c7e3-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-35000cca273c0c7e3-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 May 23 03:01 scsi-35000cca273c21a05 -> ../../sdj
lrwxrwxrwx 1 root root 10 May 23 03:01 scsi-35000cca273c21a05-part1 -> ../../sdj1
lrwxrwxrwx 1 root root 10 May 23 03:01 scsi-35000cca273c21a05-part9 -> ../../sdj9
lrwxrwxrwx 1 root root  9 May 23 00:35 scsi-35000cca273ee8907 -> ../../sde
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-35000cca273ee8907-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-35000cca273ee8907-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 May 23 00:04 scsi-35000cca273eeaed7 -> ../../sdh
lrwxrwxrwx 1 root root 10 May 23 00:04 scsi-35000cca273eeaed7-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 May 23 00:04 scsi-35000cca273eeaed7-part9 -> ../../sdh9
lrwxrwxrwx 1 root root  9 May 22 23:19 scsi-35002538d40f8ba4c -> ../../sda
lrwxrwxrwx 1 root root 10 May 22 23:19 scsi-35002538d40f8ba4c-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 May 22 23:19 scsi-SATA_Samsung_SSD_850_S2RANX0H608885H -> ../../sda
lrwxrwxrwx 1 root root 10 May 22 23:19 scsi-SATA_Samsung_SSD_850_S2RANX0H608885H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 May 23 01:28 scsi-SATA_ST8000VN0022-2EL_ZA17FZXF -> ../../sdc
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-SATA_ST8000VN0022-2EL_ZA17FZXF-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 23 01:28 scsi-SATA_ST8000VN0022-2EL_ZA17FZXF-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 May 23 01:16 scsi-SATA_ST8000VN0022-2EL_ZA17H5D3 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-SATA_ST8000VN0022-2EL_ZA17H5D3-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 23 01:16 scsi-SATA_ST8000VN0022-2EL_ZA17H5D3-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 May 23 01:20 scsi-SATA_WDC_WD100EFAX-68_2TK2VELD -> ../../sdl
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-SATA_WDC_WD100EFAX-68_2TK2VELD-part1 -> ../../sdl1
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-SATA_WDC_WD100EFAX-68_2TK2VELD-part9 -> ../../sdl9
lrwxrwxrwx 1 root root  9 May 23 01:20 scsi-SATA_WDC_WD100EFAX-68_2TKL26ZD -> ../../sdk
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-SATA_WDC_WD100EFAX-68_2TKL26ZD-part1 -> ../../sdk1
lrwxrwxrwx 1 root root 10 May 23 01:20 scsi-SATA_WDC_WD100EFAX-68_2TKL26ZD-part9 -> ../../sdk9
lrwxrwxrwx 1 root root  9 May 23 02:35 scsi-SATA_WDC_WD100EFAX-68_2TKYZ3ND -> ../../sdi
lrwxrwxrwx 1 root root 10 May 23 02:35 scsi-SATA_WDC_WD100EFAX-68_2TKYZ3ND-part1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 May 23 02:35 scsi-SATA_WDC_WD100EFAX-68_2TKYZ3ND-part9 -> ../../sdi9
lrwxrwxrwx 1 root root  9 May 23 00:35 scsi-SATA_WDC_WD100EFAX-68_2YG19ZMD -> ../../sdf
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-SATA_WDC_WD100EFAX-68_2YG19ZMD-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-SATA_WDC_WD100EFAX-68_2YG19ZMD-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 May 22 23:21 scsi-SATA_WDC_WD100EFAX-68_2YG1R7PD -> ../../sdd
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-SATA_WDC_WD100EFAX-68_2YG1R7PD-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 22 23:21 scsi-SATA_WDC_WD100EFAX-68_2YG1R7PD-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 May 23 03:01 scsi-SATA_WDC_WD100EFAX-68_2YG4MA0D -> ../../sdj
lrwxrwxrwx 1 root root 10 May 23 03:01 scsi-SATA_WDC_WD100EFAX-68_2YG4MA0D-part1 -> ../../sdj1
lrwxrwxrwx 1 root root 10 May 23 03:01 scsi-SATA_WDC_WD100EFAX-68_2YG4MA0D-part9 -> ../../sdj9
lrwxrwxrwx 1 root root  9 May 23 00:35 scsi-SATA_WDC_WD100EFAX-68_2YK9BHKD -> ../../sde
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-SATA_WDC_WD100EFAX-68_2YK9BHKD-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 May 23 00:35 scsi-SATA_WDC_WD100EFAX-68_2YK9BHKD-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 May 23 00:04 scsi-SATA_WDC_WD100EFAX-68_2YK9PKUD -> ../../sdh
lrwxrwxrwx 1 root root 10 May 23 00:04 scsi-SATA_WDC_WD100EFAX-68_2YK9PKUD-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 May 23 00:04 scsi-SATA_WDC_WD100EFAX-68_2YK9PKUD-part9 -> ../../sdh9
lrwxrwxrwx 1 root root  9 May 23 00:38 scsi-SATA_WDC_WD100EFAX-68_JEK0T76Z -> ../../sdg
lrwxrwxrwx 1 root root 10 May 23 00:38 scsi-SATA_WDC_WD100EFAX-68_JEK0T76Z-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 May 23 00:38 scsi-SATA_WDC_WD100EFAX-68_JEK0T76Z-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 May 23 01:28 wwn-0x5000c500a2e631c6 -> ../../sdc
lrwxrwxrwx 1 root root 10 May 23 01:28 wwn-0x5000c500a2e631c6-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 23 01:28 wwn-0x5000c500a2e631c6-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 May 23 01:16 wwn-0x5000c500a2edebe0 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 23 01:16 wwn-0x5000c500a2edebe0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 23 01:16 wwn-0x5000c500a2edebe0-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 May 23 00:38 wwn-0x5000cca267eaa17a -> ../../sdg
lrwxrwxrwx 1 root root 10 May 23 00:38 wwn-0x5000cca267eaa17a-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 May 23 00:38 wwn-0x5000cca267eaa17a-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 May 23 01:20 wwn-0x5000cca26aeb9280 -> ../../sdl
lrwxrwxrwx 1 root root 10 May 23 01:20 wwn-0x5000cca26aeb9280-part1 -> ../../sdl1
lrwxrwxrwx 1 root root 10 May 23 01:20 wwn-0x5000cca26aeb9280-part9 -> ../../sdl9
lrwxrwxrwx 1 root root  9 May 23 01:20 wwn-0x5000cca26af27d8b -> ../../sdk
lrwxrwxrwx 1 root root 10 May 23 01:20 wwn-0x5000cca26af27d8b-part1 -> ../../sdk1
lrwxrwxrwx 1 root root 10 May 23 01:20 wwn-0x5000cca26af27d8b-part9 -> ../../sdk9
lrwxrwxrwx 1 root root  9 May 23 02:35 wwn-0x5000cca26af7e655 -> ../../sdi
lrwxrwxrwx 1 root root 10 May 23 02:35 wwn-0x5000cca26af7e655-part1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 May 23 02:35 wwn-0x5000cca26af7e655-part9 -> ../../sdi9
lrwxrwxrwx 1 root root  9 May 23 00:35 wwn-0x5000cca273c099dd -> ../../sdf
lrwxrwxrwx 1 root root 10 May 23 00:35 wwn-0x5000cca273c099dd-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 May 23 00:35 wwn-0x5000cca273c099dd-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 May 22 23:21 wwn-0x5000cca273c0c7e3 -> ../../sdd
lrwxrwxrwx 1 root root 10 May 22 23:21 wwn-0x5000cca273c0c7e3-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 22 23:21 wwn-0x5000cca273c0c7e3-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 May 23 03:01 wwn-0x5000cca273c21a05 -> ../../sdj
lrwxrwxrwx 1 root root 10 May 23 03:01 wwn-0x5000cca273c21a05-part1 -> ../../sdj1
lrwxrwxrwx 1 root root 10 May 23 03:01 wwn-0x5000cca273c21a05-part9 -> ../../sdj9
lrwxrwxrwx 1 root root  9 May 23 00:35 wwn-0x5000cca273ee8907 -> ../../sde
lrwxrwxrwx 1 root root 10 May 23 00:35 wwn-0x5000cca273ee8907-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 May 23 00:35 wwn-0x5000cca273ee8907-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 May 23 00:04 wwn-0x5000cca273eeaed7 -> ../../sdh
lrwxrwxrwx 1 root root 10 May 23 00:04 wwn-0x5000cca273eeaed7-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 May 23 00:04 wwn-0x5000cca273eeaed7-part9 -> ../../sdh9
lrwxrwxrwx 1 root root  9 May 22 23:19 wwn-0x5002538d40f8ba4c -> ../../sda
lrwxrwxrwx 1 root root 10 May 22 23:19 wwn-0x5002538d40f8ba4c-part1 -> ../../sda1

So this raises a number of questions:

1) Why did my pool go missing? Was it because the symlinks in /dev/disk/by-id/ disappeared and ZFS could not locate most of the disks?

2) Are the checksum errors something to worry about? The disks look healthy. While the pool was imported with sdX references I only browsed a few directories and files; if ZFS imported the disks in the wrong order, could that cause checksums to be incorrectly rewritten?

3) How do I get the missing /dev/disk/by-id/ata-* symlinks back? Did something change in Ubuntu 20.04 that would make them disappear?
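One way to investigate the missing by-id links is to ask udev directly; a sketch (whether this restores the ata-* names depends on the udev rules shipped with the release):

```shell
# Show which symlinks udev currently generates for a given disk.
udevadm info --query=symlink --name=/dev/sdd

# Ask udev to re-evaluate its rules and regenerate /dev/disk/* links.
udevadm trigger --subsystem-match=block
udevadm settle
```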

4) I thought referencing my disks via /dev/disk/by-id/ was a good idea because those names are stable. Is this not the best approach?

5) I don't like the wwn-* names, because they are non-descriptive to me. I would prefer names that reflect the disks' serial numbers, so that if a replacement is ever needed I can easily identify them. I went ahead and set up aliases (for the wwn-* names) in /dev/disk/by-vdev/, following the advice in the "Consistent device IDs via the vdev_id.conf file" section of http://kbdone.com/zfs-basics/:

$ cat /etc/zfs/vdev_id.conf
alias ST8000VN0022-2EL_ZA17H5D3 /dev/disk/by-id/wwn-0x5000c500a2edebe0
alias ST8000VN0022-2EL_ZA17FZXF /dev/disk/by-id/wwn-0x5000c500a2e631c6
alias WD100EFAX-68_2YG1R7PD /dev/disk/by-id/wwn-0x5000cca273c0c7e3
alias WD100EFAX-68_2YK9BHKD /dev/disk/by-id/wwn-0x5000cca273ee8907
...
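After editing /etc/zfs/vdev_id.conf, the aliases have to be materialized and the pool re-imported through them; roughly (a sketch based on the ZFS-on-Linux vdev_id tooling):

```shell
# Regenerate /dev/disk/by-vdev/* from /etc/zfs/vdev_id.conf.
udevadm trigger
udevadm settle
ls -l /dev/disk/by-vdev/

# Re-import the pool so it picks up the alias names.
zpool export masterpool
zpool import -d /dev/disk/by-vdev masterpool
```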

Any thoughts?

Thanks!

EDIT: zpool status output after the scrub finished:

root@cloud:~# zpool status
  pool: downloadpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0 days 11:33:18 with 0 errors on Sun May 10 11:57:19 2020
config:

        NAME                                  STATE     READ WRITE CKSUM
        downloadpool                          ONLINE       0     0     0
          ata-WDC_WD100EFAX-68LHPN0_2YG1R7PD  ONLINE       0     0     0

errors: No known data errors

  pool: masterpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 112K in 0 days 15:06:09 with 0 errors on Sat May 23 12:53:43 2020
config:

        NAME                                  STATE     READ WRITE CKSUM
        masterpool                            DEGRADED     0     0     0
          mirror-0                            DEGRADED     0     0     0
            wwn-0x5000cca26af27d8b            DEGRADED     0     0    15  too many errors
            wwn-0x5000cca273ee8907            ONLINE       0     0     0
          mirror-1                            DEGRADED     0     0     0
            wwn-0x5000cca26aeb9280            DEGRADED     0     0    18  too many errors
            wwn-0x5000cca273eeaed7            ONLINE       0     0     0
          mirror-2                            ONLINE       0     0     0
            wwn-0x5000cca273c21a05            ONLINE       0     0     0
            wwn-0x5000cca267eaa17a            ONLINE       0     0     0
          mirror-3                            ONLINE       0     0     0
            wwn-0x5000cca26af7e655            ONLINE       0     0     0
            wwn-0x5000cca273c099dd            ONLINE       0     0     0
          mirror-4                            ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA17FZXF  ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA17H5D3  ONLINE       0     0     0

errors: No known data errors
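Since the scrub completed with no known data errors, the per-device error counters can be reset as the status output suggests:

```shell
# Reset the READ/WRITE/CKSUM counters; the vdevs return to ONLINE as
# long as they stay healthy. Only do this once you trust the scrub result.
zpool clear masterpool
zpool status masterpool
```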

1 Answer

Server Fault user

Answered on 2020-07-21 13:10:39

I had the same problem. Your post helped me move in the right direction. Here is what I found.

I have 6 drives: 2 drives in zfs pool 'A' attached to the motherboard's SATA controller, and 4 drives in zfs pool 'B' attached to my LSI SAS 9211 controller. Both pools were set up referencing their devices from /dev/disk/by-id.

After upgrading from Ubuntu 18.04 to Ubuntu 20.04, the device ids of all disks attached to the SAS controller changed from ata-* to scsi-SATA*. After rebooting the server, zfs pool B was lost, because ZFS could not find the device ids during import. The device ids of the drives attached to the motherboard's SATA controller stayed the same, so the zfs pool using those drives could still be imported and was not lost after the release upgrade.

Here is how I fixed the missing pool 'B':

First, I listed all the pools available for import:

sudo zpool import

It listed my missing pool 'B' with all the correct drives in the pool, but named as /dev devices. So I imported the pool using the device ids listed in /dev/disk/by-id. I got a warning that the pool appeared to be potentially active, so I had to force the import with -f, like this:

sudo zpool import -f -d /dev/disk/by-id B

Everything was fine again; pool B was usable once more. I did not export the pool, and I had not told the original import to use device ids first. The device ids used now are different: wwn-*

I did a scrub on the pool; no errors.
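The whole recovery sequence thus boils down to:

```shell
zpool import                          # list importable pools
zpool import -f -d /dev/disk/by-id B  # force-import using by-id names
zpool scrub B                         # verify data integrity afterwards
```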

To answer your questions:

  1. I think the upgrade from Ubuntu 18.04 to 20.04 caused the links in /dev/disk/by-id to change.
  2. I did not import the pool with /dev references; I imported it with the -f option. That is the difference between what you and I did. But I can't imagine this would be a problem, unless the wrong drives were used.
  3. I did not get the old by-id links back. But by importing the pool with the instruction to use disk ids, it now uses the new disk ids, which is good enough for me. I don't need the old ones anymore.
  4. I still think referencing disks via /dev/disk/by-id/ is a good idea. They are stable across reboots and when disks are physically moved inside the server (I tested this). I'm a bit disappointed that a release upgrade can break the disk id naming, but I'm glad that in my case simply importing the pool again fixed the problem.
  5. I had the same reasoning. Thanks for the tip about using aliases! Maybe I'll use that.
Votes: 0
Original content provided by Server Fault: https://serverfault.com/questions/1018350
