
Device mapper on RHEL6 fails to create devs for an LVM logical volume

Unix & Linux user
Asked on 2014-04-03 12:50:01
2 answers · 11.8K views · 0 followers · 4 votes

I have a XEN guest running RHEL6, with a LUN presented to it from Dom0. The LUN contains an LVM volume group called vg_ALHINT (INT being short for Integration). The data on it is for Oracle 11g. The VG is imported and activated, and udev creates a mapping for each logical volume.
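
The import/activation commands themselves are not shown in the post; as a minimal sketch (my assumption of the usual sequence, not necessarily the exact script used):

# Scan for the newly presented PV/VG inside the guest
pvscan
# Import the exported VG, then activate its logical volumes
vgimport vg_ALHINT
vgchange -ay vg_ALHINT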

However, the device mapper does not create a mapping for one of the logical volumes (the oradata LV): no /dev/dm-* node is created for the LV in question, and it has different major/minor numbers compared to the other LVs.

#  dmsetup table
vg_ALHINT-arch: 0 4300800 linear 202:16 46139392
vg0-lv6: 0 20971520 linear 202:2 30869504
vg_ALHINT-safeset2: 0 4194304 linear 202:16 35653632
vg0-lv5: 0 2097152 linear 202:2 28772352
vg_ALHINT-safeset1: 0 4186112 linear 202:16 54528000
vg0-lv4: 0 524288 linear 202:2 28248064
vg0-lv3: 0 4194304 linear 202:2 24053760
vg_ALHINT-oradata:     **
vg0-lv2: 0 4194304 linear 202:2 19859456
vg0-lv1: 0 2097152 linear 202:2 17762304
vg0-lv0: 0 17760256 linear 202:2 2048
vg_ALHINT-admin: 0 4194304 linear 202:16 41945088

** As you can see above, the table for vg_ALHINT-oradata is empty.

# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root  10, 58 Apr  3 13:43 control
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv0 -> ../dm-0
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv1 -> ../dm-1
lrwxrwxrwx 1 root root       7 Apr  3 14:35 vg0-lv2 -> ../dm-2
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv3 -> ../dm-3
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv4 -> ../dm-4
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv5 -> ../dm-5
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv6 -> ../dm-6
lrwxrwxrwx 1 root root       7 Apr  3 13:59 vg_ALHINT-admin -> ../dm-8
lrwxrwxrwx 1 root root       7 Apr  3 13:59 vg_ALHINT-arch -> ../dm-9
brw-rw---- 1 root disk 253,  7 Apr  3 14:37 vg_ALHINT-oradata
lrwxrwxrwx 1 root root       8 Apr  3 13:59 vg_ALHINT-safeset1 -> ../dm-10
lrwxrwxrwx 1 root root       8 Apr  3 13:59 vg_ALHINT-safeset2 -> ../dm-11

The vg_ALHINT-oradata node shown above was only created after I ran dmsetup mknodes.
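
For reference, that step is just (a sketch):

# (Re)create missing /dev/dm-* and /dev/mapper nodes from the live dm tables
dmsetup mknodes
# Inspect the resulting device and its major:minor
dmsetup info -c vg_ALHINT-oradata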

# cat /proc/partitions
major minor  #blocks  name

 202        0   26214400 xvda
 202        1     262144 xvda1
 202        2   25951232 xvda2
 253        0    8880128 dm-0
 253        1    1048576 dm-1
 253        2    2097152 dm-2
 253        3    2097152 dm-3
 253        4     262144 dm-4
 253        5    1048576 dm-5
 253        6   10485760 dm-6
 202       16   29360128 xvdb
 253        8    2097152 dm-8
 253        9    2150400 dm-9
 253       10    2093056 dm-10
 253       11    2097152 dm-11

dm-7 should be vg_ALHINT-oradata, but it is missing. I ran dmsetup mknodes, which created dm-7, yet it is still missing from /proc/partitions.

# ls -l /dev/dm-7
brw-rw---- 1 root disk 253, 7 Apr  3 13:59 /dev/dm-7

Its major/minor numbers are 253:7, whereas the devices for the LVs in the VG show 202:nn.

lvs tells me this LV is suspended:

# lvs
    Logging initialised at Thu Apr  3 14:44:19 2014
    Set umask from 0022 to 0077
    Finding all logical volumes
  LV       VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0      vg0       -wi-ao----   8.47g
  lv1      vg0       -wi-ao----   1.00g
  lv2      vg0       -wi-ao----   2.00g
  lv3      vg0       -wi-ao----   2.00g
  lv4      vg0       -wi-ao---- 256.00m
  lv5      vg0       -wi-ao----   1.00g
  lv6      vg0       -wi-ao----  10.00g
  admin    vg_ALHINT -wi-a-----   2.00g
  arch     vg_ALHINT -wi-a-----   2.05g
  oradata  vg_ALHINT -wi-s-----  39.95g
  safeset1 vg_ALHINT -wi-a-----   2.00g
  safeset2 vg_ALHINT -wi-a-----   2.00g
    Wiping internal VG cache

This disk was created from a snapshot of our production database. Oracle was shut down and the VG was exported before the snapshot was taken. I should note that I do the same thing, via a script, on more than 1000 databases every week. Because this is a snapshot, I have the table from the original device mapper, which I used to try to recreate the missing table:

0 35651584 linear 202:16 2048
35651584 4087808 linear 202:16 50440192
39739392 2097152 linear 202:16 39847936
41836544 41943040 linear 202:16 58714112

After suspending the device with dmsetup suspend /dev/dm-7, I ran dmsetup load /dev/dm-7 $table.txt
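
Spelled out as a sketch (table.txt is assumed to hold the table shown above, i.e. the production layout with its backing device rewritten to the guest's 202:16 xvdb):

# On production (assumed): capture the oradata table
dmsetup table vg_ALHPRD-oradata > table.txt
# On the integration guest: suspend the device, then load the saved table
dmsetup suspend vg_ALHINT-oradata
dmsetup load vg_ALHINT-oradata table.txt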

Next I tried to resume the device:

# dmsetup resume /dev/dm-7
device-mapper: resume ioctl on vg_ALHINT-oradata failed: Invalid argument
Command failed
#

Any ideas? I am really lost here. (Yes, I have rebooted and re-taken the snapshot, and I always hit the same problem. I even reinstalled this server and ran yum update.)

// Edit

I forgot to add: this is the original dmsetup table from our production environment; it is the oradata layout I tried to load onto our integration server as described above.

#  dmsetup table
vg_ALHPRD-safeset2: 0 4194304 linear 202:32 35653632
vg_ALHPRD-safeset1: 0 4186112 linear 202:32 54528000
vg_ALHPRD-oradata: 0 35651584 linear 202:32 2048
vg_ALHPRD-oradata: 35651584 4087808 linear 202:32 50440192
vg_ALHPRD-oradata: 39739392 2097152 linear 202:32 39847936
vg_ALHPRD-oradata: 41836544 41943040 linear 202:32 58714112
vg_ALHPRD-admin: 0 4194304 linear 202:32 41945088

// Edit

I ran vgscan --mknodes and got:

The link /dev/vg_ALHINT/oradata should have been created by udev but it was not found. Falling back to direct link creation.



# ls -l /dev/vg_ALHINT/oradata
lrwxrwxrwx 1 root root 29 Apr 3 14:50 /dev/vg_ALHINT/oradata -> /dev/mapper/vg_ALHINT-oradata

I still cannot activate it, and I get the following error message:

device-mapper: resume ioctl on failed: Invalid argument Unable to resume vg_ALHINT-oradata (253:7) 
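
For context, the exact activation command is not shown; what triggers this error would be something like (an assumption on my part):

# Activate just the problem LV, or the whole VG
lvchange -ay vg_ALHINT/oradata
vgchange -ay vg_ALHINT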

// Edit

I see stack traces in /var/log/messages:

Apr  3 13:58:09 iui-alhdb01 kernel: blkfront: xvdb: barriers disabled
Apr  3 13:58:09 iui-alhdb01 kernel: xvdb: unknown partition table
Apr  3 13:59:35 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c612 02 freq_set kernel 5.242 PPM
Apr  3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c615 05 clock_sync
Apr  3 14:30:13 iui-alhdb01 kernel: device-mapper: table: 253:2: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:33:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr  3 14:33:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:33:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:33:34 iui-alhdb01 kernel: vi            D 0000000000000000     0  1394   1271 0x00000084
Apr  3 14:33:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr  3 14:33:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr  3 14:33:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr  3 14:33:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:35:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr  3 14:35:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:35:34 iui-alhdb01 kernel: vi            D 0000000000000000     0  1394   1271 0x00000084
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr  3 14:35:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr  3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:35:34 iui-alhdb01 kernel: INFO: task vgdisplay:1437 blocked for more than 120 seconds.
Apr  3 14:35:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:35:34 iui-alhdb01 kernel: vgdisplay     D 0000000000000000     0  1437   1423 0x00000080
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007da35a18 0000000000000086 ffff88007da359d8 ffffffffa000443c
Apr  3 14:35:34 iui-alhdb01 kernel: 000000000007fff0 0000000000010000 ffff88007da359d8 ffff88007d24d380
Apr  3 14:35:34 iui-alhdb01 kernel: ffff880037c8c5f8 ffff88007da35fd8 000000000000fbc8 ffff880037c8c5f8
Apr  3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c8a9d>] __blockdev_direct_IO_newtrunc+0xb7d/0x1270
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c9207>] __blockdev_direct_IO+0x77/0xe0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5487>] blkdev_direct_IO+0x57/0x60
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811217bb>] generic_file_aio_read+0x6bb/0x700
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fd0>] ? blkdev_get+0x10/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fe0>] ? blkdev_open+0x0/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118617f>] ? __dentry_open+0x23f/0x360
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4841>] blkdev_aio_read+0x51/0x80
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81188e8a>] do_sync_read+0xfa/0x140
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810ec3f6>] ? rcu_process_dyntick+0xd6/0x120
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b290>] ? autoremove_wake_function+0x0/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c479c>] ? block_ioctl+0x3c/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119dc12>] ? vfs_ioctl+0x22/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119ddb4>] ? do_vfs_ioctl+0x84/0x580
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81226496>] ? security_file_permission+0x16/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81189775>] vfs_read+0xb5/0x1a0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811898b1>] sys_read+0x51/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e1e4e>] ? __audit_syscall_exit+0x25e/0x290
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:39:19 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:53:57 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 15:02:42 iui-alhdb01 yum[1544]: Installed: sos-2.2-47.el6.noarch
Apr  3 15:52:29 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 15:59:08 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256

2 Answers

Unix & Linux user

Posted on 2014-04-03 14:39:52

See devices.txt in the kernel documentation: major 202 is the "Xen Virtual Block Device", and 253 is LVM / the device mapper.

All of your dm-X devices are 253:n; they simply map onto 202:n devices.
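
This is easy to confirm on the box itself; a small sketch:

# xvdb carries block major 202 (Xen virtual block device)
ls -l /dev/xvdb
# dm nodes carry the dynamically assigned device-mapper major, 253 here
ls -l /dev/dm-0
# the registered block majors are also listed in /proc/devices
cat /proc/devices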

The error message is explicit:

device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256

It looks like the XEN device has changed. Your vg_ALHPRD-oradata cannot be loaded because it tries to reach storage on 202:16 that simply is not there.
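
You can check this from the numbers in the message: the last oradata segment needs start + len = 58714112 + 41943040 = 100657152 sectors, while xvdb is only 58720256 sectors (28 GiB). As a quick sketch:

# Size of the backing device in 512-byte sectors
blockdev --getsz /dev/xvdb         # 58720256 here, i.e. 28 GiB
# Sectors the failing target would need
echo $((58714112 + 41943040))      # 100657152, well past the end of xvdb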

Votes: 3

Unix & Linux user

Posted on 2014-04-04 08:48:52

It turned out that multipath on the hypervisor was refusing to update the map with the new LUN size.

This LUN was originally 28G and was later grown to 48G on the storage array.

The VG metadata says it is 48G, and the disk really is 48G, but multipath does not update and still believes it is 28G.

Multipath is stuck at 28G:

# multipath -l 350002acf962421ba
350002acf962421ba dm-17 3PARdata,VV
size=28G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 8:0:0:22   sdt   65:48    active undef running
  |- 10:0:0:22  sdbh  67:176   active undef running
  |- 7:0:0:22   sddq  71:128   active undef running
  |- 9:0:0:22   sdfb  129:208  active undef running
  |- 8:0:1:22   sdmz  70:432   active undef running
  |- 7:0:1:22   sdoj  128:496  active undef running
  |- 10:0:1:22  sdop  129:336  active undef running
  |- 9:0:1:22   sdqm  132:352  active undef running
  |- 7:0:2:22   sdxh  71:624   active undef running
  |- 8:0:2:22   sdzy  131:704  active undef running
  |- 10:0:2:22  sdaab 131:752  active undef running
  |- 9:0:2:22   sdaed 66:912   active undef running
  |- 7:0:3:22   sdakm 132:992  active undef running
  |- 10:0:3:22  sdall 134:880  active undef running
  |- 8:0:3:22   sdamx 8:1232   active undef running
  `- 9:0:3:22   sdaqa 69:1248  active undef running

Actual disk size on the storage array:

# showvv ALHIDB_SNP_001
                                                                          -Rsvd(MB)-- -(MB)-
  Id Name           Prov Type  CopyOf            BsId Rd -Detailed_State- Adm Snp Usr  VSize
4098 ALHIDB_SNP_001 snp  vcopy ALHIDB_SNP_001.ro 5650 RW normal            --  --  --  49152

To make sure I had the right disk:

# showvlun -showcols VVName,VV_WWN| grep -i  0002acf962421ba
ALHIDB_SNP_001          50002ACF962421BA 

And the VG thinks it is 48G:

  --- Volume group ---
  VG Name               vg_ALHINT
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  30
  VG Access             read/write
  VG Status             exported/resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               48.00 GiB
  PE Size               4.00 MiB
  Total PE              12287
  Alloc PE / Size       12287 / 48.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               qqZ9Vi-5Ob1-R6zb-YeWa-jDfg-9wc7-E2wsem

When I rescanned the HBAs for the new disk and reconfigured multipath, it still showed 28G, so I tried this, which changed nothing:

# multipathd -k'resize map 350002acf962421ba'
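
The rescan mentioned above is the usual per-path SCSI rescan, followed by the map resize; a minimal sketch (the sd names are examples taken from the output above):

# Ask each SCSI path behind the map to re-read its capacity
for dev in sdt sdbh sddq sdfb; do
    echo 1 > /sys/block/$dev/device/rescan
done
# Then have multipathd recompute the map size
multipathd -k'resize map 350002acf962421ba'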

Versions:

lvm2-2.02.56-8.100.3.el5
device-mapper-multipath-libs-0.4.9-46.100.5.el5

The workaround, since I could not come up with a proper solution, was as follows. I did not mention earlier that I am running OVM 3.2 on top of this, so part of the workaround involves OVM:

i) shut down the guest on Xen via OVM;
ii) remove the disk;
iii) remove the LUN from OVM;
iv) unpresent the LUN from the hypervisors;
v) rescan storage in OVM;
vi) wait 30 minutes;
vii) present the disk to the hypervisors with a different LUN ID;
viii) rescan storage in OVM.

Now I see the 48G disk:

# multipath -l 350002acf962421ba
350002acf962421ba dm-18 3PARdata,VV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 9:0:0:127  sdt   65:48    active undef running
  |- 9:0:1:127  sdbh  67:176   active undef running
  |- 9:0:2:127  sddo  71:96    active undef running
  |- 9:0:3:127  sdfb  129:208  active undef running
  |- 10:0:3:127 sdmz  70:432   active undef running
  |- 10:0:0:127 sdoh  128:464  active undef running
  |- 10:0:1:127 sdop  129:336  active undef running
  |- 10:0:2:127 sdqm  132:352  active undef running
  |- 7:0:1:127  sdzu  131:640  active undef running
  |- 7:0:0:127  sdxh  71:624   active undef running
  |- 7:0:3:127  sdaed 66:912   active undef running
  |- 7:0:2:127  sdaab 131:752  active undef running
  |- 8:0:0:127  sdakm 132:992  active undef running
  |- 8:0:1:127  sdall 134:880  active undef running
  |- 8:0:2:127  sdamx 8:1232   active undef running
  `- 8:0:3:127  sdaqa 69:1248  active undef running
Votes: 1
Original content provided by Unix & Linux.
Original question: https://unix.stackexchange.com/questions/122947
