Bug#743945: Fwd: LVM2 raid1 crashes
repdeb at email.cz
Tue Apr 8 16:22:23 UTC 2014
Package: lvm2
Version: 2.02.95-8
Severity: important
Hi, an LVM2 raid1 volume created with:
"lvcreate --type raid1 --nosync -n ..."
crashes the kernel every time its snapshot is manipulated.
An LVM2 mirrored volume created with:
"lvcreate -m1 --nosync -n ..."
seems to work reliably.
-- System Information:
Debian Release: 7.4
APT prefers stable-updates
APT policy: (500, 'stable-updates'), (500, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 3.2.0-4-amd64 (SMP w/8 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Versions of packages lvm2 depends on:
ii dmsetup 2:1.02.74-8
ii initscripts 2.88dsf-41+deb7u1
ii libc6 2.13-38+deb7u1
ii libdevmapper-event1.02.1 2:1.02.74-8
ii libdevmapper1.02.1 2:1.02.74-8
ii libreadline5 5.2+dfsg-2~deb7u1
ii libudev0 175-7.2
ii lsb-base 4.1+Debian8+deb7u1
lvm2 recommends no packages.
lvm2 suggests no packages.
-- no debconf information
------------------------------------
I did exactly the following:
root at pr-ceph1:~# pvcreate /dev/sdb1
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
root at pr-ceph1:~# pvcreate /dev/sdc1
Writing physical volume data to disk "/dev/sdc1"
Physical volume "/dev/sdc1" successfully created
root at pr-ceph1:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 lvm2 a-- 500.00g 500.00g
/dev/sdc1 lvm2 a-- 500.00g 500.00g
root at pr-ceph1:~# vgcreate RAID1 /dev/sdb1 /dev/sdc1
Volume group "RAID1" successfully created
root at pr-ceph1:~# vgs
VG #PV #LV #SN Attr VSize VFree
RAID1 2 0 0 wz--n- 999.99g 999.99g
root at pr-ceph1:~# lvcreate --type raid1 --nosync -L 200G -n lvm_raid1 RAID1
WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Logical volume "lvm_raid1" created
root at pr-ceph1:~# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lvm_raid1 RAID1 Rwi-a-m- 200.00g 100.00
root at pr-ceph1:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 RAID1 lvm2 a-- 500.00g 299.99g
/dev/sdc1 RAID1 lvm2 a-- 500.00g 299.99g
root at pr-ceph1:~# vgs
VG #PV #LV #SN Attr VSize VFree
RAID1 2 1 0 wz--n- 999.99g 599.98g
lvcreate --snapshot --size 100g --name testik.snap RAID1/lvm_raid1
Logical volume "testik.snap" created
Now things go wrong:
lvremove RAID1/testik.snap
Do you really want to remove active logical volume testik.snap? [y/n]: y
Message from syslogd at pr-ceph1 at Apr 8 12:47:55 ...
kernel:[ 1041.008836] stack segment: 0000 [#1] SMP
Message from syslogd at pr-ceph1 at Apr 8 12:47:55 ...
kernel:[ 1041.014454] Stack:
Message from syslogd at pr-ceph1 at Apr 8 12:47:55 ...
kernel:[ 1041.015468] Call Trace:
Message from syslogd at pr-ceph1 at Apr 8 12:47:55 ...
kernel:[ 1041.017570] Code: 42 10 eb 03 48 89 06 48 8b 17 83 e2 03 48 09 c2 48 89 17 c3 41 56 41 55 49 89 f5 41 54 49 89 fc 55 53 e9 9e 00 00 00 48 83 e5 fc 8b 45 10 48 39 c3 75 41 48 8b 45 08 48 85 c0 74 08 48 8b 10
Segmentation fault
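For reference, the whole first reproduction can be condensed into one script. This is only a sketch: the device names /dev/sdb1 and /dev/sdc1 are the ones from this report (adjust them to dedicated test partitions), and since the commands are destructive they only execute when RUN_REPRO=1 is set; otherwise the script just prints what it would run.

```shell
#!/bin/sh
# Reproduction sketch for the raid1 --nosync snapshot crash.
# Destructive: commands only run when RUN_REPRO=1; otherwise dry-run.
set -e

run() {
    if [ "${RUN_REPRO:-0}" = "1" ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

run pvcreate /dev/sdb1
run pvcreate /dev/sdc1
run vgcreate RAID1 /dev/sdb1 /dev/sdc1
# --nosync skips the initial mirror synchronisation; this appears to be the trigger
run lvcreate --type raid1 --nosync -L 200G -n lvm_raid1 RAID1
run lvcreate --snapshot --size 100g --name testik.snap RAID1/lvm_raid1
# Removing the snapshot of the --nosync raid1 LV is what oopses the kernel
run lvremove -f RAID1/testik.snap
```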
-----------------------------
Sometimes it is enough to do some work on the raid1 volume, such as:
time dd if=/dev/zero of=/dev/mapper/RAID1-lvm_raid1 count=5000000 bs=1024 &
and create a snapshot simultaneously:
lvcreate --snapshot --size 50g --name testik.snap RAID1/lvm_raid1
Logical volume "testik.snap" created
And now it crashes:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.590136] Oops: 0000 [#1] SMP
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.591362] Stack:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.591664] Call Trace:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.592273] Code: 00 48 c7 c7 40 79 7e 81 5d 41 5c e9 d3 75 02 00 51 e8 c6 5c f4 ff 48 c1 e8 0c 48 ba 00 00 00 00 00 ea ff ff 48 6b c0 38 48 01 d0 8b 10 80 e6 80 74 04 48 8b 40 30 5a c3 41 56 41 be 01 00 00
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.592657] CR2: ffffeb57005b42b8
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.593765] Oops: 0000 [#2] SMP
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.600222] Stack:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.601264] Call Trace:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.603492] Code: 3f 48 c1 e5 03 48 c1 e0 06 48 8d b0 e0 5d 40 81 48 29 ee e8 43 32 fe ff 81 4b 14 00 00 00 04 41 59 5b 5d c3 48 8b 87 a8 02 00 00 8b 40 f8 c3 48 3b 3d 42 b8 72 00 75 08 0f bf 87 72 06 00 00
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84400.607046] CR2: fffffffffffffff8
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.909373] Stack:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.910428] Call Trace:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.910512]
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.911245]
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.912071] Code: 05 e8 6f 32 14 00 c3 f0 81 2f 00 00 10 00 74 05 e8 40 32 14 00 c3 b8 00 00 01 00 f0 0f c1 07 89 c2 c1 ea 10 66 39 d0 74 07 f3 90 8b 07 eb f4 c3 8b 17 31 c0 89 d1 c1 e9 10 66 39 ca 75 14 8d
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.927954] Stack:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.929010] Call Trace:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.932945] Code: 32 14 00 c3 f0 81 2f 00 00 10 00 74 05 e8 40 32 14 00 c3 b8 00 00 01 00 f0 0f c1 07 89 c2 c1 ea 10 66 39 d0 74 07 f3 90 66 8b 07 f4 c3 8b 17 31 c0 89 d1 c1 e9 10 66 39 ca 75 14 8d 8a 00 00
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.946689] Stack:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.947744] Call Trace:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.948231] Code: 28 e0 ff ff 80 e2 08 75 22 31 d2 48 83 c0 10 48 89 d1 0f 01 c8 0f ae f0 48 8b 86 38 e0 ff ff a8 08 75 08 b1 01 4c 89 e8 0f 01 c9 d9 84 e7 ff 4c 29 f0 48 89 c7 e8 53 cf e5 ff 4c 69 e8 40 42
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.958441] Stack:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.964081] Call Trace:
Message from syslogd at pr-ceph1 at Apr 8 11:39:14 ...
kernel:[84462.964556] Code: 28 e0 ff ff 80 e2 08 75 22 31 d2 48 83 c0 10 48 89 d1 0f 01 c8 0f ae f0 48 8b 86 38 e0 ff ff a8 08 75 08 b1 01 4c 89 e8 0f 01 c9 d9 84 e7 ff 4c 29 f0 48 89 c7 e8 53 cf e5 ff 4c 69 e8 40 42
and so on.
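The second trigger (write load plus a concurrent snapshot) can be scripted the same way. Again only a sketch: device and VG names are the ones from this report, and the destructive commands are gated behind RUN_REPRO=1, otherwise they are merely printed.

```shell
#!/bin/sh
# Sketch of the second trigger: heavy writes to the raid1 LV while a
# snapshot is created. Only runs for real when RUN_REPRO=1 is set.
run() {
    if [ "${RUN_REPRO:-0}" = "1" ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# Background write load on the raid1 LV...
run dd if=/dev/zero of=/dev/mapper/RAID1-lvm_raid1 count=5000000 bs=1024 &
# ...while a snapshot is created at the same time triggers the oops
run lvcreate --snapshot --size 50g --name testik.snap RAID1/lvm_raid1
wait
```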
---------------------------------------------------------------------
Excerpt from /var/log/messages after one such crash:
Apr 8 12:39:06 pr-ceph1 kernel: [ 512.925238] device-mapper: raid: Superblocks created for new array
Apr 8 12:39:06 pr-ceph1 kernel: [ 512.926231] md/raid1:mdX: active with 2 out of 2 mirrors
Apr 8 12:39:06 pr-ceph1 kernel: [ 512.926236] Choosing daemon_sleep default (5 sec)
Apr 8 12:39:06 pr-ceph1 kernel: [ 512.926239] created bitmap (200 pages) for device mdX
Apr 8 12:39:06 pr-ceph1 kernel: [ 512.960273] mdX: bitmap file is out of date, doing full recovery
Apr 8 12:39:06 pr-ceph1 kernel: [ 513.040176] mdX: bitmap initialized from disk: read 13/13 pages, set 409600 of 409600 bits
Apr 8 12:46:51 pr-ceph1 kernel: [ 977.172985] md/raid1:mdX: active with 2 out of 2 mirrors
Apr 8 12:46:51 pr-ceph1 kernel: [ 977.173190] created bitmap (200 pages) for device mdX
Apr 8 12:46:51 pr-ceph1 kernel: [ 977.215821] mdX: bitmap initialized from disk: read 13/13 pages, set 0 of 409600 bits
Apr 8 12:46:51 pr-ceph1 kernel: [ 977.444206] mdX: bitmap initialized from disk: read 26/13 pages, set 0 of 409600 bits
Apr 8 12:47:55 pr-ceph1 kernel: [ 1040.824846] md/raid1:mdX: active with 2 out of 2 mirrors
Apr 8 12:47:55 pr-ceph1 kernel: [ 1040.824924] created bitmap (200 pages) for device mdX
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.001216] mdX: bitmap initialized from disk: read 39/13 pages, set 0 of 409600 bits
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008602] ------------[ cut here ]------------
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008629] WARNING: at /build/linux-FpPMO6/linux-3.2.54/mm/vmalloc.c:1446 ctl_ioctl+0x220/0x242 [dm_mod]()
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008632] Hardware name: SUN FIRE X4170 M2 SERVER
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008723] Modules linked in: dm_snapshot nfsd nfs nfs_acl auth_rpcgss fscache lockd sunrpc loop dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq raid1 md_mod async_xor xor async_tx coretemp crc32c_intel ghash_clmulni_intel i7core_edac acpi_cpufreq mperf edac_core snd_pcm snd_page_alloc snd_timer snd soundcore aesni_intel aes_x86_64 aes_generic cryptd i2c_i801 iTCO_wdt i2c_core processor button evdev ioatdma iTCO_vendor_support joydev pcspkr thermal_sys ext4 crc16 jbd2 mbcache dm_mod sr_mod sg cdrom sd_mod crc_t10dif usb_storage usbhid hid uhci_hcd ahci libahci libata mpt2sas raid_class scsi_transport_sas ehci_hcd scsi_mod usbcore usb_common igb dca [last unloaded: scsi_wait_scan]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008765] Pid: 3177, comm: lvremove Not tainted 3.2.0-4-amd64 #1 Debian 3.2.54-2
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008766] Call Trace:
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008774] [] ? warn_slowpath_common+0x78/0x8c
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008778] [] ? warn_slowpath_fmt+0x45/0x4a
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008783] [] ? __vunmap+0x35/0xb8
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008788] [] ? ctl_ioctl+0x220/0x242 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008794] [] ? dm_ctl_ioctl+0xc/0x11 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008800] [] ? do_vfs_ioctl+0x459/0x49a
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008805] [] ? ipcget+0x175/0x1aa
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008808] [] ? sys_ioctl+0x4b/0x72
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008812] [] ? system_call_fastpath+0x16/0x1b
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008814] ---[ end trace 1a24e35971ce9052 ]---
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.008991] CPU 0
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.009045] Modules linked in: dm_snapshot nfsd nfs nfs_acl auth_rpcgss fscache lockd sunrpc loop dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq raid1 md_mod async_xor xor async_tx coretemp crc32c_intel ghash_clmulni_intel i7core_edac acpi_cpufreq mperf edac_core snd_pcm snd_page_alloc snd_timer snd soundcore aesni_intel aes_x86_64 aes_generic cryptd i2c_i801 iTCO_wdt i2c_core processor button evdev ioatdma iTCO_vendor_support joydev pcspkr thermal_sys ext4 crc16 jbd2 mbcache dm_mod sr_mod sg cdrom sd_mod crc_t10dif usb_storage usbhid hid uhci_hcd ahci libahci libata mpt2sas raid_class scsi_transport_sas ehci_hcd scsi_mod usbcore usb_common igb dca [last unloaded: scsi_wait_scan]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.012858]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.012911] Pid: 3177, comm: lvremove Tainted: G W 3.2.0-4-amd64 #1 Debian 3.2.54-2 SUN MICROSYSTEMS SUN FIRE X4170 M2 SERVER /ASSY,MOTHERBOARD,X4170
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013151] RIP: 0010:[] [] rb_insert_color+0x17/0xd9
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013277] RSP: 0018:ffff88046bbb7b18 EFLAGS: 00010206
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013343] RAX: 0000000000000000 RBX: ffffea000f77a868 RCX: ffffea000f77a868
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013415] RDX: ffffea000f77a870 RSI: ffffffff817dc168 RDI: ffff88046b7f3ad8
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013513] RBP: 0200000000000800 R08: 00000000ffffffff R09: 00000000000000d2
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013612] R10: 0000000000000246 R11: 0000000000000246 R12: ffff88046b7f3ad8
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013710] R13: ffffffff817dc168 R14: ffff88046b7f3ac0 R15: ffffc900135f3000
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013809] FS: 00007fa10856a7a0(0000) GS:ffff88047f200000(0000) knlGS:0000000000000000
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.013937] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.014030] CR2: 00000000008e4408 CR3: 000000046d51f000 CR4: 00000000000006f0
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.014128] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.014227] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.014326] Process lvremove (pid: 3177, threadinfo ffff88046bbb6000, task ffff88046811b890)
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.014533] ffff88046b7f3ac0 ffff88046b7f3ad8 ffffc90000000000 ffffe8ffffffffff
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.014845] ffff88046b7f3ac0 ffffffff810da962 0000000000000001 0000000000000001
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.015157] 0000000000005000 ffffffff810db963 ffffc90000000000 ffffffffffffffff
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.015552] [] ? __insert_vmap_area+0x67/0xb5
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.015647] [] ? alloc_vmap_area+0x1e9/0x28a
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.015745] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.015841] [] ? __get_vm_area_node+0xe0/0x137
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.015939] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016038] [] ? dev_wait+0x80/0x80 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016133] [] ? __vmalloc_node_range+0x67/0x1fa
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016231] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016329] [] ? up+0xb/0x34
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016421] [] ? _raw_spin_unlock_irqrestore+0xe/0xf
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016521] [] ? dev_wait+0x80/0x80 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016615] [] ? __vmalloc_node+0x2c/0x31
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016712] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016808] [] ? vmalloc+0x24/0x27
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.016901] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.017001] [] ? ctl_ioctl+0x13d/0x242 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.017099] [] ? dm_ctl_ioctl+0xc/0x11 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.017196] [] ? do_vfs_ioctl+0x459/0x49a
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.017290] [] ? ipcget+0x175/0x1aa
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.017382] [] ? sys_ioctl+0x4b/0x72
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.017475] [] ? system_call_fastpath+0x16/0x1b
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.020934] RSP
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.021133] ---[ end trace 1a24e35971ce9053 ]---
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.021224] note: lvremove[3177] exited with preempt_count 1
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.021446] Modules linked in: dm_snapshot nfsd nfs nfs_acl auth_rpcgss fscache lockd sunrpc loop dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq raid1 md_mod async_xor xor async_tx coretemp crc32c_intel ghash_clmulni_intel i7core_edac acpi_cpufreq mperf edac_core snd_pcm snd_page_alloc snd_timer snd soundcore aesni_intel aes_x86_64 aes_generic cryptd i2c_i801 iTCO_wdt i2c_core processor button evdev ioatdma iTCO_vendor_support joydev pcspkr thermal_sys ext4 crc16 jbd2 mbcache dm_mod sr_mod sg cdrom sd_mod crc_t10dif usb_storage usbhid hid uhci_hcd ahci libahci libata mpt2sas raid_class scsi_transport_sas ehci_hcd scsi_mod usbcore usb_common igb dca [last unloaded: scsi_wait_scan]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026043] Pid: 3177, comm: lvremove Tainted: G D W 3.2.0-4-amd64 #1 Debian 3.2.54-2
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026171] Call Trace:
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026257] [] ? __schedule_bug+0x3e/0x52
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026409] [] ? __schedule+0x85/0x610
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026503] [] ? __cond_resched+0x1d/0x26
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026602] [] ? _cond_resched+0x12/0x1c
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026697] [] ? do_unblank_screen+0x142/0x142
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026797] [] ? down_read+0x9/0x19
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026890] [] ? acct_collect+0x3f/0x165
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.026990] [] ? do_exit+0x210/0x713
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027083] [] ? _raw_spin_unlock_irqrestore+0xe/0xf
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027180] [] ? kmsg_dump+0x52/0xdb
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027273] [] ? oops_end+0xb1/0xb6
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027369] [] ? do_stack_segment+0x5e/0x71
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027463] [] ? stack_segment+0x25/0x30
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027557] [] ? rb_insert_color+0x17/0xd9
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027650] > [] ? __insert_vmap_area+0x67/0xb5
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027802] [] ? alloc_vmap_area+0x1e9/0x28a
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027901] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.027996] [] ? __get_vm_area_node+0xe0/0x137
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028094] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028193] [] ? dev_wait+0x80/0x80 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028288] [] ? __vmalloc_node_range+0x67/0x1fa
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028386] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028483] [] ? up+0xb/0x34
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028573] [] ? _raw_spin_unlock_irqrestore+0xe/0xf
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028672] [] ? dev_wait+0x80/0x80 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028767] [] ? __vmalloc_node+0x2c/0x31
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028863] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.028959] [] ? vmalloc+0x24/0x27
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.029053] [] ? copy_params+0x5f/0x118 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.029152] [] ? ctl_ioctl+0x13d/0x242 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.029251] [] ? dm_ctl_ioctl+0xc/0x11 [dm_mod]
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.029349] [] ? do_vfs_ioctl+0x459/0x49a
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.029443] [] ? ipcget+0x175/0x1aa
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.029563] [] ? sys_ioctl+0x4b/0x72
Apr 8 12:47:55 pr-ceph1 kernel: [ 1041.029657] [] ? system_call_fastpath+0x16/0x1b
Apr 8 12:48:12 pr-ceph1 shutdown[3210]: shutting down for system reboot
More information about the pkg-lvm-maintainers mailing list