Bug#599352: mdadm: raid1 split brain: two degraded arrays with the same uuid instead of one
Константин Покотиленко
pokotilenko at mail.ru
Sun Nov 25 16:44:04 UTC 2012
Package: mdadm
Version: 3.1.4-1+8efb9d1+squeeze1
Severity: critical
Given that grub-pc now supports booting from LVM on RAID directly, without the need for a separate boot partition,
I set up a server this way.
As everything went well, I decided to test the situation where one drive is removed from the system:
1. power off the server, physically remove one drive
2. boot the server with one drive out of two; everything is OK, the raid1 array is degraded
3. power off the server, reinstall the removed drive
4. boot the server with both drives, and get a situation like this:
Instead of assembling one degraded raid1 array with one non-fresh member, I got TWO degraded raid1 arrays with the same UUID!
Moreover, the LVM layer picked up one array (md0: 9:0) as the PV for some LVs and the other array (md127: 9:127) as the PV for other LVs:
# dmsetup table
VGb-...1: 0 20971520 linear 8:2 20971904
VGag64-swap: 0 4194304 linear 9:127 48234880
VGag64-root: 0 2097152 linear 9:0 384
VGb-...2: 0 2097152 linear 8:2 41943424
VGag64-tmp: 0 4194304 linear 9:127 23069056
VGag64-usr: 0 20971520 linear 9:127 2097536
VGag64-var: 0 20971520 linear 9:127 27263360
VGb-data: 0 20971520 linear 8:2 384
# ls -la /dev/md*
brw-rw---- 1 root disk 9, 0 Sep 6 21:41 /dev/md0
brw-rw---- 1 root disk 9, 127 Sep 6 21:41 /dev/md127
# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda1[0]
31462175 blocks super 1.2 [2/1] [U_]
md0 : active raid1 sdb1[2]
31462175 blocks super 1.2 [2/1] [_U]
unused devices: <none>
# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Sep 6 15:21:48 2012
Raid Level : raid1
Array Size : 31462175 (30.00 GiB 32.22 GB)
Used Dev Size : 31462175 (30.00 GiB 32.22 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Thu Sep 6 21:59:07 2012
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : ag:0 (local to host ag)
UUID : ddf61119:72425ddc:2cf6275c:6e1d1597
Events : 4373
Number Major Minor RaidDevice State
0 0 0 0 removed
2 8 17 1 active sync /dev/sdb1
# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Thu Sep 6 15:21:48 2012
Raid Level : raid1
Array Size : 31462175 (30.00 GiB 32.22 GB)
Used Dev Size : 31462175 (30.00 GiB 32.22 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Thu Sep 6 21:59:50 2012
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : ag:0 (local to host ag)
UUID : ddf61119:72425ddc:2cf6275c:6e1d1597
Events : 4707
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed
# mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : ddf61119:72425ddc:2cf6275c:6e1d1597
Name : ag:0 (local to host ag)
Creation Time : Thu Sep 6 15:21:48 2012
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 62924494 (30.00 GiB 32.22 GB)
Array Size : 62924350 (30.00 GiB 32.22 GB)
Used Dev Size : 62924350 (30.00 GiB 32.22 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 56e0d0f1:cf120c8b:e2dbf59a:20208576
Update Time : Thu Sep 6 22:00:13 2012
Checksum : f7bae2de - correct
Events : 4729
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing)
# mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : ddf61119:72425ddc:2cf6275c:6e1d1597
Name : ag:0 (local to host ag)
Creation Time : Thu Sep 6 15:21:48 2012
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 62924494 (30.00 GiB 32.22 GB)
Array Size : 62924350 (30.00 GiB 32.22 GB)
Used Dev Size : 62924350 (30.00 GiB 32.22 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b10b70c3:d0736069:40390768:e0fa597b
Update Time : Thu Sep 6 22:00:07 2012
Checksum : 7994a5f9 - correct
Events : 4381
Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing)
LVM probably got confused by the fact that two different devices carry the same UUID, and treated them as one device.
So the main problem I see is that two different degraded raid1 arrays are assembled instead of one degraded
raid1 array with a non-fresh member.
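The divergence of the two halves shows up in the Events counters reported by mdadm -E above (4729 on sda1 vs 4381 on sdb1). A minimal sketch of checking for this condition follows; the threshold and the awk extraction are my own illustrative assumptions, not anything mdadm provides:

```shell
#!/bin/sh
# Sketch: flag a possible RAID1 split brain by comparing the Events
# counters from two members' superblocks. The threshold (100) is an
# arbitrary illustration, not an mdadm constant.
check_events() {
    ev_a=$1 ev_b=$2
    if [ "$ev_a" -gt "$ev_b" ]; then
        diff=$((ev_a - ev_b))
    else
        diff=$((ev_b - ev_a))
    fi
    if [ "$diff" -gt 100 ]; then
        echo "split-brain suspected (event counts differ by $diff)"
    else
        echo "event counts close (differ by $diff)"
    fi
}

# In practice the counters would be read from the superblocks, e.g.:
#   mdadm -E /dev/sda1 | awk '/Events :/ {print $3}'
check_events 4729 4381
```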
This particular situation was resolved by booting from rescue media, zeroing the superblock on one drive, re-adding it
to the original array, waiting for the rebuild to complete, and rebooting into the system.
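The recovery procedure above can be outlined as follows (a sketch only: device names are examples from this report, the commands are destructive to the discarded member, and which half survives must be decided case by case):

```sh
# DANGER: this destroys the superblock on the discarded member.
# Run from rescue media; /dev/md127 and /dev/sdb1 are example names.

# Stop the stale array assembled from the member being discarded.
mdadm --stop /dev/md127

# Wipe its RAID superblock so it no longer claims the shared UUID.
mdadm --zero-superblock /dev/sdb1

# Re-add the now-blank device to the surviving degraded array.
mdadm /dev/md0 --add /dev/sdb1

# Watch the rebuild; reboot once /proc/mdstat shows [UU].
cat /proc/mdstat
```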
I should note that if this happens in real life (not during a supervised test), the system could run for days
in a split-brain situation where one set of data is written to one array/drive and another set of data to the other
array/drive, causing the raid1 members to diverge significantly, each holding fresh data that is not present on the other.
Recovery from such a situation is probably not an easy task, as you can't simply discard one drive and re-sync onto
it: that would lead to DATA LOSS.
It is not even easy to see that both degraded arrays are in use read/write, so the probability is high that somebody
will simply pick the most recently updated drive and re-sync.
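Whether both halves accept writes can be seen through the kernel's md sysfs interface (/sys/block/mdX/md/array_state: "clean" or "active" means read-write). The helper below is a sketch; the parameterized sysfs root is my own addition so the logic can be exercised against a fake tree:

```shell
#!/bin/sh
# Sketch: report the kernel's state for each md array via sysfs.
# A state of "clean" or "active" means the array is read-write;
# "readonly" would not accept writes.
array_states() {
    sysroot=$1; shift    # sysfs root, normally /sys
    for md in "$@"; do
        state=$(cat "$sysroot/block/$md/md/array_state" 2>/dev/null) \
            || state=unknown
        echo "$md: $state"
    done
}

array_states /sys md0 md127
```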
-- Package-specific info:
--- mdadm.conf
DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
MAILADDR root
--- /etc/default/mdadm
INITRDSTART='all'
AUTOSTART=true
AUTOCHECK=true
START_DAEMON=true
DAEMON_OPTIONS="--syslog"
VERBOSE=false
--- /proc/mdstat:
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[2]
31462175 blocks super 1.2 [2/2] [UU]
unused devices: <none>
--- /proc/partitions:
major minor #blocks name
8 16 78150744 sdb
8 17 31463271 sdb1
8 18 46686449 sdb2
8 0 78150744 sda
8 1 31463271 sda1
8 2 46686449 sda2
9 0 31462175 md0
253 0 1048576 dm-0
253 1 10485760 dm-1
253 2 10485760 dm-2
253 3 1048576 dm-3
253 4 10485760 dm-4
253 5 2097152 dm-5
253 6 10485760 dm-6
253 7 2097152 dm-7
253 8 4194304 dm-8
253 9 37748736 dm-9
253 10 10485760 dm-10
--- LVM physical volumes:
PV VG Fmt Attr PSize PFree
/dev/md0 VGag lvm2 a- 30.00g 5.00g
/dev/sda2 VGb lvm2 a- 44.52g 13.52g
/dev/sdb2 VGa lvm2 a- 44.52g 4.52g
--- mount output
/dev/mapper/VGag-root on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/VGag-usr on /usr type ext4 (rw)
/dev/mapper/VGag-var on /var type ext4 (rw)
/dev/mapper/VGag-tmp on /tmp type ext4 (rw)
/dev/mapper/VGb-data on /disk/data type ext4 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/mapper/VGa-... on /var/lib/lxc/.../rootfs type ext4 (rw)
/dev/mapper/VGa-..._data on /var/lib/lxc/.../rootfs/.../data type ext4 (rw)
cgroup on /sys/fs/cgroup type cgroup (rw)
/dev/mapper/VGb-... on /var/lib/lxc/.../rootfs type ext4 (rw)
--- initrd.img-3.2.0-0.bpo.3-amd64:
66334 blocks
f4fbd9099399ab08ba9b9f6c71d77595 ./scripts/local-top/mdadm
79246b981b4d424653bcabc1192b7161 ./sbin/mdadm
08e61d1f2c05528aff419b82781af080 ./etc/mdadm/mdadm.conf
f0e4bfe5ac79e4c0b0a15410925151b0 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/raid0.ko
e819fa0ddcf59b75fabaf36525ce4655 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/dm-snapshot.ko
2f31c5c465d6c1efa82f8d48ed12ce10 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/dm-mirror.ko
56e3d06af51824c147720add2085b8a0 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/dm-region-hash.ko
c6bc90f97e873f01cc8a0e6be7ba73b0 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/dm-mod.ko
15f6d6f8c8af438239ab4dd288ae3ff9 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/multipath.ko
88a1d9f9c14cd087c6019b1476a4533e ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/raid1.ko
8d04c54706c4cb6944b8ce81b06ce89d ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/raid10.ko
31deb4cfa296fbe53f410c51738e0dcb ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/linear.ko
3e3870e71aa21fb74c5f08d64c0b0b87 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/dm-log.ko
e0522b92c67d012ecd7190ffe3b8bf85 ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/raid456.ko
d689efcf7ae9bb9cb3ef6370581f0fef ./lib/modules/3.2.0-0.bpo.3-amd64/kernel/drivers/md/md-mod.ko
--- initrd's /conf/conf.d/md:
MD_HOMEHOST='ag'
MD_DEVPAIRS='/dev/md/0:raid1'
MD_LEVELS='raid1'
MD_DEVS=all
MD_MODULES='raid1'
--- /proc/modules:
dm_mod 63050 31 - Live 0xffffffffa0098000
raid1 30440 1 - Live 0xffffffffa0026000
md_mod 87021 2 raid1, Live 0xffffffffa0039000
--- /var/log/syslog:
--- volume detail:
/dev/sda is not recognised by mdadm.
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : ddf61119:72425ddc:2cf6275c:6e1d1597
Name : ag:0 (local to host ag)
Creation Time : Thu Sep 6 15:21:48 2012
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 62924494 (30.00 GiB 32.22 GB)
Array Size : 62924350 (30.00 GiB 32.22 GB)
Used Dev Size : 62924350 (30.00 GiB 32.22 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 56e0d0f1:cf120c8b:e2dbf59a:20208576
Update Time : Sun Nov 25 18:11:40 2012
Checksum : f8233aa4 - correct
Events : 6589
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing)
--
/dev/sda2 is not recognised by mdadm.
/dev/sdb is not recognised by mdadm.
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : ddf61119:72425ddc:2cf6275c:6e1d1597
Name : ag:0 (local to host ag)
Creation Time : Thu Sep 6 15:21:48 2012
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 62924494 (30.00 GiB 32.22 GB)
Array Size : 62924350 (30.00 GiB 32.22 GB)
Used Dev Size : 62924350 (30.00 GiB 32.22 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 967d2903:aa8db69d:1b516dc2:d1173d5e
Update Time : Sun Nov 25 18:11:40 2012
Checksum : 2b55bfab - correct
Events : 6589
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing)
--
/dev/sdb2 is not recognised by mdadm.
--- /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.2.0-0.bpo.3-amd64 root=/dev/mapper/VGag-root ro quiet
--- grub2:
insmod raid
set root='(VGag-root)'
linux /boot/vmlinuz-3.2.0-0.bpo.3-amd64 root=/dev/mapper/VGag-root ro quiet
insmod raid
set root='(VGag-root)'
linux /boot/vmlinuz-3.2.0-0.bpo.3-amd64 root=/dev/mapper/VGag-root ro single
insmod raid
set root='(VGag-root)'
linux /boot/vmlinuz-3.2.0-0.bpo.2-amd64 root=/dev/mapper/VGag-root ro quiet
insmod raid
set root='(VGag-root)'
linux /boot/vmlinuz-3.2.0-0.bpo.2-amd64 root=/dev/mapper/VGag-root ro single
insmod raid
set root='(VGag-root)'
linux /boot/vmlinuz-2.6.32-5-amd64 root=/dev/mapper/VGag-root ro quiet
insmod raid
set root='(VGag-root)'
linux /boot/vmlinuz-2.6.32-5-amd64 root=/dev/mapper/VGag-root ro single
--- grub legacy:
module /boot/vmlinuz-2.6.32-bpo.5-xen-686 root=LABEL=root at ag ro console=tty0
module /boot/vmlinuz-2.6.26-2-xen-686 root=LABEL=root at ag ro console=tty0
kernel /boot/vmlinuz-2.6.32-trunk-686 root=LABEL=root at ag ro quiet
kernel /boot/vmlinuz-2.6.32-trunk-686 root=LABEL=root at ag ro single
kernel /boot/vmlinuz-2.6.32-bpo.5-xen-686 root=LABEL=root at ag ro quiet
kernel /boot/vmlinuz-2.6.32-bpo.5-xen-686 root=LABEL=root at ag ro single
kernel /boot/vmlinuz-2.6.26-2-xen-686 root=LABEL=root at ag ro quiet
kernel /boot/vmlinuz-2.6.26-2-xen-686 root=LABEL=root at ag ro single
--- udev:
ii udev 164-3 /dev/ and hotplug management daemon
4a574fcd059040d33ea18a8aa605a184 /lib/udev/rules.d/64-md-raid.rules
--- /dev:
brw-rw---- 1 root disk 9, 0 Oct 30 21:35 /dev/md0
/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root root 9 Oct 30 21:35 ata-ST3808110AS_5LR2KL18 -> ../../sda
lrwxrwxrwx 1 root root 10 Nov 12 14:06 ata-ST3808110AS_5LR2KL18-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 ata-ST3808110AS_5LR2KL18-part2 -> ../../sda2
lrwxrwxrwx 1 root root 9 Oct 30 21:35 ata-ST3808110AS_5LR2V2PA -> ../../sdb
lrwxrwxrwx 1 root root 10 Nov 12 14:06 ata-ST3808110AS_5LR2V2PA-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 ata-ST3808110AS_5LR2V2PA-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Nov 12 14:09 dm-name-VGa-... -> ../../dm-8
lrwxrwxrwx 1 root root 10 Nov 12 14:09 dm-name-VGa-..._data -> ../../dm-9
lrwxrwxrwx 1 root root 10 Oct 30 21:35 dm-name-VGag-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-name-VGag-swap -> ../../dm-7
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-name-VGag-tmp -> ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-name-VGag-usr -> ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-name-VGag-var -> ../../dm-6
lrwxrwxrwx 1 root root 11 Nov 12 15:27 dm-name-VGb-... -> ../../dm-10
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-name-VGb-data -> ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-name-VGb-... -> ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-name-VGb-... -> ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-uuid-LVM-BEt5We0XwZ9DPDKIvftHPts0lTgz2fjTAxZWs2lzGqT4WXDA8pUm8VQBMhADeybO -> ../../dm-1
lrwxrwxrwx 1 root root 11 Nov 12 15:27 dm-uuid-LVM-BEt5We0XwZ9DPDKIvftHPts0lTgz2fjTWJWS5F422ifm4ONTXVQdNbCKfsJguG6A -> ../../dm-10
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-uuid-LVM-BEt5We0XwZ9DPDKIvftHPts0lTgz2fjTeMj7eqfcI0dCR11SZMgRako1OF10DbJR -> ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-uuid-LVM-BEt5We0XwZ9DPDKIvftHPts0lTgz2fjTn0LozzNIooUR17rmuKAf0m0XA1C5PQUm -> ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-uuid-LVM-JGOQZZCuAj9Pfm6iuLmudjUGAcG1yhOO1mG74P8q4DLYYAcGQCKUuv4nyHIyvLp7 -> ../../dm-7
lrwxrwxrwx 1 root root 10 Oct 30 21:35 dm-uuid-LVM-JGOQZZCuAj9Pfm6iuLmudjUGAcG1yhOO5PweujsR6ggdSu0L8DhFPp7RE0xQ5vry -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-uuid-LVM-JGOQZZCuAj9Pfm6iuLmudjUGAcG1yhOOJMypz6VcjwrK7N33gepn2qTz5JN3BQ1n -> ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-uuid-LVM-JGOQZZCuAj9Pfm6iuLmudjUGAcG1yhOOOMs6R70OCvJPZa3Dwz221X1FWyXwt4EH -> ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 12 15:27 dm-uuid-LVM-JGOQZZCuAj9Pfm6iuLmudjUGAcG1yhOOXSKTMmwGLJsH0ZSIOdiD9fybRIEoGb7j -> ../../dm-6
lrwxrwxrwx 1 root root 10 Nov 12 14:09 dm-uuid-LVM-aK1KQ1zzowjQFd5L9hMUKm4CJHEH8rGX6ySgQbApe3JkuUuAhCfP6JVeRcwf7cc7 -> ../../dm-8
lrwxrwxrwx 1 root root 10 Nov 12 14:09 dm-uuid-LVM-aK1KQ1zzowjQFd5L9hMUKm4CJHEH8rGXeBIF1zRBl3OTnX7NzHacxArhyTv9k87b -> ../../dm-9
lrwxrwxrwx 1 root root 9 Oct 30 21:35 md-name-ag:0 -> ../../md0
lrwxrwxrwx 1 root root 9 Oct 30 21:35 md-uuid-ddf61119:72425ddc:2cf6275c:6e1d1597 -> ../../md0
lrwxrwxrwx 1 root root 9 Oct 30 21:35 scsi-SATA_ST3808110AS_5LR2KL18 -> ../../sda
lrwxrwxrwx 1 root root 10 Nov 12 14:06 scsi-SATA_ST3808110AS_5LR2KL18-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 scsi-SATA_ST3808110AS_5LR2KL18-part2 -> ../../sda2
lrwxrwxrwx 1 root root 9 Oct 30 21:35 scsi-SATA_ST3808110AS_5LR2V2PA -> ../../sdb
lrwxrwxrwx 1 root root 10 Nov 12 14:06 scsi-SATA_ST3808110AS_5LR2V2PA-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 scsi-SATA_ST3808110AS_5LR2V2PA-part2 -> ../../sdb2
/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root root 9 Oct 30 21:35 pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Nov 12 14:06 pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 pci-0000:00:1f.2-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 9 Oct 30 21:35 pci-0000:00:1f.2-scsi-1:0:0:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Nov 12 14:06 pci-0000:00:1f.2-scsi-1:0:0:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 pci-0000:00:1f.2-scsi-1:0:0:0-part2 -> ../../sdb2
/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root root 10 Nov 12 14:09 052e5c3f-6c61-4868-a7fd-991b76c830b6 -> ../../dm-8
lrwxrwxrwx 1 root root 11 Nov 12 15:27 1d3ecbab-e247-4e63-adb1-8ed4d23aa3ae -> ../../dm-10
lrwxrwxrwx 1 root root 10 Nov 12 15:27 2037722b-47ca-49b6-91e9-e156f7dafabb -> ../../dm-7
lrwxrwxrwx 1 root root 10 Oct 30 21:35 2eea9cd6-5654-4230-a8d6-7197a635ec95 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 12 15:27 403d023e-9436-4a42-9732-1a620164caad -> ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 12 15:27 4c2cf947-8b1f-46b2-984b-7c41799e9b82 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 12 15:27 940dba38-8516-45bc-ae26-72c0184a4084 -> ../../dm-6
lrwxrwxrwx 1 root root 10 Nov 12 15:27 d065c130-1fd1-4a96-b4b1-04d6fae984a4 -> ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 12 14:09 d0a55b91-9369-4039-9a25-172073b9e301 -> ../../dm-9
/dev/md:
total 0
lrwxrwxrwx 1 root root 6 Oct 30 21:35 0 -> ../md0
Auto-generated on Sun, 25 Nov 2012 18:11:40 +0200
by mdadm bugscript 3.1.4-1+8efb9d1+squeeze1
-- System Information:
Debian Release: 6.0.6
APT prefers stable-updates
APT policy: (500, 'stable-updates'), (500, 'proposed-updates'), (500, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 3.2.0-0.bpo.3-amd64 (SMP w/4 CPU cores)
Locale: LANG=ru_UA.UTF-8, LC_CTYPE=ru_UA.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash
Versions of packages mdadm depends on:
ii debconf 1.5.36.1 Debian configuration management sy
ii libc6 2.11.3-4 Embedded GNU C Library: Shared lib
ii lsb-base 3.2-23.2squeeze1 Linux Standard Base 3.2 init scrip
ii makedev 2.3.1-89 creates device files in /dev
ii udev 164-3 /dev/ and hotplug management daemo
Versions of packages mdadm recommends:
ii module-init-tools 3.12-2 tools for managing Linux kernel mo
ii ssmtp [mail-transport-agent] 2.64-4 extremely simple MTA to get mail o
mdadm suggests no packages.
-- debconf information:
mdadm/autostart: true
* mdadm/initrdstart: all
mdadm/initrdstart_notinconf: false
mdadm/initrdstart_msg_errexist:
mdadm/initrdstart_msg_intro:
mdadm/initrdstart_msg_errblock:
* mdadm/start_daemon: true
* mdadm/mail_to: root
mdadm/initrdstart_msg_errmd:
mdadm/initrdstart_msg_errconf:
* mdadm/autocheck: true