Bug#398310: mdadm: let user choose when to start which array

lee lee at yun.yagibdah.de
Fri Dec 5 20:05:38 UTC 2008


Followup-For: Bug #398310
Package: mdadm
Version: 2.6.7.1-1


*** Please type your report below this line ***

The OP of this bug is right: the user needs to be told exactly which
arrays have been discovered, and he *must* be given the choice to
either start them now or not.

The only choice he is presented with is the question of which arrays
to start when booting the system. That has nothing to do with starting
them in the process of installing the mdadm package.


In my case, I have two disks used for a RAID-1:


   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        9119    73248336   fd  Linux raid autodetect
/dev/sda2            9120       29788   166023742+  fd  Linux raid autodetect
/dev/sda3           29789       36483    53777587+  fd  Linux raid autodetect

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        9119    73248336   fd  Linux raid autodetect
/dev/sdb2            9120       29788   166023742+  fd  Linux raid autodetect
/dev/sdb3           29789       36483    53777587+  fd  Linux raid autodetect


When the disks were new a few years ago, I tried to make a RAID-1
from the whole disks and then partition the array (on Debian i386).
That didn't work, so I had to partition the disks first and then
create the arrays from the partitions (see mdadm.conf below). The
UUIDs are:


cat:/etc/exim4# mdadm --examine --scan /dev/sda
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=c5460893:6fe1b92f:8d76d626:2a523555
cat:/etc/exim4# mdadm --examine --scan /dev/sdb
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=c5460893:6fe1b92f:8d76d626:2a523555

cat:/etc/exim4# mdadm --examine --scan /dev/sda1
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=ca34e190:39db09f0:390edcc4:35d74b5f
cat:/etc/exim4# mdadm --examine --scan /dev/sdb1
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=ca34e190:39db09f0:390edcc4:35d74b5f
cat:/etc/exim4# mdadm --examine --scan /dev/sda2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9f9a753b:eb70a81c:5ff8d522:9ec3586b
cat:/etc/exim4# mdadm --examine --scan /dev/sdb2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9f9a753b:eb70a81c:5ff8d522:9ec3586b
cat:/etc/exim4# mdadm --examine --scan /dev/sda3
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=ae296907:7c3dc6ef:763a7645:40ee5e12
cat:/etc/exim4# mdadm --examine --scan /dev/sdb3
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=ae296907:7c3dc6ef:763a7645:40ee5e12
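
For reference, creating the arrays from the partitions amounts to
something like the following (only a sketch, not necessarily the
exact commands I used back then):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3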


Please note that /dev/sda and /dev/sdb still have UUIDs from my first
attempt to create a RAID-1. How do I remove these?
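
I guess something like the following would wipe the stale whole-disk
superblocks, but I haven't dared to try it on the live system, and it
would have to hit only the old whole-disk superblocks, not anything
the running arrays use:

mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb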


This setup has been working fine on i386: the arrays were assembled
automatically, as they should be, during the installation of the mdadm
package when I started using the disks on a new Debian system.

A few days ago, I switched to amd64 (i.e. x86_64). When I installed
the mdadm package, I was asked which md devices should be started when
booting the system; then the devices were autodetected and immediately
started.

But they were detected via the ARRAY entries I have since commented
out (see below). That left me with /dev/md0 as a RAID-1 across both
disks (the whole disks!) and apparently broken /dev/md1 and /dev/md2.
A resync of that array had also been started automatically. fdisk -l
showed devices like /dev/md0p1 or /dev/md0p2, but these devices didn't
exist. They were also not listed in /proc/partitions --- which is
where the fdisk manpage says fdisk gets its information:


-l	    List the partition tables for the specified
	    devices and then exit.
	    If no devices are given, those mentioned in
	    /proc/partitions (if that exists) are used.
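
A quick check straight from /proc/partitions showed the same thing
(just a sketch of the command; at that point only md0 itself was
listed, with no md0p1/md0p2 entries):

grep md /proc/partitions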


Fortunately, I didn't lose my data. After the resync was finished ---
and I wonder what would have happened if I had stopped it, or if there
had been a power failure during the resync --- I stopped all md
devices, reassembled the arrays manually and put new entries for them
into mdadm.conf. But I didn't know whether it would work; I could just
as well have lost all the data.
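
Roughly what that manual recovery looked like (a sketch, not a
verbatim transcript of the session):

# stop all running md devices, including the bogus whole-disk md0
mdadm --stop --scan

# reassemble the arrays from the partitions
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2
mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3

# then put matching ARRAY lines into /etc/mdadm/mdadm.conf (see below)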

For one thing, I don't know why mdadm behaves differently on i386
than it does on x86_64. For another, the risk of losing data could be
greatly reduced, and the trouble avoided, if the mdadm package showed
the user during installation exactly which arrays have been detected
and gave him a choice to either start them now or not.

It is an *extremely bad* idea to just autodetect RAID arrays and to
automatically start and possibly resync them without the user having a
chance to verify that everything is correct during the installation of
the mdadm package, *before* any array is started.
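
What I'm asking for amounts to no more than this (a purely
hypothetical sketch, not the actual maintainer script, and the wording
of the question is made up):

# show the user exactly what was detected
echo "The following arrays have been detected:"
mdadm --examine --scan

# and only assemble/start them after explicit confirmation
printf "Assemble and start these arrays now? [y/N] "
read answer
if [ "$answer" = "y" ]; then
    mdadm --assemble --scan
fi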


For me, the point of using RAID is to make losing data less likely,
not more likely. That is exactly what most of the RAID levels are
for.

The OP reported this problem about two years ago. It's still there
...


-- Package-specific info:
--- mount output
/dev/hdb1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/hdb6 on /tmp type ext3 (rw)
/dev/md0 on /usr type ext3 (rw)
/dev/hdb5 on /var type ext3 (rw)
/dev/md1 on /home type ext3 (rw,errors=remount-ro)
/dev/md2 on /opt type ext3 (rw)

--- mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#ARRAY /dev/md0 level=raid1 num-devices=2 UUID=c5460893:6fe1b92f:8d76d626:2a523555
#ARRAY /dev/md0 level=raid1 num-devices=2 UUID=ca34e190:39db09f0:390edcc4:35d74b5f
#ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9f9a753b:eb70a81c:5ff8d522:9ec3586b
#ARRAY /dev/md2 level=raid1 num-devices=2 UUID=ae296907:7c3dc6ef:763a7645:40ee5e12

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=ca34e190:39db09f0:390edcc4:35d74b5f
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9f9a753b:eb70a81c:5ff8d522:9ec3586b
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=ae296907:7c3dc6ef:763a7645:40ee5e12



# This file was auto-generated on Wed, 03 Dec 2008 20:44:02 -0600
# by mkconf $Id$

--- /proc/mdstat:
Personalities : [raid1] 
md2 : active raid1 sda3[0] sdb3[1]
      53777472 blocks [2/2] [UU]
      
md1 : active raid1 sda2[0] sdb2[1]
      166023616 blocks [2/2] [UU]
      
md0 : active raid1 sda1[0] sdb1[1]
      73248256 blocks [2/2] [UU]
      
unused devices: <none>

--- /proc/partitions:
major minor  #blocks  name

   3    64  195360984 hdb
   3    65     979933 hdb1
   3    66   31246425 hdb2
   3    67   62500882 hdb3
   3    68          1 hdb4
   3    69   62500851 hdb5
   3    70   38130246 hdb6
   9     0   73248256 md0
   8     0  293057352 sda
   8     1   73248336 sda1
   8     2  166023742 sda2
   8     3   53777587 sda3
   8    16  293057352 sdb
   8    17   73248336 sdb1
   8    18  166023742 sdb2
   8    19   53777587 sdb3
   9     1  166023616 md1
   9     2   53777472 md2

--- initrd.img-2.6.27.7-cat-smp:

--- /proc/modules:

--- volume detail:

--- /proc/cmdline
root=/dev/hdb1 ro

--- grub:
kernel		/boot/vmlinuz-2.6.26-1-amd64 root=/dev/hdb1 ro 
kernel		/boot/vmlinuz-2.6.26-1-amd64 root=/dev/hdb1 ro single
kernel		/boot/vmlinuz-2.6.18-6-amd64 root=/dev/hdb1 ro 
kernel		/boot/vmlinuz-2.6.18-6-amd64 root=/dev/hdb1 ro single
kernel		/boot/bzImage root=/dev/hdb1 ro


-- System Information:
Debian Release: lenny/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 2.6.27.7-cat-smp (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash

Versions of packages mdadm depends on:
ii  debconf                       1.5.24     Debian configuration management sy
ii  libc6                         2.7-16     GNU C Library: Shared libraries
ii  lsb-base                      3.2-20     Linux Standard Base 3.2 init scrip
ii  makedev                       2.3.1-88   creates device files in /dev
ii  udev                          0.125-7    /dev/ and hotplug management daemo

Versions of packages mdadm recommends:
ii  exim4-daemon-heavy [mail-tran 4.69-9     Exim MTA (v4) daemon with extended
ii  module-init-tools             3.4-1      tools for managing Linux kernel mo

mdadm suggests no packages.

-- debconf information:
  mdadm/autostart: true
  mdadm/mail_to: root
  mdadm/initrdstart_msg_errmd:
* mdadm/initrdstart: all
  mdadm/initrdstart_msg_errconf:
  mdadm/initrdstart_notinconf: false
  mdadm/initrdstart_msg_errexist:
  mdadm/initrdstart_msg_intro:
  mdadm/autocheck: true
  mdadm/initrdstart_msg_errblock:
  mdadm/start_daemon: true




