Bug#714155: initramfs-tools: mdadm starts up before all needed devices are available
Brian Minton
brian at minton.name
Fri Jun 28 13:47:42 UTC 2013
My apologies. The expected behavior is that all the needed devices
(/dev/sdc, /dev/sda2, and /dev/sde) would have started up and become
available before mdadm ran, so that mdadm would assemble the array with
all three devices rather than in a degraded state.
The actual behavior is that /dev/sde only became available at timestamp
3.135237, while mdadm had already started assembling the array at timestamp
2.171754, i.e. roughly a second (3.135237 - 2.171754 ≈ 0.96 s) before all
devices had finished initializing.
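In case it helps, the sort of fix I'd hope for is roughly the following
(an untested sketch, not the actual mdadm hook: the script name is made
up, the 10-second timeout is arbitrary, and only the prereq boilerplate,
udevadm settle, and mdadm --assemble --scan --no-degraded are standard
pieces):

#!/bin/sh
# Hypothetical /etc/initramfs-tools/scripts/local-top/wait-md-members:
# wait for udev to finish probing disks before assembling the array,
# so late-arriving members like /dev/sde are present at assembly time.

PREREQ=""
prereqs()
{
    echo "$PREREQ"
}
case "$1" in
prereqs)
    prereqs
    exit 0
    ;;
esac

# Block until pending udev events (disk probing) have been handled,
# giving up after 10 seconds so boot cannot hang indefinitely.
udevadm settle --timeout=10

# Assemble only arrays whose members are all present; a degraded
# assembly could still be attempted later as a last-resort fallback.
mdadm --assemble --scan --no-degraded

The real change presumably belongs in mdadm's own initramfs hook rather
than in a local script like this, but the idea is the same: settle first,
then attempt a non-degraded assembly.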
thanks!
On Thu, Jun 27, 2013 at 4:51 AM, Michael Prokop <mika at debian.org> wrote:
> reassign 714155 mdadm
> thanks
>
> * Brian Minton [Wed Jun 26, 2013 at 08:59:06AM -0400]:
> > Package: initramfs-tools
> > Version: 0.113
> > Severity: normal
>
> > Dear Maintainer,
>
> > bminton@bminton:~$ dmesg|grep sde
> > [ 3.114799] sd 8:0:0:0: [sde] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
> > [ 3.115888] sd 8:0:0:0: [sde] Write Protect is off
> > [ 3.119758] sd 8:0:0:0: [sde] Mode Sense: 00 3a 00 00
> > [ 3.119808] sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
> > [ 3.134660] sde: unknown partition table
> > [ 3.135237] sd 8:0:0:0: [sde] Attached SCSI disk
> > [45018.662644] md: export_rdev(sde)
> > [45018.748222] md: bind<sde>
> > [45018.772292] disk 2, o:1, dev:sde
> > bminton@bminton:~$ dmesg|grep md1
> > [ 2.164616] md: md1 stopped.
> > [ 2.171754] md/raid:md1: device sdc operational as raid disk 0
> > [ 2.172104] md/raid:md1: device sda2 operational as raid disk 1
> > [ 2.173021] md/raid:md1: allocated 3282kB
> > [ 2.173416] md/raid:md1: raid level 5 active with 2 out of 3 devices, algorithm 5
> > [ 2.174093] md1: detected capacity change from 0 to 3000603639808
> > [ 2.177433] md1: unknown partition table
> > [45018.773937] md: recovery of RAID array md1
>
> > Here's some info about my RAID setup (/proc/mdstat, then mdadm --detail /dev/md1):
>
> > Personalities : [raid6] [raid5] [raid4]
> > md1 : active raid5 sde[3] sdc[0] sda2[1]
> > 2930276992 blocks level 5, 64k chunk, algorithm 5 [3/2] [UU_]
> > [===>.................] recovery = 19.4% (285554336/1465138496) finish=684.0min speed=28741K/sec
>
> > unused devices: <none>
> > /dev/md1:
> > Version : 0.90
> > Creation Time : Wed Jun 3 09:16:22 2009
> > Raid Level : raid5
> > Array Size : 2930276992 (2794.53 GiB 3000.60 GB)
> > Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> > Raid Devices : 3
> > Total Devices : 3
> > Preferred Minor : 1
> > Persistence : Superblock is persistent
>
> > Update Time : Wed Jun 19 11:22:09 2013
> > State : clean, degraded, recovering
> > Active Devices : 2
> > Working Devices : 3
> > Failed Devices : 0
> > Spare Devices : 1
>
> > Layout : parity-last
> > Chunk Size : 64K
>
> > Rebuild Status : 19% complete
>
> > UUID : bfa46bf0:67d6e997:e473ac2a:9f2b3a7b
> > Events : 0.2609536
>
> > Number Major Minor RaidDevice State
> > 0 8 32 0 active sync /dev/sdc
> > 1 8 2 1 active sync /dev/sda2
> > 3 8 64 2 spare rebuilding /dev/sde
>
> [snip package information]
>
> I'm not sure what initramfs-tools could do about that, AFAICS it's
> an issue with mdadm's i-t hook, so reassigning to mdadm.
>
> PS: It would be nice to write a few more words about
> misbehaviour/expected behaviour and not just c/p some logs into a
> bug report.
>
> regards,
> -mika-