Bug#784874: Fwd: Bug#784874: mdadm --re-add Segmentation Fault

Christoffer Hammarström christoffer.hammarstrom at linuxgods.com
Tue May 19 09:37:06 UTC 2015


The RAID was a degraded RAID 6 array of 5 disks with one disk missing, so
only one disk of redundancy remained: "[UUUU_]" in /proc/mdstat.

There was a malfunction in the hard drive enclosure, so it momentarily
lost contact with the 4 active disks, after which /proc/mdstat showed
"[_____]", meaning no active disks at all.

I'm not sure whether --re-add is supposed to work in this scenario; in any
case, --stop followed by --assemble worked fine.
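For reference, the recovery described above amounts to roughly the following sketch. The array name /dev/md/storage comes from this report; the --scan form of --assemble is an assumption, since the exact invocation used was not given:

```shell
# Stop the failed array so the kernel releases the member devices.
# Requires root. /dev/md/storage is the array name from this report.
mdadm --stop /dev/md/storage

# Re-assemble the array from the superblocks on the member disks.
# The --scan form (using /etc/mdadm/mdadm.conf) is an assumption here;
# listing the component devices explicitly would also work.
mdadm --assemble --scan /dev/md/storage

# Check the array state; for this degraded 5-disk RAID 6 it should
# show [UUUU_] again once the 4 surviving disks are back.
cat /proc/mdstat
```

These commands require root and real md member devices, so they are shown only as an operational sketch of the sequence the reporter used.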

Thank you for your help!

 / C

On Thu, May 14, 2015 at 8:07 AM, NeilBrown <neilb at suse.de> wrote:

> On Wed, 13 May 2015 11:22:47 +0200 Christoffer Hammarström
> <christoffer.hammarstrom at linuxgods.com> wrote:
>
> > Yes, I'm sorry, I thoughtlessly ran
> >
> >     mdadm --stop /dev/md/storage
> >
> > before reporting the bug.
>
> Well that explains some of it.  I'd really like to know what the state
> of the array was when mdadm crashed - that code path should hardly ever
> be reached.
>
> Anyway, the fix is at:
>
>
> http://git.neil.brown.name/?p=mdadm.git;a=commitdiff;h=2609f339028a6035a3fadb1190b565438000e35c
>
> in case the Debian maintainer wants to pick it up.
>
> Thanks for the report.
>
> NeilBrown
>
> >
> > I managed to reassemble the raid later with --assemble.
> >
> >     / C
>
>
