Bug#396582: Some additional info
Dan Pascu
dan at ag-projects.com
Thu Nov 2 09:46:07 CET 2006
On Thursday 02 November 2006 10:28, martin f krafft wrote:
> also sprach Dan Pascu <dan at ag-projects.com> [2006.11.01.2323 +0100]:
> > Also I've noticed something weird in the test you did. After failing
> > sde1 from md99 and stopping the array, when it was started with the
> > startup script it said it assembled md99 with 2 drives. The same was
> > said by mdadm --assemble later, as if between stopping and starting
> > it the failed drive was magically re-added. The message should have
> > been something like "starting degraded array with 1 drive (out of 2)"
> > if I'm not mistaken.
>
> No, because the drives were only marked as failed but not yet
> removed. On reassembly, they just get added again.
>
> Are you seeing different behaviour?
Yes. In my case, if I fail a drive, it remains in the array in a failed
state; but if I then stop the RAID array and restart it, the failed drive
is no longer there, as if it had been removed in the meantime, even though
I never issued a remove command. And when the array starts, it reports
that it came up degraded, with only 1 out of 2 drives.
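To make this concrete, the sequence I mean is roughly the following,
assuming a two-member RAID1 /dev/md99 built from /dev/sdd1 and /dev/sde1
(apart from sde1, the member names are only placeholders):

  mdadm /dev/md99 --fail /dev/sde1                # sde1 is marked faulty (F)
  cat /proc/mdstat                                # both members still listed, [2/1] [U_]
  mdadm --stop /dev/md99
  mdadm --assemble /dev/md99 /dev/sdd1 /dev/sde1
  cat /proc/mdstat                                # on my system the failed member is
                                                  # gone here and md99 runs degraded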
>
> > Personalities : [raid1]
> > md1 : active raid1 sdb2[1] sda2[0]
> > 231609024 blocks [2/2] [UU]
> > bitmap: 5/221 pages [20KB], 512KB chunk
> >
> > md0 : active raid1 sdb1[1] sda1[0]
> > 12586816 blocks [2/2] [UU]
> > bitmap: 12/193 pages [48KB], 32KB chunk
>
> You are using bitmaps, I am not. Maybe that's the cause?
The presence of bitmaps doesn't influence this. I got the same behavior
with or without them.
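For reference, with a recent mdadm an internal write-intent bitmap can be
added to or removed from a running array with --grow, so the two
configurations can be compared without recreating the array:

  mdadm --grow /dev/md99 --bitmap=internal        # add an internal bitmap
  mdadm --grow /dev/md99 --bitmap=none            # remove it again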
>
> Could you please recreate the problem from scratch and show me *all*
> steps?
As I said, I'm currently unable to do this, but I will as soon as I get
access to the system again. I will redo the whole setup on a test system
running under VMware, record every step I take, and save the output of
the commands I run.
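My plan is to run the whole session under script(1) so everything is
captured verbatim; roughly like this (the log path is just an example):

  script /root/md99-test.log                      # start recording the session
  cat /proc/mdstat
  mdadm --detail /dev/md99                        # state before failing a member
  mdadm /dev/md99 --fail /dev/sde1
  mdadm --stop /dev/md99
  mdadm --assemble /dev/md99 /dev/sdd1 /dev/sde1
  mdadm --detail /dev/md99                        # state after reassembly
  exit                                            # stop recording; output is in the log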
--
Dan