[luau] Restoring a Software RAID1 System (RH9)

Blake Vance blake_vance at hotmail.com
Tue Oct 14 00:45:01 PDT 2003


Greetings,

To upgrade (and to quell any concerns about a potentially bad motherboard), a 
failed 40-GB HDD and a good 80-GB HDD were placed into a newer box (a Dell 
Dimension 4400). During the boot, all the necessary configuration changes 
were made. Not too unexpectedly, 'cat /proc/mdstat' still yields:

Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 hdc3[1]
522048 blocks [2/1] [_U]

md1 : active raid1 hdc2[1]
18924480 blocks [2/1] [_U]

md0 : active raid1 hdc1[1]
104320 blocks [2/1] [_U]

unused devices: (none)

'dmesg | less' yields (among much else):
md: invalid raid superblock magic on hda5
md: hda5 has invalid superblock, not importing!
md: could not import hda5!
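
Before doing anything destructive to hda, I figured I could at least confirm 
what the kernel sees on both drives with a couple of read-only commands (just 
my guess at a sane first step):

fdisk -l /dev/hda /dev/hdc    # list both partition tables; RAID members should show type fd
cat /proc/partitions          # partitions the kernel actually registered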

Should I run 'mke2fs -c /dev/hda*' to see if the HD can be salvaged? Is that 
safer than running 'umount /dev/hda*' and then 'badblocks -f /dev/hda*'?
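
To clarify the second option, the read-only check I have in mind would look 
roughly like this (the hda partition numbers are a guess on my part; hda5 is 
the only one dmesg named):

umount /dev/hda5          # make sure nothing on the failed drive is mounted
badblocks -sv /dev/hda5   # read-only surface scan; -s shows progress, -v is verbose

as opposed to 'mke2fs -c', which checks for bad blocks but also writes a 
brand-new filesystem over whatever is on the partition.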

If I understood Vince correctly at last week Monday's HOSEF meeting, the Linux 
root partition is also on the 80-GB drive. The failed drive is the primary 
master (since hda is missing from the output above, I suppose you knew that); 
hdc is the secondary master (more redundant info?). A new 80-GB HDD ("new", 
but the same model) has been purchased and is ready to be substituted for the 
faulty drive. Do I attach the new drive as the primary master? Then what?
raidhotadd /dev/md? /dev/hd?
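
In case it helps to see the whole sequence I have in mind, here is my rough 
guess at the steps once the new drive is in as the primary master (the device 
names and the hda1/hda2/hda3 layout are assumptions on my part, mirroring hdc):

sfdisk -d /dev/hdc | sfdisk /dev/hda    # copy hdc's partition layout onto the new drive
raidhotadd /dev/md0 /dev/hda1           # re-add each partition to its degraded array
raidhotadd /dev/md1 /dev/hda2
raidhotadd /dev/md2 /dev/hda3
cat /proc/mdstat                        # check the resync progress

Is that anywhere near right, or am I missing a step (the boot loader on the 
new drive, maybe)?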

If I don't respond right away, it's because I'm calling it a night.

TIA,
Blake
