RAIDframe - screech, smash.


RAIDframe - screech, smash.

Aaron Mason
Hey all,

Lately I've been using VMware to delve into the murky waters of
creating and maintaining RAID arrays, and I thought I'd get a bit
adventurous.  I compiled the kernel with the following config file:

# cat /usr/src/sys/arch/i386/conf/GENERIC.RAID

include "arch/i386/conf/GENERIC"

pseudo-device raid 8
option RAID_AUTOCONFIG

#

The commented-out default in the GENERIC file says 4 - I figured that
by changing it to 8 I could get enough raid devices for a rather large
nested RAID setup.
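
For completeness, the kernel build itself followed the usual
custom-kernel steps - a sketch, assuming a stock /usr/src tree:

# cd /usr/src/sys/arch/i386/conf
# config GENERIC.RAID
# cd ../compile/GENERIC.RAID
# make clean && make depend && make
# cp /bsd /obsd && cp bsd /bsd
# reboot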

My aim was a RAID 1 array built from four RAID 0 arrays, each made of
four 10 GB "disks".  Setting up the first four arrays went without
incident, but when I went to create the RAID 1, raidctl said that
raid4 didn't exist.  I ran MAKEDEV to try to create the remaining four
sets of device nodes (to complete the eight), to no avail, so I
created the last set of RAID device files by hand and went on to
create the array.  That went without incident until I initialised the
array, at which point the kernel panicked.
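
For the device-node step, what I tried was along these lines - a
sketch, assuming the stock MAKEDEV script accepts raid units above 3;
if it only knows raid0 through raid3 (which would match the "to no
avail" above), the remaining nodes have to be mknod'ed by hand with
the majors copied from the existing raid nodes:

# cd /dev
# sh MAKEDEV raid4 raid5 raid6 raid7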

As I didn't get the panic message the first time, I attempted to
recreate the error, and here's what I did, along with a summary of my
"system":

The VM has 16 10 GB SCSI disks.  Each one has the system on it, so
that the machine can still boot should any of the disks "fail" and a
reboot be needed at the same time.  The RAID 0 arrays are configured
thus:

# cat /etc/raid0_1.conf
START array
# 1 row, 4 drives (columns), no spares
1 4 0
START disks
# A list of the drives to use
/dev/sd4d
/dev/sd5d
/dev/sd6d
/dev/sd7d
START layout
# 128 sectors per stripe unit, 1 stripe unit per parity unit,
# 1 stripe unit per reconstruction unit, RAID level 0
128 1 1 0
START queue
# This establishes a FIFO queue of 100 requests
fifo 100
# cat /etc/raid0_2.conf
START array
# 1 row, 4 drives (columns), no spares
1 4 0
START disks
# A list of the drives to use
/dev/sd8d
/dev/sd9d
/dev/sd10d
/dev/sd11d
START layout
# 128 sectors per stripe unit, 1 stripe unit per parity unit,
# 1 stripe unit per reconstruction unit, RAID level 0
128 1 1 0
START queue
# This establishes a FIFO queue of 100 requests
fifo 100
# cat /etc/raid0_3.conf
START array
# 1 row, 4 drives (columns), no spares
1 4 0
START disks
# A list of the drives to use
/dev/sd12d
/dev/sd13d
/dev/sd14d
/dev/sd15d
START layout
# 128 sectors per stripe unit, 1 stripe unit per parity unit,
# 1 stripe unit per reconstruction unit, RAID level 0
128 1 1 0
START queue
# This establishes a FIFO queue of 100 requests
fifo 100
#
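
Each RAID 0 set was then configured and labelled along these lines - a
sketch, assuming raid0_1.conf maps to raid1 and so on, with an
arbitrary component serial number:

# raidctl -C /etc/raid0_1.conf raid1    (force configuration)
# raidctl -I 100 raid1                  (write component labels)
# raidctl -iv raid1                     (initialise parity, verbose)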

I would then clone the first disk's table:

# disklabel sd0 > disklabel.tpl
# for i in 1 2 3; do echo y | fdisk -i sd$i; disklabel -R sd$i disklabel.tpl; done
(whole heap of crap)

I would then create a RAID partition inside each RAID 0 device (see
the disklabel sketch after the config below), then initialise the
RAID 1, configured as:

# cat /etc/raid1.conf
START array
# 1 row, 4 drives (the RAID 0 devices), no spares
1 4 0
START disks
# A list of the drives to use
/dev/raid0d
/dev/raid1d
/dev/raid2d
/dev/raid3d
START layout
# 128 sectors per stripe unit, 1 stripe unit per parity unit,
# 1 stripe unit per reconstruction unit, RAID level 1
128 1 1 1
START queue
# This establishes a FIFO queue of 100 requests
fifo 100
#
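
The "RAID partition inside each RAID device" step, sketched for raid0
(the same goes for raid1 through raid3) - the partition letter and
size here are assumptions, whatever your label dictates:

# disklabel -E raid0
> a d
offset: [0]
size: [*]
FS type: [4.2BSD] RAID
> w
> q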

Then I would configure and initialise the RAID 1 array:

# raidctl -C /etc/raid1.conf raid4    (force configuration)
# raidctl -I 100 raid4                (write component labels, serial 100)
# raidctl -iv raid4                   (initialise parity, verbose)

This is the point where all hell broke loose the first time - but not
this time.

Unfortunately I didn't capture the panic message the first time.
After I set the arrays to autoconfigure, with raid4 as the root
device, I rebooted, only to find that while the first four RAID 0
arrays appeared, the root one did not - in fact the first four's
parity statuses were "DIRTY".  As a result it booted from the first
hard drive as it did before.
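
For reference, the autoconfigure setup was essentially this (raidctl
-A is the documented knob; -P should clean a DIRTY parity status
after an unclean shutdown):

# raidctl -A yes raid0      (likewise raid1 through raid3)
# raidctl -A root raid4     (autoconfigure and use as root)
# raidctl -P raid0          (check and rewrite parity if DIRTY)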

I'll be trying this again on a fresh install of OpenBSD to try to
catch the panic message.  In the meantime, if anybody can shed some
light on why what I've done hasn't worked, I'd really appreciate it.

Regards, and sorry for the TL;DR.

--
Aaron Mason AKA Absorbent Shoulder Man
Oh, why does everything I whip leave me?


Re: RAIDframe - screech, smash.

J.C. Roberts
On Thu, 23 Apr 2009 13:33:26 +1000 Aaron Mason
<[hidden email]> wrote:

> Hey all,
>
> Lately I've been using VMware to delve into the murky waters of
> creating and maintaining RAID arrays, and I thought I'd get a bit
> adventurous.  I compiled the kernel with the following config file:
>
> # cat /usr/src/sys/arch/i386/conf/GENERIC.RAID
>
> include "arch/i386/conf/GENERIC"
>
> pseudo-device raid 8
> option RAID_AUTOCONFIG
>
> #
>
> The commented-out default in the GENERIC file says 4 - I figured that
> by changing it to 8 I could get enough raid devices for a rather
> large nested RAID setup.
>

You were 21 days too late.

--
J.C. Roberts


Re: RAIDframe - screech, smash.

Aaron Mason
On Thu, Apr 23, 2009 at 9:46 PM, J.C. Roberts <[hidden email]> wrote:
>
>
> You were 21 days too late.
>
> --
> J.C. Roberts


Thanks for getting straight to the point; however, it's a tad vague,
and I see no posts from 21 days before you emailed me that would
suggest I'm "too late".  Could you elaborate, please?

Thanks

--
Aaron Mason AKA Absorbent Shoulder Man
Oh, why does everything I whip leave me?