[TriLUG] Setting Up RAID-5

Kevin Flanagan kevin at flanagannc.net
Fri Dec 8 06:33:46 EST 2006



I agree that the hit you take with software RAID for parity computation 
and the like isn't that high for home use, with the systems that are 
common these days.  Spending a hundred or two extra bucks will get you a 
decent enough system to run OpenNAS, or the like, quite happily.  I'm 
intrigued by Coraid's ATA over Ethernet idea, www.coraid.com, but I won't 
get a chance to try it at work and it's too much dough for home.
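
On the software RAID side, the setup itself is only a handful of mdadm 
commands.  A minimal sketch, with hypothetical device names (point it at 
whatever your disks really are):

    # build a three-disk RAID-5 array out of sdb1, sdc1 and sdd1
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1
    # watch the initial parity sync
    cat /proc/mdstat
    # put ext3 on it and mount it somewhere
    mke2fs -j /dev/md0
    mkdir -p /srv/raid
    mount /dev/md0 /srv/raid

That initial parity sync is where most of the CPU cost lives, and on any 
halfway modern box the disks, not the XOR math, are the bottleneck.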

    I don't even bother with any RAID _at home_; the stuff that's really 
important to me is backed up, more than one way.  I suppose I should 
move to a single volume for simplicity, but in some ways it's easier 
to have a couple of disks NFS/SMB-mounted everywhere.
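
(The mounts themselves are one-line fstab entries apiece, something 
like

    fileserver:/export/media  /mnt/media  nfs  rw,intr  0 0

where the host and paths are whatever your boxes happen to be called.)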

    Work is another matter altogether.  We spend VERY large dough with 
EMC for their "big box o' disks" solution: Fibre Channel, SAN, NAS, and 
whatever the hell else they sell for centralized storage.  For disks 
locally attached to servers we only use HP ProLiants; their integrated 
RAID controllers seldom give us problems and frequently let us replace 
disks on the fly, so nobody knows but the person with the dead disk in 
their hand.  There's a price we pay for that....


It seems to me that Brian's goal is simplicity and cost efficiency; an 
OpenNAS box, or the like, with software RAID should do the trick.  Give 
it plenty of RAM and a halfway decent processor and you should be just 
fine.
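
If the box is going to sit unattended in a closet, also have mdadm 
watch the array and mail you when it degrades, so a dead disk doesn't 
go unnoticed for months.  Something like:

    # scan all arrays and mail root if one degrades
    mdadm --monitor --scan --daemonise --mail=root

Some distros set a monitor like this up out of the box, but it's worth 
checking.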



Kevin






Brian Henning wrote:
>> From: trilug-bounces at trilug.org [mailto:trilug-bounces at trilug.org] On
>> Behalf Of Jason Tower
>> 1. the bootloader is installed on the MBR of the first disk only.  if that
>> disk goes bye-bye you're in trouble.  sure, you can install lilo/grub on
>> multiple MBRs but that's a pita and an inelegant solution.
>>     
>
> I didn't think of that straight away, but the easy solution would be to have
> the OS not stored on the array.  A separate itty-bitty cheap HD for the OS;
> if it dies, who cares?  Reinstall.  Uptime is my bottom-most concern;
> throughput (on reads more so than writes) and data integrity/safety are top,
> in that order.
>
>
>   
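
On point 1: if you ever do want the array itself bootable, the usual 
workaround is to write grub to the second disk's MBR as well.  Inelegant, 
like Jason says, but it's only a few lines in the grub shell (disk names 
hypothetical):

    # map the second disk to hd0, then write grub to its MBR
    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)

Brian's little separate OS disk sidesteps the whole question, though.
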
>> 2. some sata chipsets flat out don't work with software raid 5; they will
>> crash the system hard either during initialization or under heavy i/o.
>> i've personally seen this happen on no less than three totally different
>> systems with multiple distros.
>>     
>
> I guess that would just point to a TIAS (try it and see) situation.  I've
> heard Promise called a good name in HD controllers, at any rate.
>
>   
>> 3. if a disk dies suddenly, the system is gonna crash regardless of raid,
>> because the kernel can no longer communicate with /dev/sdx; it just
>> disappears.  go ahead, set up software raid with hot-swap disks, then yank
>> one out while the system is running and see what happens.  the data itself
>> is probably ok (you'll have a degraded array upon reboot) but availability
>> is shot.  plus your fstab may no longer be accurate once a disk is removed.
>>     
>
> Yeah, but as I said in #1 above, uptime isn't nearly so big a concern.  If
> the machine crashes, oops.  I don't imagine ever writing "live" to the array
> (meaning any critical write operations, such as writing while sound is being
> recorded, will go to the workstation first and then get backed up to the
> RAID; if the system crashes during that backup write, I just copy the file
> again), and a crash during a read would be a nuisance, but less of a
> nuisance than losing all the data.  A crash from a disk's sudden death
> shouldn't (hopefully) be any more frequent than the sudden death itself.
>
>   
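
On point 3: you can at least rehearse the failure case from software 
instead of yanking a cable, which shows you how the md layer itself 
behaves (hypothetical device names again):

    # mark a member failed, pull it from the array, then re-add it
    mdadm /dev/md0 --fail /dev/sdc1
    mdadm /dev/md0 --remove /dev/sdc1
    mdadm /dev/md0 --add /dev/sdc1
    # rebuild progress shows up here
    cat /proc/mdstat

What that won't tell you is whether your sata controller survives a 
real hot unplug; on that part Jason is dead right.
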
>> there are probably workarounds for these issues; if they don't bother you,
>> knock yourself out with software raid.  i use it myself if circumstances
>> justify it.  but i've built enough systems to know that hardware raid
>> exists for a reason.  if you want to do the job right, get a 3ware card
>> and sleep peacefully.
>>     
>
> If this were some kind of four-nines, corporate, uber-important server, I'd
> do exactly that.  But I can't justify $500 just for the RAID card for this
> application.  If some day I'm doing sound recording engineering for
> sufficiently more people than just myself, I might be able to justify it. (-:
>
> Thanks for the input, still!  That's exactly the kind of stuff I'm looking
> to hear: the good, the bad, and the ugly.
>
> ~B
>
>   


