[TriLUG] Article Link: Slashdot Post on Disk Throughput...

Jon Carnes jonc at haht.com
Wed Aug 7 11:49:08 EDT 2002


Haven't read the article yet, but in the past I've done quite a bit of
reading up on this, and run some real-world tests here at HAHT.  The winner
on throughput for us is Hardware RAID5 on a SCSI disk subsystem using LVD
disks and a Mylex controller running on a Linux server.
RAID 1 and RAID 5 gave very similar read results, and the write results
weren't significantly different either (though RAID 1 was slightly faster).
Both RAID 1 and RAID 5 significantly beat a single disk on reads.

My testing was done both after hours and during hours (while folks were
using the server for other reads and writes).  In all cases Hardware RAID 5
was either the significant winner, or so close that the difference was
negligible.
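For anyone who wants to run the same kind of check, a rough sequential
write-throughput test can be scripted with dd (a sketch only -- Jon doesn't
say which tools he used, and the 64 MB size and /tmp path here are my
assumptions, not his setup):

```shell
# Rough sequential write-throughput check (a sketch; size and path
# are assumptions, not the configuration used in the tests above).
TESTFILE=$(mktemp /tmp/tput.XXXXXX)
SIZE_MB=64

START=$(date +%s)
# conv=fsync makes dd flush to disk before exiting, so we time the
# actual write instead of just filling the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=$SIZE_MB conv=fsync 2>/dev/null
END=$(date +%s)

ELAPSED=$((END - START))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1   # date only has 1-second resolution
WRITE_MBS=$((SIZE_MB / ELAPSED))
echo "write: ${WRITE_MBS} MB/s"

rm -f "$TESTFILE"
```

As with the tests above, it's worth running this both after hours and during
hours, since other users' I/O will skew the numbers.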

Other interesting notes (tests done 2-3 years ago):

- Mylex beats the pants off of Adaptec when it comes to throughput.
- For secondary and subsequent reads, the disk configuration made almost no
  difference (on a Linux server using 300+ MB for cache).
- Default Linux disk caching beats the pants off of Windows disk caching.
- Servers that are disk I/O bound (such as mail servers and file servers)
  run 6x to 8x faster with a SCSI disk subsystem.

Jon
-----Original Message-----
From: trilug-admin at trilug.org [mailto:trilug-admin at trilug.org]On Behalf
Of William W. Ward
Sent: Wednesday, August 07, 2002 10:38 AM
To: trilug at trilug.org
Subject: [TriLUG] Article Link: Slashdot Post on Disk Throughput...


Since Tanner's been discussing different file system choices for the new
servers, I figured the enclosed link/article would be of interest to some of
you out there thinking about different RAID, file system and expected
throughput numbers with your disks.  The actual good stuff is in the
comments, so be sure to set the comment filter to 2 or better and read
through many of them.

Amongst the better-moderated comments, I found a good explanation of why
RAID 5 is not a good choice for throughput OR redundancy in today's age, a
short blurb about using hdparm to improve disk throughput under Linux, and
ways to eliminate single points of failure for SANs, arrays and whatnot.
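For reference, the hdparm blurb boils down to two invocations (a sketch;
/dev/hda is a placeholder device, and both commands need root, so the script
below only prints the tuning command rather than running it against a real
disk):

```shell
# hdparm tuning as described in the Slashdot comments (a sketch;
# /dev/hda is a placeholder -- substitute your own disk, and run the
# real commands as root).
DISK=/dev/hda

# hdparm -tT $DISK     : benchmark buffered (-t) and cached (-T) reads
# hdparm -d1 -c1 $DISK : enable DMA and 32-bit I/O, the usual IDE wins
# Re-run the tuning line from a boot script (e.g. rc.local) so the
# settings survive a reboot.
TUNE_CMD="hdparm -d1 -c1 $DISK"
echo "$TUNE_CMD"
```

Run the -tT benchmark before and after tuning to see whether the change
actually helped on your hardware.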

Working for a major company that throws money at problems rather than works
through system tuning and appropriate planning, I'm extremely interested in
this sort of information.  When I finally relocate to the home data center,
I want to be able to subtly make some changes to improve the infrastructure
and get some real performance increases without spending any more money on
bad solutions.  Anytime I come across these types of articles, I think it'd
be a great idea for someone to collect them and assemble them into a "best
practices" book for the sub-management admins doing the grunt implementation
and tuning.

Link is:
http://ask.slashdot.org/article.pl?sid=02/08/06/159221&mode=thread&tid=137

Article starts off:
Mr. Jackson asks: "What kind of disk transfer rates (MB/s) do people get in
the real world when moving around large (100s MB) files? Either every
machine in our building is mis-configured, or our notions about what we were
getting are way off. I've tested half a dozen machines, mostly Win2k, some
Linux, by just copying a large file and timing it with a watch. 8 MB/s seems
to be about average for inter-disk copies. RAID 1 (striped) got as high as
12 MB/s after fiddling with cache settings. RAID 5 was as low as 2 MB/s. We
all thought the numbers should have been around 30 MB/s."


_______________________________________________
TriLUG mailing list
    http://www.trilug.org/mailman/listinfo/trilug
TriLUG Organizational FAQ:
    http://www.trilug.org/~lovelace/faq/TriLUG-faq.html



