[luau] Re: ata vs scsi

Warren Togami warren at togami.com
Fri Mar 15 02:34:03 PST 2002


----- Original Message -----
From: "R Scott Belford" <scott at belford.net>
To: <warren at togami.com>
Sent: Thursday, March 14, 2002 11:43 PM
Subject: ata vs scsi


>
> He says that ata drives are not engineered the same way and are not as
> reliable for 24/7 operation.  I suppose that if I use a hot spare in my
> array, this matters less.  If money was not an object for you and you
> were building a mission-critical box, would you use the ata raid
> solution.  Why?  Do you know of a few high volume servers using this
> solution?  I saw that amdmb used scsi for its ultimate box?  I would be
> really interested in your insights here, Warren, if you have a few
> moments.
>
> scott
>

It is true that SCSI is faster (especially 15,000 rpm SCSI vs. 7,200 rpm IDE)
and uses slightly less CPU to control.  Also, SCSI disks are manufactured on
average to last 2-3 years longer than the average IDE disk.  I don't
remember where I read that statistic though...
So yes, if you want to be 99.99999% safe rather than 99.9999% safe, I'd go
with SCSI and a known, good controller like Adaptec, especially if cost
isn't a major consideration.  (DO NOT BUY AMI SCSI controllers.)  I'd hate
to make Linux look bad to your employer/client if the combination of the
3Ware card and the particular brand of disks you buy turns out to be
unstable.

Otherwise, yes I do know of a vendor that extensively uses ATA RAID for
massive amounts of hot-swap storage.
http://www.raidzone.com/

These guys make custom-built NAS arrays with their own custom ATA RAID
controller.  It is an impressive setup: a RAID 5 configuration using a
journaling filesystem like ReiserFS or ext3 (ReiserFS the last time I asked,
although I'd think ext3 is more reliable).  They get extra performance by
storing the filesystem journal on a non-volatile RAM disk or something like
that.  I think they are now using either 120GB or 160GB ATA hard disks.  The
result is a rather affordable 2TB+ network storage device that comes
preconfigured with Samba, NFS and Netatalk, with some kind of proprietary
web-based interface for administration.
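
If you are curious how the journal-on-RAM-disk trick works in general, you
can approximate it on a stock Linux box by giving ext3 an external journal
device with the ordinary e2fsprogs tools.  Here is a rough sketch in Python
(just wrapping the mke2fs commands); the device names are made up, and I'm
only guessing that this resembles what raidzone actually does internally:

# Rough sketch: put an ext3 filesystem's journal on a separate, faster
# (e.g. NVRAM-backed) block device.  Device names are hypothetical.
import subprocess

JOURNAL_DEV = "/dev/nvram0"   # hypothetical non-volatile RAM block device
DATA_DEV    = "/dev/md0"      # hypothetical RAID 5 array device

# Format the small fast device as a dedicated external journal.
subprocess.run(["mke2fs", "-O", "journal_dev", JOURNAL_DEV], check=True)

# Create ext3 on the array and point its journal at the external device.
subprocess.run(["mke2fs", "-j", "-J", "device=" + JOURNAL_DEV, DATA_DEV],
               check=True)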

Hmmm.... I don't know what to say.
With both SCSI and ATA RAID, I would highly suggest extensive burn-in
stability testing before putting the box into operation.  This means
upgrading the OS with the latest patches and official kernel, then looping
heavy hardware-stressing programs like dnetc (for CPU crunching) and
bonnie++ (for filesystem and disk subsystem stability testing); a rough
sketch of such a loop follows below.  I'm not aware of a good memory
stability tester.  Does anyone know of one?
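
Something like this is what I mean by looping bonnie++ (a rough sketch in
Python; the mount point, user and pass count are placeholders, it assumes
bonnie++ is on the PATH, and dnetc would already be crunching away in the
background):

# Rough burn-in sketch: hammer the filesystem/disk subsystem repeatedly and
# stop loudly on the first failure.  Paths and counts are placeholders.
import datetime
import subprocess

TEST_DIR = "/mnt/raid/burnin"   # hypothetical mount point on the new array
PASSES   = 48                   # e.g. a couple of days of back-to-back runs

for i in range(PASSES):
    stamp = datetime.datetime.now().isoformat()
    print("[%s] bonnie++ pass %d of %d" % (stamp, i + 1, PASSES))
    # -d sets the test directory, -u the user to run as when started by root;
    # check=True makes any non-zero exit abort the burn-in immediately.
    subprocess.run(["bonnie++", "-d", TEST_DIR, "-u", "nobody"], check=True)

print("Burn-in finished without errors.")

If the box survives a couple of days of that while dnetc keeps the CPUs
pegged, I would feel much better about putting it into production.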

You could optionally craft your own kernel, but be aware of the pros and
cons:
PROS
- You have total control over your kernel version and patches.
- You can minimize kernel bloat for extra performance, which makes a big
  difference in many cases.
CONS
- Every time you need to upgrade your kernel because of a newly discovered
  security hole, you would need another similar machine, not in production,
  on which to redo the stability tests of your custom kernel on that
  hardware combination.  Of course you could simply patch your kernel,
  recompile, install and reboot, but there's no guarantee that it would be
  as stable as your last tested kernel.  That is, if you absolutely must
  maintain 24/7 uptime.

I can say with great confidence that Red Hat tests their latest release
kernels VERY WELL; if you have any stability problems with their kernels,
it is most likely a hardware problem.  Yes, there is always a small chance
that you will find a kernel bug there, but that chance is MUCH higher if
you build your own kernel.



