[sf-lug] Rick's explanation of his internet setup.

Rick Moen rick at linuxmafia.com
Tue Jan 3 20:32:16 PST 2006


Quoting Adrien Lamothe (alamozzz at yahoo.com):

> The computer industry has been going the way of the 
> automotive industry - do things "good enough", but "cheap".

Indeed.

> Hardly anyone uses SCSI anymore...

People who keep critical data (e.g., database machines with real data
that matters) on PATA/SATA are just a little reckless, for well-known
reasons.  (Or, let's just say that backups and their age would matter a
whole lot.)  And no, RAIDing the drives doesn't fix that.  The reasons
relate to metadata treatment and caching, and you can read about that here:
http://www.findarticles.com/p/articles/mi_m0BRZ/is_6_23/ai_105884199

Any admin who, say, puts a production Oracle server (with non-throwaway
data) on PATA/SATA deserves to be fired.
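
Much of that risk boils down to the drive's volatile write-back cache: the
drive reports a write as complete while the data (filesystem metadata
included) is still sitting in cache, and a power hit at the wrong moment
scrambles things.  Purely as illustration, here's a minimal Python sketch
(assuming a Linux box with hdparm installed and a drive at /dev/sda; the
device name and exact output format are assumptions) to check whether that
cache is even switched on:

    # Check whether the drive's volatile write-back cache is enabled.  That
    # cache is what can lose in-flight metadata if a consumer ATA drive loses
    # power.  Assumes hdparm is installed, /dev/sda exists, and we have root.
    import subprocess

    def write_cache_enabled(device="/dev/sda"):
        """Return True if hdparm reports write-caching as on for the device."""
        out = subprocess.run(["hdparm", "-W", device],
                             capture_output=True, text=True,
                             check=True).stdout
        # Recent hdparm prints a line like " write-caching =  1 (on)"
        return "(on)" in out

    if __name__ == "__main__":
        print("write-back cache on:", write_cache_enabled())
        # Trading speed for safety would be:  hdparm -W 0 /dev/sda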

> ...many people think SATA is just as fast....

With enough write-behind cache memory thrown at it, one can fool
oneself into thinking that.  Where speed for the _dollar_ is
concerned, it's actually true, because the SCSI vs. ATA pricing gap has
widened even further over the decade or so I've followed this issue.
(More about that, below.)  But that isn't very _safe_ speed, in the
sense of protecting your current data, for the reasons indicated.  Whether the
additional risk of losing a bunch of data back to your last good backup
tape is justified is of course a judgement call, and situation-dependent.
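
One quick way to see how much of the apparent speed is the cache rather
than the platters is to time writes with and without forcing them out to
stable storage.  A minimal sketch (the path and sizes are arbitrary
placeholders):

    # Time buffered writes vs. writes forced out with fsync(), to show how
    # much apparent "speed" is really just write-behind caching.
    import os
    import time

    def time_writes(path, flush_each=False, count=200, size=64 * 1024):
        data = b"x" * size
        start = time.monotonic()
        with open(path, "wb") as f:
            for _ in range(count):
                f.write(data)
                if flush_each:
                    f.flush()
                    os.fsync(f.fileno())  # push data and metadata toward the disk
        elapsed = time.monotonic() - start
        os.remove(path)
        return elapsed

    if __name__ == "__main__":
        print("buffered: %.3fs" % time_writes("/tmp/cachetest"))
        print("fsync'd:  %.3fs" % time_writes("/tmp/cachetest", flush_each=True))

Of course, a drive that fibs about cache flushes can make even the fsync'd
number look better than it deserves, which is the whole problem.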

[...]

> It means the CPU is free for other activity
> during a large portion of the data transfer. Under IDE,
> the CPU is occupied for the entire period of data transfer.

Yeah, but don't forget that the CPU sits heavily underused on most Linux
deployments, so CPU loading from ATA isn't as big an issue as you
might think.  The exception is during RAID restriping (rebuild)
operations, where (e.g.) SATA-based RAID5 arrays tend to have pretty
impaired performance and heavy loading until the rebuild completes. 
That's part of the downside of the money savings.
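
If you want to watch that happen, Linux software RAID (md) exposes rebuild
state in /proc/mdstat.  A minimal sketch that just reports any
resync/rebuild in progress (md-specific; a hardware RAID card reports this
through its own tools):

    # Report any md resync/rebuild in progress by scanning /proc/mdstat,
    # which is where the rebuild penalty discussed above becomes visible.
    def rebuild_status(path="/proc/mdstat"):
        with open(path) as f:
            for line in f:
                line = line.strip()
                # md prints e.g. "[==>...]  recovery = 12.6% (...) finish=74.5min"
                if "recovery" in line or "resync" in line:
                    print(line)

    if __name__ == "__main__":
        rebuild_status()
        # The kernel's rebuild throttle is tunable via
        # /proc/sys/dev/raid/speed_limit_min and speed_limit_max.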

> By the way, SCSI is still expensive. 

More so all the time.  It's apparently largely a market effect,
resulting from relative production volumes and from the positioning of
SCSI/SAS equipment as "specialty gear" to a greater degree than before,
which in turn artificially bumps the prices and widens that gap.

But, if you run Oracle, or it's your corporate NFS farm, that's what you
buy, because as expensive as the gear is, losing your data to metadata
scrambling when a SATA RAID array freezes up is _mondo_ expensive.

> So, how much are you willing to pay for a "smoking" system?

As stated, the actual intended comparison was _the same_ total cost for
the server-balanced system versus the gamer-type one.  

I'm not at all sure, in 2006, that the former sort of machine would most
reasonably _be_ SCSI-based.  If you could live with the aforementioned
extra risk of data lossage in the event of array failure, the machine
might be based on a Tekram SATA card using an Areca chip (cheap!), or
maybe even just the Intel ICH7R or SiI 3112/3114 motherboard-embedded SATA
that comes bundled with many commodity systems these days.  And a
pair of good HDs doing RAID1, maybe WD Raptors.  More at:
http://linuxmafia.com/faq/Hardware/sata.html
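
For the RAID1 part, Linux software RAID (md) would be one cheap way to do
it.  A minimal sketch that merely assembles the mdadm command line for a
two-drive mirror (the device names are hypothetical, and it prints the
command rather than running it, since mdadm --create clobbers whatever is
on the member devices):

    # Build (but do not execute) an mdadm command line for a two-drive RAID1.
    # Device names below are placeholders.
    import shlex

    def raid1_create_cmd(members=("/dev/sda1", "/dev/sdb1"), array="/dev/md0"):
        cmd = ["mdadm", "--create", array, "--level=1",
               "--raid-devices=%d" % len(members)] + list(members)
        return " ".join(shlex.quote(c) for c in cmd)

    if __name__ == "__main__":
        print(raid1_create_cmd())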

But one thing's for damned sure:  It would have immensely better and
faster mass storage than a typical gaming box.  And less wasted money on
overblown CPU and video.





