[conspire] Advice on Building a Computer

Rick Moen rick at linuxmafia.com
Sat Dec 29 20:30:04 PST 2007


(Disclaimer:  I haven't assembled a server from parts in a while, or
dealt with "white box" systems in general, but instead recently have
done work only on server boxen from the usual suspects.)

Quoting Mark Weisler (mark at weisler-saratoga-ca.us):

> ...Tyan 2468 motherboard...
> ...Lian Li V1200B Plus II case....

Very nice stuff.  Tyan has always made good-for-the-buck middle-grade
server gear.  They have a higher defect rate than, say, Abit, but
they're cheaper.  Lian Li is invariably just really nice, with no
qualifiers.

> This has two SCSI interfaces,
> SCSI A and SCSI B and is specified as Ultra 160 SCSI.

I hope you don't mind that U160 (and U320) is regarded as passe in 2007.
That is, a couple of years ago, the server industry transitioned away
from all parallel-SCSI gear, and went all-SAS.  (Why?  Because even with
low-voltage differential, the chain-length limits were becoming a
problem, cable cost was too high, cables were too wide, base heat
dissipation was too high, and device addressability was too limited.
Also, re-standardising on SAS reduced costs for manufacturers because
the basic SAS bus design is the exact same as SATA, except for some
extra circuitry primarily to support a large device address space.)
You'll still be able to get U160/U320 gear for a good long while, but
the writing's on the wall:  Futureproofing means going SAS, instead.

(I note without any intent to disparage that Tyan has discontinued the
Thunder K7X / S2468UGN.  It's a perfectly fine motherboard, and
certainly it would be the height of hypocrisy for me to make fun of
anyone using obsolete computing gear.)

> The intended use of the computer is as a server and a learning
> environment....

So, I notice the Lian Li isn't really suited for 19" rackmount use, and
trust that that's not an issue.  

5-7 server-grade SCSI drives emit a _lot_ of heat.  I have a great deal
of respect for Lian Li, but you should double-check to make sure this
case, which looks like it's really a heavy-duty workstation case, can
handle the heat load of that many drives plus a pair of Athlons.  

The 210mm width, if the unit were put on its side[1], equates to about 5U 
of standard rack height.  It's certainly very common for a 4U server
_rackmount_ enclosure to be able to handle 5-7 server HDs, but this
workstation box raises at least a little concern with me.  (Note that
SAS drives would be smaller and in general run cooler, not to mention
putting less draw on the PSU.)
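For anyone who wants to check my arithmetic, the conversion is trivial
(1U = 1.75 inches = 44.45mm, per the EIA rack standard):

```python
# Rough conversion of a case dimension to standard rack units.
# 1U = 1.75 inches = 44.45 mm (EIA-310 rack standard).
U_MM = 44.45

def rack_units(mm: float) -> float:
    """Return the height in rack units for a dimension given in mm."""
    return mm / U_MM

# The Lian Li's 210 mm width, if the box were laid on its side:
print(round(rack_units(210), 2))  # 4.72 -- call it 5U
```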

Anyhow, the two things that most often kill server parts, over the long
term, are heat buildup and overstrained PSUs.

>  http://www.power-on.com/atxges.html 

I note that this particular unit has a 460W rating.  I actually haven't
done the math to guesstimate how beefy a PSU your 5-7 drives + 2
Athlons + Tyan MB are likely to need -- as I said, I've mostly dealt
with preassembled systems lately -- but just wanted to caution that
even _after_ you figure that out, you should be aware that PSU wattage 
ratings need to be approached with skepticism.  Frankly, in general,
those figures are dangerously unreliable, which is why, if I were
assembling a system from parts, I'd indulge my old-fogy prejudice
towards sticking to Antec, Cooler Master, Enermax, PC Power & Cooling,
or in a pinch Sparkle aka SPI.  None other.  People tell me occasionally
that some others such as Seasonic are also good, and they may be right.
Point is, when Antec tell me a TruePower Quattro TPQ-1000 is good for
a kilowatt under a variety of loads, I tend to believe them -- and
believe that my components won't get fried by power spikes.
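If you do want to do the math, the back-of-the-envelope version looks
like this.  Every wattage below is my own ballpark assumption for
circa-2002 parts, not a measured or datasheet figure, so substitute real
numbers from your components' specs:

```python
# Back-of-the-envelope PSU sizing sketch.  All per-component wattages
# are assumed ballpark figures, NOT datasheet numbers -- replace them
# with real specs before trusting the result.
ESTIMATED_DRAW_W = {
    "Athlon MP CPU": 60,       # each, under load (assumption)
    "SCSI drive": 15,          # each, spinning (assumption; spin-up is higher)
    "motherboard + RAM": 50,   # assumption
    "fans, misc.": 30,         # assumption
}

def estimate_psu_watts(n_cpus: int, n_drives: int) -> float:
    """Estimated steady-state draw times a 1.5x headroom factor."""
    total = (n_cpus * ESTIMATED_DRAW_W["Athlon MP CPU"]
             + n_drives * ESTIMATED_DRAW_W["SCSI drive"]
             + ESTIMATED_DRAW_W["motherboard + RAM"]
             + ESTIMATED_DRAW_W["fans, misc."])
    # Generous headroom: PSUs run coolest and last longest well below
    # their rated maximum, and ratings are often optimistic anyway.
    return total * 1.5

print(estimate_psu_watts(2, 7))  # 2 Athlons + 7 drives -> 457.5
```

Which, with those assumed numbers, lands uncomfortably close to the
460W rating -- exactly the sort of result that should make you either
re-check with real figures or buy the bigger PSU.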

> I believe these are 68-pin interfaces rather than 80-pin because
> the 80-pin interface is, as I understand it, more often used
> for hot swappable drives and I don't think I have that in this case.

Yes.  80-pin drives, aka SCA SCSI drives, are designed to plug directly
into a hot-plugged SCSI backplane that assigns them a SCSI ID dynamically.  
68-pin drives are the conventional wide-SCSI U160/U320 ones.  You can,
however, buy cheap little converter widgets to turn one into the other,
if need be.  

So, the (D-shell) SCSI connectors _on_ motherboards are invariably 68-pin
("HD68"), though a common configuration is to mount that motherboard in
a large case with an SCA SCSI backplane (into which drives could be
plugged from the front panel), and you would then cross-connect the
backplane to the motherboard with a wide-SCSI ribbon cable.

In short, it's really the _case_ that determines whether you'd seek
68-pin SCSI or 80-pin SCA SCSI hard drives, not the motherboard.  The
Lian Li case you've specified doesn't have a SCSI backplane.  Hence, for
that case, you'd use conventional 68-pin SCSI drives, and set their SCSI
IDs via their individual drive jumpers.

> I think I want to put two or three drives on each interface. Thus I
> would need two ribbon SCSI interfaces like, I believe, this:
>  http://www.newegg.com/Product/Product.aspx?Item=N82E16812200077

Sure, you might as well.  "U160" (Ultra160) means a theoretical bus speed
maximum of 160MB/sec -- 80 million transfers/sec, 2 bytes at a time
(because it's "wide SCSI" cabling, with double-transition clocking).  In
theory, it's advantageous to split your drives between the two chains, 
because each HBA (host-bus adapter, aka controller) can do
disconnected-operation commands to each device, with the result that in
edge-case scenarios each drive on the chain _could_ be simultaneously
transferring data on or off, and you want to minimise the chance of a
maxed-out bus.  Also, each HBA has its own DMA channel to the CPUs, so
again you are minimising bottlenecking by splitting activity between the
chains.
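To make the bottleneck argument concrete, here's a toy calculation.
The 40MB/sec sustained-per-drive figure is an assumption for
illustration (a worst case where every drive streams at once), not a
measured number:

```python
# Toy bandwidth arithmetic for splitting drives across two U160 chains.
# The per-drive sustained rate is an assumed worst-case figure for
# illustration only.
BUS_MAX_MB_S = 160       # U160 theoretical bus ceiling
DRIVE_SUSTAINED_MB_S = 40  # assumed simultaneous per-drive streaming rate

def chain_utilisation(n_drives: int) -> float:
    """Fraction of one chain's capacity used if all drives stream at once."""
    return n_drives * DRIVE_SUSTAINED_MB_S / BUS_MAX_MB_S

# All 6 drives on one chain vs. 3 drives on each of two chains:
print(chain_utilisation(6))  # 1.5  -- a single chain would be saturated
print(chain_utilisation(3))  # 0.75 -- each of two chains keeps headroom
```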

I like the StarTech.com ribbon cable you point to at newegg.com, because
it includes an active terminator built right into the end of the cable.  
Without that, you'd have to muck around with terminator jumpers on the
final hard drive of the chain, which is a pain.  Having them built into
the cable means you can avoid that, leave all your drives unterminated,
and yet know the termination's exactly right because it's built in.

(SCSI termination in a nutshell:  Termination must be present at each end
of any chain, and must not be present anywhere else.  Active terminators
are better than passive because they actively adapt to load, keeping
termination impedance steady.  That's it.)
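Those two rules are simple enough to state as a sanity check.  The list
representation of a chain here is my own made-up illustration, not any
real tool's format:

```python
# Sanity-check SCSI termination per the two rules above: termination
# present at both physical ends of the chain, absent everywhere else.
# The list models devices (HBA included) in physical cable order;
# True = terminated at that position.  Purely illustrative.

def termination_ok(terminated: list[bool]) -> bool:
    if len(terminated) < 2:
        return False  # a chain needs two distinct ends
    ends_ok = terminated[0] and terminated[-1]
    middle_clear = not any(terminated[1:-1])
    return ends_ok and middle_clear

# HBA at one end, active terminator at the cable's far end:
print(termination_ok([True, False, False, True]))  # True
# Someone left a termination jumper on a mid-chain drive:
print(termination_ok([True, True, False, True]))   # False
```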

The page you cite doesn't mention cable length.  LVD (low-voltage
differential signalling) extends max. length to 12 metres -- 25 metres
for a point-to-point bus with only two devices -- which is very nice to
have, but applies only if all devices on the chain including the HBA do
LVD.  The fallback, legacy signalling mode is "SE" = single-ended, which
tops out at Ultra speeds with a maximum length of 3 metres (1.5 metres
with more than four devices); a single SE device on the chain drags the
whole bus down to those speeds, since U160/U320 timing requires LVD.

(LVD was in part an attempt to end the length-limit problem, along with
lowering voltages and thus size, heat, and unit costs.  Avoid at all
costs any component that just says "differential SCSI", as that means
the old HVD = high-voltage differential spec, which is incompatible and
useful only in specialised situations I'd rather not get into.)

Again, the industry has actually moved on, and left all of the above
mess and more in the dustbin of history, junking it all for SAS.

>  and some number of SCSI drives like this 68-pin one:
>  http://www.compuvest.com/Description.jsp?iid=355204

OK, sure, but that's low-capacity, relatively slow, and non-LVD.  That
is, it's a single-ended SCSI device, which all other things being equal
you would prefer not to have.  On the other hand, it's cheap. 

Let's face it, 18GB drives on 7200 RPM spindles are pretty last-decade.
If you want to spend a bit more, get one that's LVD-capable.

Notice it says "Fast Wide SCSI" for the interface electronics and
connector, which means a theoretical maximum bus speed of 10 million
transfers/sec x 2 bytes (a 2-byte-wide bus) = 20MB/sec.  (Don't confuse
that with "Wide Ultra SCSI", which doubles the clock for 40MB/sec.)
Like other theoretical bus limits, this really doesn't necessarily mean
much, because _actual_ speed tends to be limited by physical drive
access, which typically for any given hard drive tends to be a lot
slower than the theoretical bus limit.
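As a quick illustration -- the sustained media rate below is an assumed
ballpark for a late-1990s 7200 RPM SCSI drive, not a datasheet number:

```python
# Illustrate why the bus ceiling rarely matters for a single drive:
# effective throughput is the lesser of bus speed and media rate.
# The media rate is an assumed ballpark, not a datasheet figure.
BUS_CEILING_MB_S = 20   # Fast Wide SCSI theoretical maximum
MEDIA_RATE_MB_S = 12    # assumed sustained transfer off the platters

bottleneck = min(BUS_CEILING_MB_S, MEDIA_RATE_MB_S)
print(f"effective single-drive throughput ~{bottleneck} MB/s "
      f"({bottleneck / BUS_CEILING_MB_S:.0%} of bus ceiling)")
```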


[1] Not necessarily recommended.  Good cases are designed for a
particular type of airflow, bearing in mind that hot air rises.  
Putting a desktop box sideways interferes with this objective (if the
manufacturer bothered, which not all do).
