[conspire] CPU upgrade questions (was sous vide question.)
Rick Moen
rick at linuxmafia.com
Sat Oct 28 00:34:00 PDT 2017
More speechifying about computer hardware _balance_:
> [...] So, for example, if you do CADD, or chip design, or molecular
> modeling, or A/V, or gaming, that is more likely to substantially
> benefit than if, say, you do Internet serving, or databases, or many
> other sorts of common things that end up being RAM-bound or I/O-bound.
>
> For most things *I* do with Linux, the machines I do them on end up
> being RAM-bound or I/O-bound.[1]
[...]
>
> [1] Which general pattern I attribute to PeeCee equipment generally
> being overendowed with CPU grunt in order to compensate for Microsoft
> Windows being a CPU hog, whereas Linux is not.
(In passing: I've _never_ heard a server admin say 'If only this server
had a faster CPU.' More and faster RAM and I/O, yes, but never CPU.)
PeeCees in general being skewed towards excessive CPU for Windows's
benefit is one thing people often miss. Another is how and why
server-class machines are 'balanced' differently from workstation/laptop
machines. (Long years bias me towards seeing the industry through
server-admin eyes, generally.)
A typical server has fast and reliable I/O, first and foremost. The
mass storage is on whatever is the fastest, most-reliable interface
design of that generation[1], internal, modular, expandable (so as to
support RAID and other things), with effective heat control /
dissipation / monitoring. Currently, that interface is SATA or SAS[2]
(not counting exotic storage types like SANs). Previously, it was
various generations of SCSI. For RAM, the emphasis tends to be on speed
more than on capacity.[3]
Also, the assumption is that they might run full-out 24x7, so the
heat-handling is designed accordingly, ergo they are often obnoxiously
loud and intended to move a lot of air.
One of my frustrations is that nobody 'gets' the home server market,
which is to say that OEMs don't recognise the existence of the niche,
probably on compelling economic grounds that people like me are freaks
and not a significant market. FWIW, near-perfect matches keep turning
out to miss the mark in some way:
Example 1: My CompuLab Intense PC, intended to run my main home server.
I was acutely aware that CompuLab targeted it as a high-power yet silent
workstation rather than any kind of server, so before buying it I made
sure I got the answer to one question: Does the machine come back
online, or at least can it be configured to come online, after losing
and regaining power? And there's history behind my asking that
question:
The early AT-class cases and power supplies, up to about 1998, had no
problem in this area. Around that year, Intel's 'ATX' design for
motherboards and cases replaced the AT architecture, which in general
was A Good Thing -- except that most ATX power supplies, upon
losing power and then regaining it, came back up in a 'standby' state,
where the machine would remain not actually running but just warmed up
until someone hit a front-panel button. Why? Because the designers
were thinking like non-server people.
Here at Chez Moen, we encountered this mindset the hard way, when
Deirdre started running her personal deirdre.net domain on a ShuttlePC
'lunchbox'-sized box. It was quite attractive (coloured lights,
transparent plexiglas case), and reasonably quiet, but always came back
from any power outage in standby mode, because it had one of those
godsdamned workstation-oriented ATX PSUs. It turned out that Shuttle
offered no fix (because who would ever want a machine to turn back on
after losing power?), so Deirdre's only remedy was to add an
Uninterruptible Power Supply to the hardware stack, not because she
wanted to bridge power glitches as such, but solely so the Shuttle
wouldn't land in standby mode. One whole, huge, heavy, costly
outboard appliance, just because of a hardware design error.
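For what it's worth, the modern fix, where the hardware cooperates, is
a firmware setting usually named something like 'Restore on AC Power
Loss'. Server-class boards with a BMC even let you set it from the
running OS over IPMI. A minimal sketch, assuming ipmitool is installed
and the board actually has a BMC (consumer boxes like that Shuttle
generally don't):

    #!/usr/bin/env python3
    # Sketch: query and set the chassis power-restore policy via IPMI.
    # Assumes a board with a BMC and the ipmitool utility installed.
    import subprocess

    def power_restore_policy():
        """Return the power-restore policy the BMC reports."""
        out = subprocess.run(["ipmitool", "chassis", "status"],
                             capture_output=True, text=True,
                             check=True).stdout
        for line in out.splitlines():
            if "Power Restore Policy" in line:
                return line.split(":", 1)[1].strip()
        return "unknown"

    def set_always_on():
        """Tell the BMC to power the box back up whenever AC returns."""
        subprocess.run(["ipmitool", "chassis", "policy", "always-on"],
                       check=True)

    if __name__ == "__main__":
        print("Current policy:", power_restore_policy())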
So, with the CompuLab, I carefully ducked that missile. _However_,
my friend Duncan recently discovered another one I never even suspected:
If you try to boot it with no monitor connected to the HDMI port, it
refuses to boot because, hey, you don't have a monitor. Because, who
ever runs a machine headless? Just me and every other server
administrator in the world.
So, hey, I get to buy a little plug (from CompuLab) that lies to the
computer and says there's a monitor on its HDMI port.
https://www.amazon.com/CompuLab-fit-Headless-Display-Emulator/dp/B00FLZXGJ6
It's a bit annoying and a travesty that I should _have_ to keep around
special little plugs to lie to computers and tell them that
their phantom limbs are real -- but that's the world I live in, and this
is the sort of absurdity you can encounter if the vendor is oblivious to
server realities. (But, to be philosophical about this, it's probably
time I had a couple of these kicking around, anyway.)
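If you want to verify that one of these dummy plugs is actually fooling
the machine, the kernel's DRM subsystem will report what it thinks is
attached to each connector. A minimal sketch, assuming a Linux box
whose GPU driver exposes connectors under /sys/class/drm (connector
names vary by driver):

    #!/usr/bin/env python3
    # Sketch: report what the kernel believes is plugged into each
    # display connector. With an EDID-emulator dummy plug fitted, the
    # HDMI connector should read 'connected' even with no real monitor.
    from pathlib import Path

    for status_file in sorted(Path("/sys/class/drm").glob("card*-*/status")):
        connector = status_file.parent.name  # e.g. 'card0-HDMI-A-1'
        print(connector + ":", status_file.read_text().strip())

(That only confirms the plug from a running OS, of course; the
CompuLab's refusal to boot headless happens in firmware, before any of
this ever runs.)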
Example 2: My new toy, the Zotac, at minimum has appallingly bad
prospects for I/O expansion when viewed as a server. Mass storage
on a _quality_ interface is limited to exactly one SATA connector.
Possibly, one might also count additional storage in the SD/SDHC/SDXC
card reader, though I'm not sure about speed. External storage is
possible only on USB. (But it's pretty damned good for $125 w/ 1 year
mfr. warranty.)
[1] This is one of several reasons why all models of Raspberry Pi,
though suggested annoyingly often on Linux mailing lists as the basis
for a home server, are IMO unsuitable. Even the most advanced RPi,
the model 3B, supports mass storage on only qty 1 MicroSD slot and qty
2 USB 2.0 ports. Not even close to good enough, IMO. Too dodgy.
[2] SAS is the flavour-du-jour of SCSI, in case it's unfamiliar.
Surprisingly, it's physically and electrically almost the same, and SATA
drives can live and function perfectly on a SAS chain (though not the
reverse; SAS drives won't work on SATA controllers). (But unless you
do servers, you'll never encounter SAS.)
https://web.archive.org/web/20130629051809/http://old.steadfast.net/services/hdd.dedicated.hosting.php
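Incidentally, if you want to see which transport each drive in a box
is actually riding on (sata, sas, usb, nvme, and so on), util-linux's
lsblk will report it. A minimal sketch wrapping its JSON output:

    #!/usr/bin/env python3
    # Sketch: list each whole disk and the transport it's attached by,
    # using lsblk's JSON output (-J); -d suppresses partitions.
    import json
    import subprocess

    out = subprocess.run(["lsblk", "-J", "-d", "-o", "NAME,TRAN,MODEL"],
                         capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out)["blockdevices"]:
        print(f"{dev['name']}: tran={dev['tran']} model={dev['model']}")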
[3] As my friend Joey Hess, former key Debian developer, would testify,
you can get along perfectly fine for many home server needs with small,
low-power ARM-based microserver boxes like his BeagleBone Black and
newer BeagleBoard-X15, but I think it's a pity those max out at,
respectively, 512 MB and 2 GB of DDR3 SDRAM. Modern x86_64 CPUs, for
not many more watts, can support 8 GB, 16 GB, or in some cases more,
which in combination with hypervisors permits running multiple hosts for
different purposes without needing more hardware, consuming more space,
or drawing more power. Plus snapshotting and other useful VM side-effects.
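To make the multiple-hosts-on-one-small-box point concrete, here's a
minimal sketch using the libvirt Python bindings (the libvirt-python
package) to list the guests running under a local qemu/KVM hypervisor,
assuming libvirtd is up:

    #!/usr/bin/env python3
    # Sketch: enumerate running KVM guests and their RAM/vCPU budgets
    # via libvirt. Assumes libvirtd is running and you can reach the
    # qemu:///system socket.
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        flags = libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE
        for dom in conn.listAllDomains(flags):
            # info() -> [state, max mem KiB, mem KiB, vcpus, cpu ns]
            info = dom.info()
            print(f"{dom.name()}: {info[3]} vcpu(s), "
                  f"{info[1] // 1024} MiB max RAM")
    finally:
        conn.close()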