[conspire] I7000

Edmund J. Biow biow at sbcglobal.net
Sun Feb 22 18:28:45 PST 2009


> Date: Sat, 21 Feb 2009 23:25:06 -0800
> From: Rick Moen <rick at linuxmafia.com>
> Subject: Re: [conspire] I7000
> To: conspire at linuxmafia.com
>> I've been shopping around for a good distro for my home made Via Samuel
>> CPU box (maybe 800 MHz but really more like a PIII 500) with 384 MB of
>> RAM and that is a slog.
> Well, the VIA C3-series (ex-Cyrix) CPUs do have that cmov CPU-instruction
> issue:  You have to make sure that the kernel is _not_ one with i686
> instruction support, i.e., you have to force use of a kernel compiled
> for i586/MMX.
> That's the big issue with _all_ of the VIA ex-Cyrix CPUs, and I keep
> seeing people doing strange things to contend with it.  (Personally, I
> am wary of such CPUs; just too much gratuitous almost-compatibility for
> my taste.)
Yes, I've been bitten by the Cyrix CPUs' lack of 686 extensions
before: lots of Linux CDs just reboot as soon as they reach the GRUB
screen, particularly the Fedora/Red Hat/CentOS line.  Luckily, Debian
and Ubuntu still retain i386 kernels (actually, a bit of a misnomer,
since the i386 Debian Linux kernel requires at least a 486, IIRC).
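For anyone wanting to check a box before burning a stack of discs: the
kernel exposes the instruction flags in /proc/cpuinfo, so the decision
can be sketched like this (the flags string below is illustrative, not
a real Samuel dump; on a live system you'd feed in the output of
`grep ^flags /proc/cpuinfo`):

```shell
# Sketch: pick a kernel flavor from a CPU flags line.  The VIA C3
# "Samuel" advertises MMX but not cmov, so an i686 kernel won't run.
kernel_flavor() {
    case " $1 " in
        *" cmov "*) echo "i686 kernel OK" ;;
        *)          echo "no cmov: use an i486/i586 kernel" ;;
    esac
}

# Illustrative flags line, roughly what a cmov-less VIA part reports:
kernel_flavor "fpu de tsc msr cx8 mtrr pge mmx"
```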
>> Maybe the slow 3.2 GB hard drive is a culprit, but this thing
>> actually ran Red Hat 7-9 passably in the past, I just think Linux is
>> getting heavier.
> No, I'm betting it's primarily a combination of that VIA CPU, which is
> severely underpowered even for its era, and not bothering to seriously,
> _seriously_ look through the process table and make an actual decision
> about what to run and what not to run. 
Could be, but I ran another Via C7 system for years as my primary
local file and web server, and when I was in the attic, it did double
duty as a desktop machine, listening to music, web surfing, email,
burning discs (never had a bad burn, either), etc.  It started out
with a 160 GB hard drive and 256 MB of RAM, and ended up with a 500 GB
and a 750 GB drive and 512 MB of RAM; for most of its career it ran Sarge,
which by default had a 2.4 kernel.  My little Asus Terminator ran like
a champ from when Sarge was released as stable until it was no longer
supported.  But I upgraded it to Etch last year and it no longer
worked so well.  Onboard sound was staticky, so I had to install a
Sound Blaster card.  And after a dist-upgrade I started to get
occasional freeze-ups under heavy Samba transfers.  I put Hardy on it
and had the same Samba freeze problem.  I don't know if it was a
hardware issue or a software one, quite possibly the meager 170 watt
power supply was getting funky after 3+ years of 24/7 duty.  I have to
say, I installed everything and its brother on that poor box, several
window managers and desktop environments though I normally used KDE
3.2 under Sarge.  I did use sysv-rc-conf to pare down a few services,
but 512 MB was actually adequate for my purposes; htop showed I had
RAM to spare most of the time.  The poor beast still gets fired up
every couple of weeks so I can back up my current server using NFS,
which seems more robust than Samba on that rig.
> You cannot hope to have reasonable performance on low-spec machines with
> default distro configurations.  That goes triple for the runtime state
> of live CDs.  I guess I'm just old-fashioned, but it's blindingly
> obvious to me that you would simply have to take full charge of your
> configuration, know why you're running each and every process, and find
> out why (and if) each process is necessary or useful, through the
> obvious expedient of shutting them off and seeing if you miss them.  If
> you're not doing that, then you're not yet serious about getting the
> best out of Linux on that machine.
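A quick first pass at that survey, as a sketch (assumes the procps
`ps`; anything unfamiliar near the top of the list is a candidate for
investigation and possible disabling via sysv-rc-conf or the like):

```shell
# List the ten fattest processes by resident memory, biggest first.
# On a 384 MB box, a handful of these usually account for the slowness.
ps -eo rss,comm --sort=-rss | head -n 11
```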
> Honestly, the biggest single change I've seen spread around the Linux
> user community since early days is that an increasing number of people,
> and not just raw newcomers, seriously think you're _done_ when the
> distro installer terminates -- that it's not a necessary and obvious
> next step to keep working at the details of the installation until it
> fully meets _your_ needs.  That's what I've always done, and what's
> always been the way to get best results.  Moreover, it's a logical
> consequence of recognising that Linux puts _you_ in charge of your own
> system.  You're handed that control on a platter.  Why refuse to
> exercise it?
Well, I'll give it a shot with my back porch Via Ubuntu LXDE machine
and report back.  What other programs besides rcconf/sysv-rc-conf and
removing extra programs should I be looking at?  I don't need more
than 2 TTYs.  Maybe using sysctl to reduce swappiness would help.  I
actually have plenty of RAM on that rig but I think that install still
tries to use the swap partition.
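For the swappiness part, the knob is the `vm.swappiness` sysctl
(default 60 on kernels of that era; 10 is a common low value, not a
magic number).  A sketch of persisting a lower setting, run here
against a scratch file since touching the real /etc/sysctl.conf needs
root:

```shell
# On the real box: conf=/etc/sysctl.conf, and run the commands under sudo.
conf=$(mktemp)

set_swappiness() {
    # Append the setting only if no swappiness line is present (idempotent).
    grep -q '^vm\.swappiness' "$1" || echo 'vm.swappiness = 10' >> "$1"
    # To apply immediately on the live system (needs root):
    # sysctl -w vm.swappiness=10
}

set_swappiness "$conf"
set_swappiness "$conf"      # second run changes nothing
cat "$conf"
```

On a sysvinit Debian of that vintage, the spare TTYs are trimmed the
same low-tech way: comment out the extra getty lines (tty3 through
tty6) in /etc/inittab.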

>> Actually, if they will do the trick for you, I'd recommend trying out
>> Puppy Linux or even DSL (Damn Small Linux).
> I would recommend aiming for something more satisfactory and far less
> limiting.
> Again, this bad micro-distro recommendation, which I keep hearing in
> these situations, results from the underlying error of assuming that
> you're done when the distribution installer finishes.  I didn't
> understand, for quite a while, why anyone would recommend something as
> limited as DSL as a long-term solution for a machine, for no better
> reason than having only circa-2000 CPUs and total RAM. 
> Sorry, but DSL and Puppy Linux are very limited setups, and you would
> use them only because you are _radically_ short on hardware, especially
> on disk space (like, 200 MB), or because for some reason you need an
> absolutely barebones, tiny live CD.  They are completely unsuitable for
> systems that you intend to use longterm. 
Depends on your needs. I have a friend who scrambled Windows 2000 by
installing a defective stick of RAM.  I sent him a 200 MB Slax
(Slackware based KDE) CD so he could get his data off of it and
reinstall Windows. He was so impressed (or intimidated about
reinstalling after the long set of instructions that I gave him) that
he ended up using the live CD for years to listen to his music and
watch his pr0n.  He bought a C2D laptop with Vista & 2 GB of RAM last
year, but he says that Slax feels faster as a live CD on his Duron 700
with 256 MB for his limited and rather sticky purposes.

The Seamonkey version of Puppy is quite a nice environment for light
web surfing & multimedia.  There is even a version called Mediapup
that will load into RAM on systems with 512+ MB and is designed for
video editing and DVD authoring: kino, avidemux, k9copy, GIMP, etc.,
though I doubt it compares favorably to ArtistX if you have modern
hardware.
> And, more to the point, they are horribly unsatisfactory compared to,
> say, installing Debian _and not stopping_ with setup until it's
> configured and pared down appropriately.  Like, say, starting up only
> what is required, carefully disabling the load of anything not strictly
> necessary, and using something like IceWM or similar.
Puppy's repository is very limited, but I understood that DSL is
actually a very stripped-down Debian based on the 2.4 kernel.  I've
seen references to people even upgrading it to use the big window
managers, though I wouldn't recommend it.
> And that is why I picked antiX SimplyMEPIS as something to serve as a
> starting point for Mike Kirk:  Although I've never tried it, going by
> the description, it sounds like it would _install_ a somewhat reasonable
> system for a PII with 256MB and an 8GB hard drive -- a relatively
> minimalist IceWM-based Debian-derivative distro with no "desktop" stuff. 
> I did _not_ expect that it would give satisfactory performance from the
> live CD default bootup:  To the contrary, the 256 MB RAM limitation and
> run-from-CD operation ensure that it'd be functional but slow.  The
> point is, it would function and be able to install to HD without hassle,
> and _then_ would function reasonably -- in addition to being
> maintainable from standard package archives, and not be hampered by
> extremely nonstandard architecture the way, say, DSL is.
> I don't want to say I don't respect what the Puppy and DSL maintainers
> have accomplished:  It's always great to have a functional X11-based
> live CD in a 50-85 MB business-card-sized CD image (as has been known
> since, ahem, the Linuxcare Bootable Business Card). 
I still have one of those.  I should fire it up again, I picked it up
when I was new to Linux and didn't know many of the programs.  These
days USB sticks are cheap, very portable and robust, and large enough
to host a full distro.  The problem with older machines is finding
BIOSes that will boot from a USB drive.
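When the BIOS does cooperate, writing the stick itself is just a raw
copy of a hybrid ISO image.  A sketch, demonstrated here against
scratch files, since a real run (dd to the stick's /dev/sdX) needs
root and destroys whatever is on the stick:

```shell
# Stand-ins for a real ISO and a real USB device node:
iso=$(mktemp)
stick=$(mktemp)
head -c 1M /dev/urandom > "$iso"            # fake 1 MB "ISO image"

# The actual imaging step -- on real hardware this would be
#   sudo dd if=debian.iso of=/dev/sdX bs=4M conv=fsync
dd if="$iso" of="$stick" bs=64k conv=fsync 2>/dev/null

cmp -s "$iso" "$stick" && echo "identical"
```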
> However, achieving
> that sort of miniaturisation requires a whole bunch of compromises --
> ones that are absolutely not in the interest of someone installing a
> distribution to a system with an 8GB hard drive.
Heck, I'd need more than that just to house the stuff I downloaded off


I will follow the good side right to the fire, but not into it if I can
help it.
        -- Michel Eyquem de Montaigne


