[sf-lug] separate partition for /home
Rick Moen
rick at linuxmafia.com
Mon Mar 28 11:29:40 PDT 2011
Quoting Akkana Peck (akkana at shallowsky.com):
> I see all kinds of recommendations like this for swap size, but it's
> all "$X says to do $Y." I hate following instructions blindly.
> Has anyone ever seen an article explaining any of this? I haven't.
> Understanding what's really going on would make the decision clearer.
This whole conversation reminds me of a situation I was in around 2006,
which I posted about to SVLUG's discussion list a couple of years
later. (I had worked at the pseudonymous 'VARco', referred to below.)
Even given the passage of years, I'd personally still stick to 2 GB
swap partitions. Here's the relevant part of my SVLUG post.
A wise IT greybeard said: "You can always tell the pioneers by the
arrows sticking out of their backs." That has some corollaries,
including the undesirability of using code in even moderately unusual
or seldom-invoked ways. I have an anecdote about that:
Some number of years back, a Linux hardware VAR sold many of its 1U and
2U Opteron and Xeon servers (and a certain number of 4U ones) to its
biggest customer, whom we'll call Bigco. Each machine had mucho grande
disk and RAM, with swap to match.
VARco tended to follow its Linux techs' intuition and established
practice in keeping each swap partition to no more than 2 GB -- which
meant several swap partitions per drive to reach Bigco's spec of 32 GB
total swap during the RHEL3 load on the VAR's assembly line.
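To make the arithmetic concrete, here's a small Python sketch of how a
per-partition cap turns a total swap target into a partition layout.
(This is illustrative only -- the function name and hard-coded numbers
are mine, not anything from VARco's actual build tooling.)

    # Quick sketch: split a total swap target into partitions no larger
    # than a per-partition cap.  Numbers below match the anecdote.
    def swap_partition_plan(total_gb, max_partition_gb=2):
        full, remainder = divmod(total_gb, max_partition_gb)
        plan = [max_partition_gb] * full
        if remainder:
            plan.append(remainder)
        return plan

    # Bigco's 32 GB spec at a 2 GB cap -> sixteen 2 GB swap partitions.
    print(swap_partition_plan(32))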
But then came the day when a Bigco executive said he wanted not only
32 GB of swap space on each host, but all of it in a single partition
on the boot drive. Such systems were duly delivered despite the VAR
experts' slightly vague but consistent misgivings: of course you try
to do things right, but you _must_ make the customer happy.
The customer soon reported that the systems were hanging hard in
production use.
Extensive stress-testing followed, using CTCS (a hardware-testing
suite that runs parallelised Linux kernel compiles, memtest86,
badblocks + iozone disk testing, and other tests simultaneously).
The results:
  - With the customer-specified single 32 GB swap partition, the test
    suite induced a system hang in one day.
  - With a pair of 16 GB swap partitions, CTCS hung the box in two days.
  - With four 8 GB swap partitions: five days.
  - With as many 2 GB swap partitions as the limits on SCSI device
    numbers then permitted, CTCS ran apparently _indefinitely_
    (ten-plus days) without problems.
Bigco's load image was duly modified; the cause was retroactively
attributed to a previously unknown bug in the RHEL3 kernel's VM code.
Could VARco experts have _said_ in advance "Don't do that, it risks
triggering a VM bug"? Nope. The most they could have said was "Gee,
this other way is what we _recommend_ because it's extremely well tested
by huge numbers of people and as such is known-good."
Would VARco experts have ever said "Do it this other way, or you're
likely to have problems"? Nope. It's merely intuition guided by
experience; it's almost never any kind of certain recipe.
If some VARco guy had posted on svlug at lists.svlug.org, in
advance of observed hangs and CTCS results, mentioning VARco's prejudice
/ recommendation, I have no doubt several people would have had a field
day saying "That's silly. You should go with [something else] because
of [reason foo]." Probably, the VAR expert would have smiled and said
"You _may_ be right."