[conspire] Sat, 1/10 Installfest/RSVP
Nick Moffitt
nick at zork.net
Mon Jan 12 10:00:34 PST 2009
Rick Moen:
> (Which makes me wonder why _you_ would particularly seek a stable
> binary userspace interface, by the way.)
Because it saves me the work of recompiling or upgrading software that
is running a production service. Things break, and the time it takes to
test that an upgrade won't throw errors to a waiting public is time
taken out of development, new deployment, and repairs and maintenance.
Also, I'm not the one doing all the development for the services I
maintain. I need to be able to tell developers to target (say) Hardy,
and not "the latest possible in the repos as of noon on Thursday" or
similar. Not everything is willing to refresh on weekly timescales.
> On all of my production machines, I'm happiest if I'm able to keep
> them incrementally upgraded on an ongoing basis, which in my
> experience (given a suitable Linux distribution, e.g., Debian
> testing/unstable) leads to the fewest and least serious problems
> overall.
Yeah, I used to think this way too. I just spent too much time in the
#debian channels on IRC when the topic was set to "STOP NOBODY
DIST-UPGRADE ZOMFG". I managed to bork my own systems too many times on
unstable. And then testing really didn't live up to my expectations.
I was very skeptical, especially since Debian Stable is so embarrassing
and because Dapper was kind of a rocky start for the LTS thing. But
it's been much better to just stick with LTS for the core OS and then
surgically backport any necessary New Hotness. Set up a custom apt repo
or four and you get automatic win!
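To be concrete, the pinning half of that looks roughly like this (the
internal repo hostname, package name, and priorities here are made up
for illustration; adjust to taste):

```
# /etc/apt/sources.list.d/internal.list -- hypothetical backport repo
deb http://apt.example.internal/ubuntu hardy-backports main

# /etc/apt/preferences
Explanation: keep the LTS archive authoritative by default
Package: *
Pin: release a=hardy
Pin-Priority: 700

Explanation: surgically pull just this package from the internal repo
Package: mailman
Pin: origin apt.example.internal
Pin-Priority: 900
```

With priorities set up that way, `apt-get upgrade` tracks plain Hardy
for everything except the packages you've deliberately pinned higher.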
> In other words, far from balking at "upgrading every six months", my
> strong preference is to upgrade weekly or better.
Yeah, so do I: to security updates. I want to get those USNs!
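On Hardy-era systems that amounts to letting unattended-upgrades track
only the security pocket, something like this (a sketch; the exact
origin string depends on the release):

```
# /etc/apt/apt.conf.d/50unattended-upgrades (sketch)
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu hardy-security";
};

# /etc/apt/apt.conf.d/10periodic -- run the nightly check
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```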
> > When you manage hundreds of servers, upgrading (even in apt-based
> > distros!) can be a chore that gets in the way of real work.
>
> Well, not when they're on Debian. You may recall that VA Linux had no
> problem running the corporate desktop system (not servers, but they
> could have deployed the system to servers, too) using Debian packages
> pushed out nightly via rsync and cron.
Those desktops all served a pretty singular role. Having hundreds of
servers on various hardware types and different networks with vastly
different roles is a completely different scenario. I maintain that the
system at VA worked only because it was a desktop system that did not
have the uptime requirements of a production server.
Consider, just for an example, the horror that is the Mailman package in
Debian/Ubuntu: it forces you to throw away all mail currently in-queue
during an upgrade, even if it's a -* release (identical upstream source,
only maintainer-applied changes). It basically punishes you for actually
*using* Mailman.
So if you're upgrading mailman to every new package that appears, that's
a regular nightmare of shutting down MTAs, flushing queues, deleting
apparent spam with prejudice, etc. That's simply unacceptable for a
production mailing list server with zillions of messages flowing
through.
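Spelled out, the dance looks something like this (a sketch of the
workaround, not a blessed procedure; the MTA, init scripts, and queue
path are illustrative, and it all runs as root):

```
#!/bin/sh
# Drain everything before letting the package upgrade nuke the queue.
/etc/init.d/exim4 stop            # stop accepting new list mail

# wait for Mailman's qrunners to empty their own queues
while [ -n "$(find /var/lib/mailman/qfiles -type f 2>/dev/null)" ]; do
        sleep 10
done

/etc/init.d/mailman stop
apt-get install mailman           # the upgrade itself
/etc/init.d/mailman start
/etc/init.d/exim4 start           # reopen the floodgates
```

Multiply that by hundreds of servers and "just upgrade weekly" stops
sounding like a time-saver.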
It's a bit of a pathological example, but it's not an unpopular package!
And that seems to be just the *deliberate* brokenness during an upgrade:
I refer again to the #debian channel topic problems.
--
"No, I ain't got a fax machine! I also ain't got an     Nick Moffitt
Apple IIc, polio, or a falcon!"                         nick at zork.net
                -- Ray, Achewood 2006-11-22