[conspire] Sat, 1/10 Installfest/RSVP
nick at zork.net
Mon Jan 12 14:53:45 PST 2009
> Again, this avoids entirely the need for a fixed _binary_ application
Note that I never said that I specifically *need* an ABI. I first said
that it wasn't an unreasonable desire, and then that I find it useful
for ensuring that multiple groups of people are all on the same page
with a minimum of communication: "Make it work on Hardy or it doesn't go
out." Sometimes what you need is to remove variables in as many places
as you can. I don't see this as "foolishness".
I kind of hand-wavingly used the term "ABI" but really intended it to
refer to a complete runtime. A package could fix an FHS bug, for
example, and really flummox a python script that naïvely expected files
to be in a particular location. A change in the command-line options to
a program can make an important shell script start spewing garbage and
spawning an unholy army of the dead. This sort of thing will bite
anyway, but better that it happen during a scheduled upgrade than
randomly on a weekly walk down dist-upgrade lane.
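For what it's worth, the defensive version of that naïve script isn't
hard. Here's a sketch that searches a list of candidate locations
instead of hardcoding one — the paths are purely illustrative, and a
scratch directory stands in for the "new" post-FHS-fix location:

```shell
# Look the file up in several candidate locations instead of hardcoding
# one, so a package fixing an FHS bug doesn't flummox the script.
# A scratch directory stands in for the "new" location; paths are made up.
fake_etc=$(mktemp -d)
echo "setting = 1" > "$fake_etc/foo.conf"   # pretend the package moved it here
conf=""
for candidate in /etc/foo/foo.conf "$fake_etc/foo.conf"; do
    if [ -r "$candidate" ]; then
        conf=$candidate
        break
    fi
done
echo "using ${conf:-nothing}"
```

It's not bulletproof, but it turns "script silently breaks after an
upgrade" into "script tells you where it looked".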
And then there are the rdiff-backup maintainers, who implement protocol
incompatibilities with *minor* revision number updates (awesome
software, but a mega-frustrating release process). Debian gleefully
packages each new version with the same package name, and you get
"Sorry, no backups over the wire today" until everything's in sync.
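One way to defuse that particular trap is to pin the known-good version
until both ends of the wire can move together. A minimal
/etc/apt/preferences sketch — the version string here is hypothetical;
substitute whatever your boxes actually agree on:

```
Package: rdiff-backup
Pin: version 1.2.*
Pin-Priority: 1001
```

A priority over 1000 makes apt stick to the pinned version even through
a dist-upgrade; `echo rdiff-backup hold | dpkg --set-selections` gets
you much the same effect with less ceremony.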
> By default, assume software is going to be available in -testing. If
> for arcane reasons of delay in clearing quarantine, it's temporarily
> not available there, append "-t unstable". Done.
> I had a few dozen server boxes at recent $EMPLOYER managed that way.
> It scales, and it works..
I think that this is an important difference here. A few dozen boxes are
about two or three racks' worth, I suppose. One person could reasonably
keep track of that without even resorting to written records for very long.
I'm dealing with a /24 and several private networks besides. Even with
a team of SysAdmins, it's easiest to minimize the uncertainty brought on
by regular upgrades across this many machines. In fact, the more people
you get involved on the team, the more valuable that kind of stability becomes.
One thing that also might make upgrades a little more expensive for me
than for you is that I keep /etc in revision control, and checking
changes in requires peer review of the diffs.
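To make that concrete: the review step is just a diff against the last
checked-in state. A sketch with plain git in a throwaway directory (the
real thing runs in /etc, and etckeeper automates most of this on
Debian-family systems; the identity and config file here are made up):

```shell
etc=$(mktemp -d)                            # stand-in for the real /etc
cd "$etc"
git init -q .
git config user.email admin@example.com     # hypothetical identity
git config user.name "Example Admin"
echo "PermitRootLogin no" > sshd_config
git add sshd_config
git commit -qm "baseline config"
# An upgrade rewrites the file; this diff is what goes out for peer review:
echo "PermitRootLogin forced-commands-only" > sshd_config
git diff
```

Only after a second pair of eyes signs off on that diff does it get
committed, which is exactly the overhead a casual dist-upgrade habit
doesn't account for.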
> You know, come to think of it, I _usually_ haven't even bothered to
> ensure that the queues were flushed, and still haven't lost anything.
> It's just never been an issue, so I hadn't spent time properly solving
> what hasn't been a problem.
Even on my private mailman installation (which hasn't been even
moderately busy since the free-sklyarov list shut down) the amount of
spam that accumulates in the qfiles, blocking my upgrades, is staggering:
[nick at frotz(/var/lib/mailman/qfiles)] for i in *; do echo -en "$i:\t"; sudo ls -1 $i/ | wc -l; done
The retry queue is the real killer, usually (especially after an
"unshunt"). I've taken to just covering my eyes, saying "It's all just
spam, right? Right?" and blasting it before resuming the upgrade. I
feel reasonably comfortable doing that on my private server. Production
mailing lists for an important service? Not quite so much.
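For the record, "blasting it" amounts to no more than this — sketched
here against a scratch directory standing in for /var/lib/mailman/qfiles,
since on a real box you'd stop the qrunners first and the deletion is
irreversibly destructive:

```shell
qdir=$(mktemp -d)                     # stand-in for /var/lib/mailman/qfiles
mkdir -p "$qdir/retry" "$qdir/shunt"
touch "$qdir/retry/1.pck" "$qdir/retry/2.pck" "$qdir/shunt/1.pck"  # fake entries
# On a real box: stop mailman's queue runners before doing this, then:
rm -f "$qdir"/retry/*.pck "$qdir"/shunt/*.pck
ls "$qdir/retry" | wc -l              # queue is empty; resume the upgrade
```

Any legitimate mail sitting in those queues is gone too, which is the
whole reason I flinch at doing it on a production list server.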
If you have some magic anti-backscatter system that keeps
conspire-bounces from hanging on to hundreds of undeliverable "Your
mail: V1 at kruh n0w! is being held for moderation" messages and
"Unrecognized command: satisfy ur womn" output, I'd love to hear it!
Otherwise I guess you're doing what I do, blasting qfiles and hoping
none of them are legitimate mails waiting for redelivery.
You are not entitled to your opinions.
    -- Nick Moffitt
       nick at zork.net