[sf-lug] SF-LUG & BALUG: System OS upgrades *soon*(?) - volunteer(s)? [DON'T "REPLY ALL" TO BOTH LISTS UNLESS YOU'RE SUBSCRIBED TO BOTH]

Michael Paoli Michael.Paoli at cal.berkeley.edu
Mon Jan 23 19:05:35 PST 2012


Jim, thanks for passing this along - thought you might.  :-)
I'm also passing it along to "BALUG-Talk".

Also, I *probably* will make it to Noisebridge this Wednesday
evening (6-8p).  I'll confirm Wednesday afternoon if I'm able to make
it (and will confirm at *least* an hour in advance (before 5pm),
and probably a fair bit earlier than that (perhaps before 3pm)).

I'm also intending to make the SF-LUG 2012-02-05 meeting.

Anyway, between those meetings, hopefully we'll have the upgrades pretty
well planned out and can execute them relatively soon after.

I did also get the host "vicki" a wee bit closer to prepared yesterday.
Some wee reference excerpts are added further towards the tail end of this email.


> From: jim <jim at well.com>
> Subject: Re: SF-LUG & BALUG: System OS upgrades *soon*(?) - volunteer(s)?
> Date: Sun, 22 Jan 2012 20:51:51 -0800

>     I think this is a great volunteer project for anyone
> who's interested in linux system administration. Let us
> know if you're interested in helping or learning.
>     We can meet up either at the next regular SF-LUG
> meeting (Sunday February 5 from 11 AM to 1 PM at the Cafe
> Enchante on Geary at 26th Ave) or at the Linux Discussion
> Group meeting at Noisebridge (every Wednesday evening
> from 6 to 8 PM) or both--don't worry about coordinating
> between multiple meetings, we'll take care of that.
>     Note that Michael is an expert Linux sys admin with
> a particularly good sense of best practices.
> On Sun, 2012-01-22 at 18:25 -0800, Michael Paoli wrote:
>> SF-LUG & BALUG: System OS upgrades *soon*(?) - volunteer(s)?
>> Jim, et al.,
>> Do we have a quorum of volunteers (or should we also try to add a person
>> or two)?  In this case, I'm specifically thinking colo box, physical
>> access and associated systems administration stuff (there's also lots
>> that can be done mostly remotely).
>> Anyway, I see some fairly major upgrades due in our near future.
>> Impacted are:
>> SF-LUG:
>> sflug (guest on vicki, hosts [www.]sf-lug.com)
>> vicki (host for the above)
>> BALUG:
>> vicki (noted above, hosts the immediately below)
>> balug-sf-lug-v2.balug.org (guest on vicki, hosts a lot of BALUG
>>    production)
>> aladfar.dreamhost.com. (hosted, will be upgraded/replaced for us, hosts
>>    [www.]balug.org, etc.)
>> Security support for Debian 5.0 "lenny" ends *soon* (2012-02-06).
>> To the extent feasible, we should upgrade the relevant systems soon,
>> preferably before that date, if that's doable, but if not, soon
>> thereafter.
>> Maybe we could plan out the upgrades at an upcoming SF-LUG meeting?
>> Roughly, I have in mind (what I'd like to do):
>> o There isn't any officially supported upgrade path from i386 to amd64
>> o the Silicon Mechanics physical box is and will run amd64/x86_64
>> o the Silicon Mechanics physical box supports hardware virtualization
>> o suitably backup (including on-disk as feasible)
>> o generally prepare for upgrades
>> o do "upgrades" as follows:
>>    o vicki:
>>      o backup / move / "shove" stuff around beginning of disk suitably
>>        out-of-the-way (on-disk backups / access to existing data)
>>      o install Debian 6.0.3 (or latest 6.0.x) amd64, using beginning
>>        area(s) of disks, general architecture layout mostly quite as
>>        before (everything mirrored, separate /boot, rest under LVM2,
>>        separate filesystems, etc.)
>>      o install/configure vicki as above to fully support both qemu-kvm,
>>        and xen.  Note that on amd64, and with hardware virtualization,
>>        that will allow vicki to support i386 and amd64 images under
>>        qemu-kvm and I believe also xen.
>>    o sflug & balug-sf-lug-v2.balug.org:
>>      o once the above vicki upgrades are done, sflug and
>>        balug-sf-lug-v2.balug.org can be dealt with remotely
>>      o sflug & balug-sf-lug-v2.balug.org can each be dealt with
>>        separately by their primary/lead sysadmin(s) as may be desired, in
>>        general for them, I'd probably recommend proceeding as follows:
>>        o get the existing xen guests running again, more-or-less as they
>>          were (may require some adjustments - most notably boot bits) -
>>          may be advisable to convert them to run under qemu-kvm as soon as
>>          feasible (to avoid guest<-->host kernel, etc. interdependencies)
>>        o upgrade guests to Debian 6.0.3 (or latest 6.0.x)
>>        o optional: change guests from i386 to amd64, use above guests
>>          as reference installations, and do an install/merge to get the
>>          guest(s) as desired to amd64 architecture.
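
For quick reference, the hardware virtualization support the plan above relies on can be checked on Linux via the CPU flags (Intel VT-x shows as "vmx", AMD-V as "svm") - a generic check, not specific to vicki:

```shell
# Look for hardware virtualization flags, which qemu-kvm needs;
# on Intel CPUs the flag is "vmx", on AMD it is "svm".
if grep -Eq '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualization flags present"
else
    echo "no vmx/svm flags found (or /proc/cpuinfo unavailable)"
fi
```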
>> bit 'o reference/background ;-) ...
>> THE END* IS NEAR**! *of security support for Debian GNU/Linux 5.0
>> (code name "lenny") **2012-02-06
>> Security support of Debian GNU/Linux 5.0 (code name "lenny") will be
>> terminated 2012-02-06.
>> Debian released Debian GNU/Linux 5.0 alias "lenny" 2009-02-14.
>> Debian released Debian GNU/Linux 6.0 alias "squeeze" 2011-02-06.
>> references:
>> http://lists.debian.org/debian-security-announce/2011/msg00238.html
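
For reference, the core of a lenny -> squeeze upgrade is retargeting APT's sources and dist-upgrading; a hedged sketch (the sources.list entries below are generic examples, not the actual configuration on these hosts):

```shell
# Demonstrate retargeting a sources.list from lenny to squeeze on a
# throwaway copy; the entries are examples only.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
deb http://ftp.us.debian.org/debian/ lenny main
deb http://security.debian.org/ lenny/updates main
EOF
sed -i 's/lenny/squeeze/g' "$tmp"   # retarget every lenny entry
cat "$tmp"
rm -f "$tmp"
# Then, as root on the real system:
#   apt-get update && apt-get upgrade && apt-get dist-upgrade
```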

//or if we present that data a bit differently, to show just how
//identical the partitioning on the two /dev/sd[ab] disks is:
Disk /dev/sd[ab]: 30401 cylinders, 255 heads, 63 sectors/track
Units = sectors of 512 bytes, counting from 0

    Device Boot    Start       End   #sectors  Id  System
/dev/sd[ab]1            63    498014     497952  fd  Linux raid autodetect
/dev/sd[ab]2        498015  35648234   35150220  fd  Linux raid autodetect
/dev/sd[ab]10    318472623 375037424   56564802  fd  Linux raid autodetect
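
As a sanity check, the listing above is internally consistent: for each partition, #sectors = End - Start + 1 (Start and End are inclusive sector numbers), which can be verified mechanically:

```shell
# Verify #sectors = end - start + 1 for each partition shown above.
check() { [ $(( $2 - $1 + 1 )) -eq "$3" ] && echo "ok: $3 sectors"; }
check 63        498014    497952
check 498015    35648234  35150220
check 318472623 375037424 56564802
```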

//excepting the extended partition, all logical and non-zero-length primary
//partitions are paired up between the sda and sdb devices as md
//raid1 devices
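
That pairing is standard mdadm RAID1; a hedged sketch for one pair (device names from the listing above; the create step is destructive, so it is shown commented out):

```shell
# One sda/sdb partition pair assembled as an md raid1 device.  The
# create command is destructive, so it is commented out here:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Non-destructive: report on an existing array, if any:
mdadm --detail /dev/md0 2>/dev/null || echo "no /dev/md0 on this host"
```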

///dev/md0 is used for /boot
sda1 sdb1 md0
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0                241036     57135    171457  25% /boot
//md[1-6] used for LVM
PV Name /dev/md1 VG Name vg00
//before: (and above)
sda2 sdb2 md1 vg00
sda10 sdb10 md7 (unused)
//after:
sda2 sdb2 md1 (unused)
sda10 sdb10 md7 vg00
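
The before/after swap of vg00 from md1 to md7 shown above maps onto the usual LVM migration commands; a hedged sketch (device and VG names taken from the excerpt; the stateful steps are shown commented out):

```shell
# Migrating vg00 from /dev/md1 to /dev/md7 (names from the excerpt
# above).  These steps change on-disk state, so they are commented out:
#   pvcreate /dev/md7          # label md7 as an LVM physical volume
#   vgextend vg00 /dev/md7     # add it to vg00
#   pvmove /dev/md1 /dev/md7   # move all allocated extents off md1
#   vgreduce vg00 /dev/md1     # drop md1 from vg00
#   pvremove /dev/md1          # wipe the PV label; md1 is now unused
# Non-destructive: list physical volumes, if LVM is present:
pvs 2>/dev/null || echo "no LVM tools (or no PVs) on this host"
```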

