[conspire] Utility to rescue formatted EXT3 partition & distribution choice?

Rick Moen rick at linuxmafia.com
Fri Mar 16 20:38:52 PDT 2007


Quoting Edmund J. Biow (biow at sbcglobal.net):

> No real data loss, just a few hours of adding programs and configuring
> various files.  Actually, a large dollop of time with that install was
> spent setting up my home directory, which was its own partition.  My
> user preferences survived the reinstallation.
> 
> I'll take a closer look at rsync and rdiff-backup at some point in the
> near future.

Ja, you know, rsync's really darned near all you need.  It's pretty
useful:  "rsync -av [source] [dest]" for copying within a host, 
"rsync -avz [source] [dest]" adds gzip compression for copying across
networks.  rdiff-backup indeed looks very similar, except being in
Python and having special provisions for OS X resource forks.  
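
For instance (hostname and paths made up; note that a trailing slash on
the source means "contents of"):

  # copy a tree locally, preserving permissions and timestamps
  rsync -av /home/user/ /mnt/backup/home/

  # same copy to a remote host over SSH, compressed on the wire
  rsync -avz /home/user/ backuphost:/srv/backup/home/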

> I've got copies of most of those and have even installed a couple of
> stripes of Mint and DreamLinux on various boxen.  Actually, my copies
> may not be in such good shape since I tipped over a glass of whisky on
> my CD rug while trying to hook up a friend with DreamLinux Saturday
> evening, but discs are drying out in my attic and I still have the ISOs
> on my server.  Do you think rotgut scotch will delaminate the surface?

Hmm, I don't think so, but I always just use liquid dishwashing
detergent, which doesn't seem to hurt them and gets most things off.

[KnoppMyth:]

> I played with it a year or two ago without much success and it doesn't
> look like it is progressing too rapidly.  First I've got to get a low
> profile card that is compatible with Linux.  Any suggestions on that score?

I've only kicked tires from a distance, so far.  There was a "buildfest" 
in Alameda a couple of years back, when Our Lords in Hollywood seemed
likely to cram the Broadcast Flag through Congress.  I attended and
helped a bit:  Of the three (IIRC) people doing MythTV installs, I
vaguely recall that one was using Fedora, one was something else, and
Peter Knagg was working with KnoppMyth.  Peter managed to get pretty
much total success over the 3-4 hours of the 'fest, so that was why I
suggested it as a starting point.


> >> I'm having some second thoughts about installing 64 bit anything. 
> >>     
> >
> > If you don't have several gigs of RAM, it doesn't really buy you
> > anything.  With more modest amounts of RAM, you should still go with
> > x86_64 distros _unless_ you're feeding a proprietary-software addiction,
> > which creates problems because those asshats tend to still offer
> > i386-only binaries -- which can still be supported, but pose varying
> > degrees of hassle.
> >   
> This box only has two slots, both filled with gig sticks.  However I did
> notice that the 64 bit version of Sidux did "feel" considerably snappier
> than 32 bit Sidux.  That rather surprised me, I figured it would be an
> "only the benchmark can tell for sure" type of difference.

Yeah, I know it's elusive.  Let's just say that the people who care
_most_ about the x86_64 runtime environment, at this point, are the 4GB+
RAM people.  Pretty soon, though, there won't be any Intel-standard new
machines that can't run _either_ x86_64 or strict-i386 environments:
Even the laptops have switched CPUs, except for the low end.

> Unfortunately, I like my Flash movies.  

Well, here's a security metaphor, then.  ;->

http://www.shoutfile.com/v/v3einrdT/Chasers_Trojan_Horse  (Might have
been taken down if the Australian Broadcasting Corporation paralegals are
cranking out demand letters unusually quickly.)

Flash movies are indeed fun.  Flash is also a serious security exposure,
so you might want to run it behind Flashblock (which, despite the name, 
merely gives you the ability to decide when to run Flash, rather than it
autostarting when you load a page).

> Flash 7 barely worked on Linux.  Pictures or audio sometimes didn't
> play, there were synchronization issues, occasionally flying monkeys
> were emitted by my USB ports.  Flash 9 generally works pretty well.  My
> understanding is that Gnash can play up to version 7 Flash videos (and
> some 8 & 9), but can't handle ActionScript (no YouTube).  Other free
> players top out at SWF v4. 

All true.  Likely to get gradually better, despite conspicuous ongoing
assholedom by the DRM-loving weenies at Adobe, but not today.


[Debian tracks:]
  
> My misapprehension was not immediately dispelled by several previous
> visits to www.debian.org, which really seems to emphasize Stable to the
> exclusion of other varieties.  

Understandable if you remember hacker mindsets.  For years and still
largely today, the consistent line was "If you're not a developer able
to fix your own bugs, stick to Stable." Why?  Simply so the Debian
developers wouldn't be bothered by complaints.  That is, when a newcomer
tried something on unstable, got burned, and said something
uncomplimentary on debian-devel, said newcomer would be told "What part
of 'unstable' did you not understand?"

It's a "Go away; don't bother us" sort of thing.  Anyway, the same logic
I was urging would apply on the _stable_ track, too.  That is, if you
_are_ devoted to running "stable", you wouldn't want to lock
/etc/apt/sources.list to "sarge":  Sure, doing that would mean you'd be
running "stable" today, but then after the next release day you'd get
shuffled off to the "oldstable" symlink, which at that point would cease to
receive maintenance updates.
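
In sources.list terms, that's the difference between these two lines
(mirror URL just an example):

  deb http://ftp.us.debian.org/debian/ stable main
  deb http://ftp.us.debian.org/debian/ sarge main

The first keeps tracking whatever "stable" currently points to; the
second is marooned on sarge the day etch releases.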


> It took following four not-particularly-intuitive links to get to a
> page where I could click on a Testing download.  When I looked at the
> CDIMAGE page last week before downloading SIDUX it mentioned that the
> latest weekly versions were suffering from some sort of apt/archive
> key interaction issue and might fail to install, which enhanced my
> trepidation about playing with Etch at this juncture.
> http://cdimage.debian.org/cdimage/weekly-builds/

Ja.  The fooling around with weekly-build images was, itself, dictated
by package-signing / key-verification problems, you may recall:  The
2006 master signing key expired some time around January, leaving
_slightly_ in the lurch people installing from Etch Release Candidate 1,
which predated that expiration.

You may recall my having mentioned that here, about a month ago, along
with my explaining that the consequences of that are actually more
annoying than alarming:  Etch RC1's CD contents will
install just fine.  The part that would break is the initial
package-update fetches from the Internet, near the end of installation.  

Which, as I pointed out a month ago, is no big deal:  You fetch the 2007
signing key and import it into Debian's keyring (apt-key utility), and
then updates work.  Or, alternatively, you disable package signature
checking.  
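
The key-import step is roughly this (the key ID is a placeholder; use
whatever the 2007 archive key actually is):

  gpg --keyserver subkeys.pgp.net --recv-keys <2007-archive-key-ID>
  gpg --armor --export <2007-archive-key-ID> | apt-key add -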

What it says at the top of
http://cdimage.debian.org/cdimage/weekly-builds/ is probably something
very similar.  Also, they're supposed to be getting out RC2, Real Soon
Now.  Which by design should have fewer quality-control rough edges than
any random weekly build.


> I probably do not need a jigdo of all 22 ISOs in testing, but then
> again, the netinst may not quite give me a complete system with X and a
> window manager, so I gather that the 1 CD KDE image may be a good place
> to start.

Fetching and burning either just disk 1 or netinst (via jigdo, http/ftp,
whatever) is, in my experience, usually the most reasonable compromise
for Debian.
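
If you do go the jigdo route, it's about one command (URL illustrative;
grab the real .jigdo link from the cdimage page):

  jigdo-lite http://cdimage.debian.org/cdimage/weekly-builds/i386/jigdo-cd/debian-testing-i386-CD-1.jigdo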

netinst takes less time to download.  Disk 1 makes better use of media
if, like me, you don't keep mini-CD media around but have a huge pile of
700 MB blanks.

And downloading 22+ full-sized ISOs is just really dumb -- unless you're
just about to be doing a bunch of installations with no network
connectivity.

The above observations always come as a surprise to people new to
Debian.  (They are emphatically true if tracking unstable/testing, a bit
less so if tracking stable.)

To explain:  Whatever you install from CD is going to get updated from
network-accessed package mirrors, starting pretty much immediately.
That is, the CD contents are inevitably obsolete.  (This is the part
that's emphatically true on unstable/testing, less so on stable.)  So,
it's a silly waste of time downloading huge amounts of software that's
just going to get pretty much immediately replaced.  That's why it's 
almost always silly to download and burn those 22+ ISOs -- when you can,
instead, download and burn Disk 1.  

Disk 1 will create a reasonable Debian desktop or server system, even
without any network access at all.  With optional GNOME stuff, I think.
And of course you can efficiently build it up, via
"apt-get"/"aptitude"/whatever fetches from the package mirrors, with any
of the packages you _could_ have installed from the 22+ ISOs -- with the
difference that you'll be fetching _current_ software, rather than a
somewhat obsolete CD snapshot.  (Again, that snapshot software's going
to get replaced the first time you do a package update, so installing it
is doubly silly.)
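
In practice, that's just, e.g. (package picked at random):

  apt-get update           # refresh the catalogues
  apt-get install kword    # current version, straight from the mirror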

Some people say:  "But suppose I'm installing ten Debian systems,
instead of one?  Aren't the 22+ ISOs a good idea, then?"  Not really:
What you actually want, in that case, is something like a Squid proxy on
your LAN, so that systems 2 through 10 pull the packages from local
cache.  Or, if you'll be doing this a great deal, you can easily have a
partial Debian mirror locally.  (There are tools for this.)
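
For the Squid variant, the only client-side wrinkle is telling apt to
use the proxy, e.g. in /etc/apt/apt.conf (host/port are whatever your
LAN uses):

  Acquire::http::Proxy "http://squidbox.example.com:3128/";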

> Bandwidth isn't much of an issue (I have DSL), so I could
> grab the first DVD image, as well.

See, the only scenario in which I'd download and burn either one, two,
or three DVDs of Debian "testing" (or any other Debian) is if I expected
to be installing to machines with no (or abysmal) Internet access.
Otherwise, it's smarter to proceed as above.



> I followed SID via Kanotix for a couple of years until the machine that
> it was on was mothballed last fall.  I had to restrain my desire to do a
> apt-get upgrade every time I booted or I would have been downloading
> probably a gig or two a month, plus, on that old PIII 550 installing the
> new software took forever if I didn't do it regularly. Once or twice I
> had some serious breakage, but generally things cleared up in a day or
> two with a few more updates.

Typical story, for "unstable".

> Eventually the machine just felt too pokey even though it had 384 MB
> of RAM.

See, that's just a matter of desktop configuration.  


> My only real experience with Etch was Mepis 3.3, but though it was a
> snapshot of Etch, it didn't evolve with it, and at some point it fell
> too far behind Etch.

My personal view is that Warren Woodford's a bit of a screw-up -- and
didn't really know what he was doing.


[Sidux:]
     
> Actually, the video on the 32 bit version of Sidux worked very nicely. 

Glad to hear.

> Good to know.  I'm actually going to try to shun all proprietary
> software on my soon-to-be new 64 bit Etch installation and see how far I
> get with it.  I'll just Gnash my teeth.

There _will_ be much cheering the day Gnash is able to make YouTube
junkies happy.

And, honestly, if the only proprietary software you need is the Usual
Suspects of browser plug-ins, then the natural solution (on
x86_64-capable CPUs) is an i386-version browser package running with
i386 support libs on an x86_64 Linux distribution.  It really isn't
necessary, or desirable, to eschew the x86_64 runtime environment, just
because you want to run, e.g., the Macromedia Flash plug-in.


> I'm not doing anything mission critical, so I'm convinced to just stay
> on the Testing track with my new machine.

Consider my little "pinning" trick, to enable optional access to
"unstable" branch packages, whenever you need them:

1.  Add lines for "unstable" to /etc/apt/sources.list.  By itself, this
would be A Bad Thing, since you'd have overridden the lines for
"testing":  the unstable-branch packages would always have higher or
equal version numbers, so that would be functionally equivalent to just
switching to the unstable track.
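
The added line would look something like (mirror illustrative):

  deb http://ftp.us.debian.org/debian/ unstable main contrib non-free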

2.  Create this file as /etc/apt/preferences (or add this as a new
block, if you happen to have one already):

Package: *
Pin: release a=unstable
Pin-Priority: 50

(You will want to do "apt-get update" or equivalent, to refresh the
available-package catalogues.)

Those three lines in /etc/apt/preferences say "deprecate any package
from a source whose release string = unstable".  That is, don't fetch
and install them by default, ever.  (Sources without an explicit pin
default to priority 500; anything under 100 is never auto-installed.)

To specify the non-default "unstable" branch in apt operations, you just
add a "-t unstable" qualifier, e.g., "apt-get -t unstable install kword".
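
You can sanity-check that the pin took with "apt-cache policy", which
should show the unstable-branch candidate sitting at priority 50:

  apt-cache policy kword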

Necessary disclaimer:  The "testing" track may get a bit bumpy for a
month or so following release, because that's when they start permitting
boat-rocking new versions of key packages into "unstable".  The
automated quarantining scripts will buffer the "testing" branch
_somewhat_ from such upheavals, but there's no human being sitting there
doing quality control:  Only automated quality checks are applied.

You probably already knew that, but I'm saying it just in case.
(Suffice it to say that I've kept production servers tied to "testing"
through several releases.)

> But I intend to keep my little Via C7 server running stable.  It has
> been a champ.  Anybody know offhand if stable will automatically update
> my kernel from 2.4 to 2.6?  Should I do that before or after I do the
> "apt-get dist-upgrade", or should I just keep running 2.4?

Stable should never take you from one kernel version to another.  In
fact, Debian as a whole shouldn't:  That's because there's not a package
called "kernel" (or equivalent).  Instead, you will generally have a
package called something like "kernel-image-2.4.27" installed.  Absent
some explicit request to fetch and install something later, you will (in
that hypothetical) only get newer iterations of package
"kernel-image-2.4.27".  E.g., you might move from
kernel-image-2.4.27-2-686 to kernel-image-2.4.27-3-686 -- from the
second Debian packaging of that kernel as an i686 binary to the third.

You can certainly look through the available packages and say, for
example, "apt-get install linux-image-2.6.18-4-686" (or "kernel-image-2.6-686"
to get the virtual package that always points to the latest 2.6.*).
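
To see what kernel packages your mirrors are offering:

  apt-cache search ^linux-image      # current naming
  apt-cache search ^kernel-image     # the older naming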


> Speaking of server issues, one thing I noticed was that Sidux comes with
> both SMBFS and CIFS, which I've never seen before.  

Yeah, sorry, can't comment, as I almost never use SMB-type anything.

> Maybe I should give NFS a try.  I seem to recall that NFS had
> security issues if you didn't run NIS.

Bwa-ha-ha.  NIS doesn't fix that aspect of the No Friggin' Security
network filesystem.

Personally, I just rough it, and don't do network filesystems on any
machine with even the tiniest exposure to public networks.  If I want to
move files between hosts, it's via scp (or some other file transport
over an SSH tunnel).
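
E.g. (host and paths made up):

  # recursive copy over SSH, with -C compression for slow links
  scp -r -C /home/user/photos otherhost:/srv/incoming/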

Samba is a bit perilous, in that regard.  (It's not the Samba guys'
fault:  It's inherent in the network design.  You should hear Jeremy
Allison's standard lecture on _that_ subject, which will raise the hair
on the back of your neck.)

-- 
Cheers,    "Cthulhu loves me, this I know; because the High Priests tell me so!
Rick Moen   He won't eat me, no, not yet.  He's my Elder God, dank and wet!"
rick at linuxmafia.com



