[conspire] Utility to rescue formatted EXT3 partition & distribution, choice?

Daniel Gimpelevich daniel at gimpelevich.san-francisco.ca.us
Sat Mar 17 08:57:34 PDT 2007


On Fri, 16 Mar 2007 20:38:52 -0700, Rick Moen wrote:

> networks.  rdiff-backup indeed looks very similar, except being in
> Python and having special provisions for OS X resource forks.  

In 10.4, rsync itself provides that also. I think the flag is -E.
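
For instance, something like this should carry the forks along (I'm going
from memory on Apple's bundled rsync here, so treat it as a sketch; the
paths and hostname are invented):

  rsync -avE ~/Documents/ user@backuphost:/Volumes/Backup/Documents/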

> [KnoppMyth:]
> 
>> I played with it a year or two ago without much success and it doesn't
>> look like it is progressing too rapidly.  First I've got to get a low
>> profile card that is compatible with Linux.  Any suggestions on that score?
> 
> I've only kicked tires from a distance, so far.  There was a "buildfest" 
> in Alameda a couple of years back, when Our Lords in Hollywood seemed
> likely to cram the Broadcast Flag through Congress.  I attended and
> helped a bit:  Of the three (IIRC) people doing MythTV installs, I
> vaguely recall that one was using Fedora, one was something else, and
> Peter Knagg was working with KnoppMyth.  Peter managed to get pretty
> much total success over the 3-4 hours of the 'fest, so that was why I
> suggested it as a starting point.

At last month's SVLUG installfest, Peter was strongly recommending
against the use of KnoppMyth (well, actually against the use of MythTV in
general) in favor of a different piece of software whose name I now can't
recall. If he was your measure of success, that little fact should be of
interest to you, at least.

> It's a "Go away; don't bother us" sort of thing.  Anyway, the same logic
> I was urging would apply on the _stable_ track, too.  That is, if you
> _are_ devoted to running "stable", you wouldn't want to lock
> /etc/apt/sources.list to "sarge":  Sure, doing that would mean you'd be
> running "stable" today, but then after the next release day you'd get
> shuffled off to symlink "oldstable", which at that point would cease to
> receive maintenance updates.

You forgot to weigh that against the advantage of setting it to "sarge"
instead of "stable": if you do so, the change in which release your box
tracks happens solely under _your_ control, rather than under the control
of the Debian release team.
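
Concretely, the difference is one word in /etc/apt/sources.list (the
mirror URL below is just an example; use whichever one you like):

  deb http://ftp.us.debian.org/debian/ sarge main

versus

  deb http://ftp.us.debian.org/debian/ stable main

The first goes on meaning sarge forever; the second silently starts
meaning etch on release day.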

> Some people say:  "But suppose I'm installing ten Debian systems,
> instead of one?  Aren't the 22+ ISOs a good idea, then?"  Not really:
> What you actually want, in that case, is something like a Squid proxy on
> your LAN, so that systems 2 through 10 pull the packages from local
> cache.  Or, if you'll be doing this a great deal, you can easily have a
> partial Debian mirror locally.  (There are tools for this.)

In the presence of such a caching proxy, Debian ISOs of any kind are
categorically superfluous. A single floppy disk is a sufficient physical
medium for installing Debian (for Sarge, anyway), with everything that
would have been on the ISO getting transferred from the proxy instead. On
newer machines that didn't ship with a floppy drive, no physical medium
whatsoever is needed to install any version of Debian or Ubuntu (or, with
version 6.10 or later, also Edubuntu, Kubuntu, or Xubuntu): everything
can come from the proxy.
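
If you're wondering how to aim apt at such a proxy, the usual move is a
line like this in /etc/apt/apt.conf (hostname and port are whatever your
proxy actually listens on; mine are invented):

  Acquire::http::Proxy "http://squid.example.lan:3128/";

Purpose-built caches such as apt-cacher or apt-proxy work too, and
debmirror is one of the tools Rick alluded to for keeping a partial local
mirror.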

>> Bandwidth isn't much of an issue (I have DSL), so I could
>> grab the first DVD image, as well.
> 
> See, the only scenario in which I'd download and burn either one, two,
> or three DVDs of Debian "testing" (or any other Debian) is if I expected
> to be installing to machines with no (or abysmal) Internet access.
> Otherwise, it's smarter to proceed as above.

Note that, although it is smarter, you'll still end up tearing your hair
out if you do it over anything less than a Gigabit Ethernet LAN or, at
the very least, 802.11n. (Not you, Rick. I'm referring to new people
here.)

> There _will_ be much cheering the day Gnash is able to make YouTube
> junkies happy.
> 
> And, honestly, if the only proprietary software you need is the Usual
> Suspects of browser plug-ins, then the natural solution (on
> x86_64-capable CPUs) is an i386-version browser package running with
> i386 support libs on an x86_64 Linux distribution.  It really isn't
> necessary, or desirable, to eschew the x86_64 runtime environment, just
> because you want to run, e.g., the Macromedia Flash plug-in.
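
(On Debian-family x86_64 systems, that usually boils down to something
like "apt-get install ia32-libs" plus an i386 build of the browser; exact
package names vary by distribution, so take that as a sketch rather than
gospel.)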

The other night, I got YouTube almost-working on Ubuntu PowerPC Edition
with a combination of Firefox, Gnash (from current CVS), GreaseMonkey,
MplayerTube, mplayer, and mplayerplug-in. The "almost" was because I
realized way too late that I was using the broken version of
mplayerplug-in (the mozilla-mplayer package), which wouldn't even play
the movie trailers on the QuickTime site.

>> I'm not doing anything mission critical, I'm convinced to just stay on
>> the Testing track with my new machine.
> 
> Consider my little "pinning" trick, to enable optional access to
> "unstable" branch packages, whenever you need them:
> 
> 1.  Add lines for "unstable" to /etc/apt/sources.list.  By itself, this
> would be A Bad Thing, since you'd essentially have overridden the lines
> for "testing", because the unstable-branch packages would always have
> higher or equal version numbers.  That would be functionally equivalent
> to just switching to the unstable track.
> 
> 2.  Create this file as /etc/apt/preferences (or add this as a new
> block, if you happen to have one already):
> 
> Package: *
> Pin: release a=unstable
> Pin-Priority: 50
> 
> (You will want to do "apt-get update" or equivalent, to refresh the
> available-package catalogues.)
> 
> Those three lines in /etc/apt/preferences say "deprecate any package
> from a source whose release string = unstable".  That is, don't fetch
> and install them by default, ever.  (Normal pin priority = 100.)
> 
> To specify the non-default "unstable" branch in apt operations, you just
> add a "-t unstable" qualifier, e.g., "apt-get -t unstable install
> kword".
> 
> Necessary disclaimer:  The "testing" track may get a bit bumpy for a
> month or so following release, because that's when they start permitting
> boat-rocking new versions of key packages into "unstable".  The
> automated quarantining scripts will buffer the "testing" branch
> _somewhat_ from such upheavals, but there's no human being sitting there
> doing quality control:  Only automated quality checks are applied.
> 
> You probably already knew that, but I'm saying it just in case. (Suffice
> it to say that I've kept production servers tied to "testing" through
> several releases.)
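
(For step 1, that just means adding a line like

  deb http://ftp.us.debian.org/debian/ unstable main

alongside your existing "testing" lines, with whatever mirror you already
use.)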

How many times must I repeat this URL?
http://linuxmafia.com/isos/debian/

>> Speaking of server issues, one thing I noticed was that Sidux comes
>> with both SMBFS and CIFS, which I've never seen before.
> 
> Yeah, sorry, can't comment, as I almost never use SMB-type anything.
> 
>> Maybe I should give NFS a try.  I seem to recall that NFS had security
>> issues if you didn't run NIS.
> 
> Bwa-ha-ha.  NIS doesn't fix that aspect of the No Friggin' Security
> network filesystem.

Even in the absence of Windows-type anything, SMB is still Lincoln to
NFS's Douglas.

> Personally, I just rough it, and don't do network filesystems on any
> machine with even the tiniest exposure to public networks.  If I want to
> move files between hosts, it's via scp (or some other file transport
> over an SSH tunnel).
> 
> Samba is a bit perilous, in that regard.  (It's not the Samba guys'
> fault:  It's inherent in the network design.  You should hear Jeremy
> Allison's standard lecture on _that_ subject, which will raise the hair
> on the back of your neck.)

I always think it goes without saying that SMB/NFS/whatever are for LANs
_only_, not for the Internet. However, in either case, sshfs is damn
convenient.
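
For instance (hostname and paths invented for illustration):

  sshfs user@remotehost:/home/user /mnt/remote
  fusermount -u /mnt/remote   # to unmount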




