[conspire] How do I determine what I need to keep from my internal hard drive and what is the recommend way to move things to an external hard drive
nick at zork.net
Mon Sep 6 01:37:15 PDT 2010
> Thank you - I'm using the rsync command with the 'z' option too.
The 'z' option turns on gzip compression between the sending and
receiving rsync processes. Over a remote connection this can help
throughput, but if you're just syncing between two local disks, the
compression/decompression on the same system will just add a wee bit of
overhead.
> > 2. Snapshot of your partition layout.
> I used "parted -l" since I used parted to create the partitions. I
> like the naming scheme of including the date.
I should note that for most desktop machines the partition information
is largely irrelevant. If you've been playing games with separation of
concerns, that's fine. This used to be mandatory in the old days, but
journaling filesystems give me the confidence to make my desktop One Big
Partition and swap, so I can just accept installer defaults when I
install.
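If you do want the date-stamped partition snapshot mentioned above, a
one-liner along these lines works; the filename convention is just an
example, and parted needs root to read the disks:

```shell
# Date-stamped snapshot of the partition layout (run as root).
# The filename pattern is only a suggestion.
snapshot="partition-layout-$(date +%Y-%m-%d).txt"
if command -v parted >/dev/null 2>&1; then
    parted -l > "$snapshot" 2>/dev/null || true
fi
echo "$snapshot"
```

"fdisk -l" works just as well if you prefer its output format.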
Here's what I once wrote to a co-worker at one job or another, in
reaction to a document she had written describing how to partition your
disks:
I can't recall where this is documented, but I believe that at
UC Berkeley the systems they were developing BSD Unix on had two
kinds of disks: a 2MB fixed-head disk (imagine a thin ring) with
very few moving parts, and a standard-for-the-time removable
disk pack drive (imagine a top-loading washing machine where you
loaded large copper platters onto the agitator).
So they used the fast fixed-head ring-shaped disk for the most
core OS files, and used the slower large-volume disk packs for
everything else, mounting whole disk packs onto /home and /usr
and /var and wherever else they were likely to need lots of space.
Notice that this is something of the inverse of what you
advised: instead of mounting multiple whole volumes onto the
filesystem at various points, you're asking them to carve up a
single volume into smaller spaces.
For many years, this sort of carving-up persisted among the free
PC Unixes (Linux, FreeBSD, etc) I think for a few reasons:
1. Early home Linux users tended to copycat the Big Unix
systems they had access to. The Big Unix SysAdmins
used to find this obnoxious and amusing in turns, but
some practices may just be cargo-cult imitation.
2. Many systems for accounting and limitation (for
example, user disk-space quotas) are per-partition,
and work best when you have one region of your
filesystem configured for them but leave the rest alone.
3. Through the 1990s, the main Linux filesystems (ext2
in particular) were not very crash-resistant, and
could become corrupted if the system was not shut
down cleanly. If a partition was in the middle of
being written to when the power went out, the fsck
filesystem check could take a very long time on next
boot. The repairs might also result in lost files or
chunks of file left in the lost+found/ directory
instead of where they were meant to be.
The logic then went that if you separate the
filesystems that are likely to have lots of writes
(/var, /home etc) from the filesystems that are most
important for correct system operation (/etc, /usr,
/lib, /bin, etc) you can localize any damage.
Further, the smaller individual partitions will have
a shorter overall time required for the fsck program
to inspect and repair them.
4. Die-hard Linux home users like to try different
distributions out to experiment with them, and
keeping /home separate makes it easy to boot into a
new system but still have all your firefox bookmarks
and thunderbird passwords and all that good stuff.
There were also other tricks available. I myself used to mount
/usr read-only (but had to remount it read-write every time I
upgraded software, which got annoying) and mounted partitions
like /var and /home with options that restricted what users
could put on them (while leaving system partitions that relied
on these features alone). But I think the above four points
were the most common.
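Those tricks map directly onto /etc/fstab mount options. A sketch of
what such a layout might have looked like (the device names and exact
option choices here are illustrative, not the author's actual config):

```
# /etc/fstab sketch -- devices and options are illustrative.
/dev/sda2  /usr   ext3  ro                    0  2
/dev/sda3  /var   ext3  nodev,nosuid,noexec   0  2
/dev/sda4  /home  ext3  nodev,nosuid          0  2
```

With /usr read-only, every software upgrade means
"mount -o remount,rw /usr" beforehand and
"mount -o remount,ro /usr" afterwards, which is exactly the annoyance
described above.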
So let's look at these factors in order:
1. This is clearly not a good reason to do anything, and
can be safely ignored.
2. This issue has some merit, but it's been a very long
time since I've seen a system with quotas enabled.
They were somewhat invasive last time I used them,
and more trouble than they were worth. Even at $FIRM
we just monitor disk space on all our systems and do
analysis of worst offenders when we reach a certain threshold.
3. I think this is where the biggest change has taken
place. Journaling filesystems such as ext3 and xfs
(but mostly ext3) have made this sort of precaution
far less relevant. Journaling filesystems write
their data to a temporary staging area called a
journal, and checkpoint those changes into the actual
filesystem structure at regular intervals. If the
process is interrupted, the system has enough of an
audit trail to revert any half-committed changes and
repair the filesystem in a matter of seconds during boot.
It is this change alone that caused me to stop
carving up my own filesystems into little chunks. I
feel far more confident in ext3's ability to keep my
data intact without any noticeable performance penalty.
4. This is something you hinted at in $DOCUMENT. This
is probably good advice for desktop users who want to
experiment with lots of different distributions, but
I'm not sure it's the kind of suggestion we want to
plant in $SOFTWARE server administrators' heads:
"Also, this will let you ditch
$SUPPORTED_DISTRIBUTION for $COMPETITOR more easily!"
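For what it's worth, the separate-/home arrangement in point 4 comes
down to one extra fstab line. A sketch for desktop experimenters (device
names are made up):

```
# /etc/fstab sketch: /home on its own partition, so a fresh distro
# install onto the root partition leaves user data untouched.
/dev/sda1  /      ext3  defaults  0  1
/dev/sda2  /home  ext3  defaults  0  2
/dev/sda3  swap   swap  defaults  0  0
```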
None of the $FIRM servers carve up disks into partitions beyond
making an ext3 / partition and a swap partition. We keep all
Web site data under /srv (see the Filesystem Hierarchy Standard's
section on that directory) rather than /var, and when we need more
space we
usually add it by putting a new disk (or RAID, more frequently)
in as /srv or a directory under /srv somewhere.
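That growth pattern is a single new mount point rather than a
repartition. A sketch of the fstab side (the device name is an
example for a software RAID, not one of $FIRM's actual machines):

```
# /etc/fstab sketch: a new disk or RAID array mounted wholesale
# at /srv. /dev/md0 is an illustrative md-RAID device name.
/dev/md0  /srv  ext3  defaults  0  2
```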
So that's my biggest philosophical break with $DOCUMENT. I
really think the benefits of partitioning small don't outweigh the
hassles it causes.
You are not entitled to your opinions.
More information about the conspire mailing list