[sf-lug] SF LUG meeting of Monday May 20, 2013 & the next day.
Rick Moen
rick at linuxmafia.com
Sat Jun 8 14:03:37 PDT 2013
Quoting Bobbie Sellers (bliss-sf4ever at dslextreme.com):
> I ended up with a manageable / partition but still don't understand
> why it filled up on me.
You used the phrase 'the system had to reformat and copy the contents of
the two relevant directories to the new partitions'. Um...
> Note that I say the system did that for me.
Um, yeah, well, about that....
Those darned 'system' things, always messing you up, right? ;->
It's all the system. The system did it, Your Honour.
Well...
...Welcome to my world, the world of system administration.
Any time you muck about using root authority or its equivalents such
as the most common ways of wielding sudo (let alone slicing and dicing
partitions from a live CD), you need to be really super-careful about
what you're doing, as there is no safety net whatsoever between you and
the countless ways of shooting your system in the foot. You need to
learn to cultivate, in those situations, a Spidey-Sense tingle that goes
off when you are just about to carry out a risky operation, and that
_certainly_ includes just about any method I can think of for moving or
copying major system subtrees.
Mess up too badly when you do that, or stand by as an innocent
bystander while that devilish system 'does things for you' with root
authority and commits some hideous gaffe or other, and you may find that
you have little option but to rebuild.
You may have done something like copy all of /usr somewhere so that
you have it in two places. You know how big all of /usr is on a typical
system? Big. Don't have much of a sense about how big major system
subtrees are, yet? OK, that would be the next problem you ought to fix
before moving on to others.
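(By the way, for a quick sense of scale, a command along the lines of
'du -sh /usr /var /home', run with root authority, prints a one-line,
human-readable total for each named subtree. The paths there are just
illustrative; substitute whatever trees you're curious about.)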
Citing /usr is just an example. There's no way of telling from what you
wrote what the cause is, because you didn't give enough information.
In _different_ circumstances, where a filesystem suddenly becomes close
to full for mysterious reasons and you have a reasonable suspicion it's
because of some runaway _big_ dynamic files, like huge logfiles, huge
corefiles, etc., the following neat little Perl script can be handy:
Here's /usr/local/bin/largest20:
#!/usr/bin/perl -w
# List the 20 largest individual files within the given subtree(s),
# defaulting to the current working directory.
use File::Find;
@ARGV = $ENV{PWD} unless @ARGV;
# Record the size of every plain file encountered during the walk.
find( sub { $size{$File::Find::name} = -s if -f; }, @ARGV );
# Sort filenames by size, descending, and keep only the top 20.
@sorted = sort { $size{$b} <=> $size{$a} } keys %size;
splice @sorted, 20 if @sorted > 20;
printf "%10d %s\n", $size{$_}, $_ for @sorted;
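To try it, point it at the suspect filesystem, e.g.

  largest20 /var

(assuming the script is saved executable at the path shown above), and
the twenty biggest individual files anywhere under /var come back,
largest first.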
(Readers, please note, before you knee-jerk reply with some half-baked
'du' recipe that identifies the largest _subdirectory_ of a given
directory, that the above-cited Perl script does _not_ do that, but
rather finds and lists the largest individual files within a subtree.)
The Perl script is of course trivially hackable to turn it into
/usr/local/bin/largest50 or such, if you wish. Or go to town and make
the integer a command-line parameter, along the lines of the sketch
below.
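For instance, here's a minimal sketch of that parameterized variant; the
name 'largest' and the default of 20 are just my choices, nothing
canonical:

#!/usr/bin/perl -w
# Hypothetical 'largest [N] [dir ...]': list the N largest individual
# files (default 20) within the named subtree(s).
use File::Find;
my $n = (@ARGV && $ARGV[0] =~ /^\d+$/) ? shift : 20;
@ARGV = $ENV{PWD} unless @ARGV;
my %size;
find( sub { $size{$File::Find::name} = -s if -f; }, @ARGV );
my @sorted = sort { $size{$b} <=> $size{$a} } keys %size;
splice @sorted, $n if @sorted > $n;
printf "%10d %s\n", $size{$_}, $_ for @sorted;

Same logic as largest20; the only change is that the cutoff is read from
the command line instead of being hard-coded.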