[sf-lug] how to whack crackers

jim jim at well.com
Tue Jan 6 09:41:52 PST 2009



   recopying an entire filesystem every night is 
not a good idea, just as a matter of form, not to 
mention the hassle factor of managing disk space 
(the time factor seems okay if cron is doing the 
work at night while i sleep). one olde-tyme way 
is to rotate: move oldest to last, older to oldest, 
old to older, and then copy tonight's stuff to old. 
that gives a little hope of reclaiming work that got 
corrupted and then backed up to old. 
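   a minimal sketch of that rotation as a cron-able 
shell script (the backup root and the source dirs 
are my own examples): 

   #!/bin/sh
   # rotate three generations, then take tonight's copy
   B=/srv/backup                      # hypothetical backup root
   rm -rf "$B/last"                   # drop the oldest generation
   [ -d "$B/oldest" ] && mv "$B/oldest" "$B/last"
   [ -d "$B/older" ]  && mv "$B/older"  "$B/oldest"
   [ -d "$B/old" ]    && mv "$B/old"    "$B/older"
   mkdir -p "$B/old"
   cp -a /etc /home "$B/old/"         # tonight's stuff
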
   ls -t is one of my best friends, but automating 
the job of clipping off the newest entries for 
backup seems dicey. 
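   one way to automate it without parsing ls output 
is find -newer against a stamp file (the stamp path 
and target dir are made up): 

   STAMP=/var/local/backup.stamp
   [ -f "$STAMP" ] || touch -d @0 "$STAMP"    # first run: everything counts as new
   mkdir -p /srv/backup/incremental
   find /etc -type f -newer "$STAMP" -print0 |
       cpio -0 -pdm /srv/backup/incremental   # pass-through copy, keeping paths
   touch "$STAMP"
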
   i've considered making a tarball of a directory 
and then submitting it to some version control 
software, letting the version control software 
do the work of storing the diffs. i haven't gotten 
around to trying it. opinions? 
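   one caveat there: a gzipped tarball changes almost 
wholesale on every run (a change early in the stream 
reshuffles all the compressed bytes after it), so the 
version control software can't store a small diff; 
committing the tree itself diffs better. a sketch 
using git (my choice of tool): 

   cd /etc
   git init                                # once
   git add -A .
   git commit -m "snapshot $(date +%F)"
   # note: git keeps the exec bit but not owners or full modes
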
   we do try to record absolute pathnames of files 
we've changed (pretty much config files in /etc/). 
jim 
:wq 



On Mon, 2009-01-05 at 17:52 -0800, Rick Moen wrote:
> Quoting Asheesh Laroia (asheesh at asheesh.org):
> 
> > On Mon, 5 Jan 2009, jim wrote:
> > 
> > >   i cannot figure out a good backup scheme. the one
> > > that copies absolutely everything from certain
> > > directories each night is inelegant.
> > 
> > I disagree; it's magically elegant. Back up the whole filesystem, and then 
> > you know that if you lose that filesystem tomorrow you have a copy of it 
> > for later. On this point I think Rick and I disagree, but to me, the 
> > confidence I get knowing I have the entire filesystem backed up means that 
> > I don't ever have to worry about my backups excluding a file I wanted, nor 
> > about spending time configuring the backups.  Disks are cheap, and Asheesh 
> > worrying is expensive.
> 
> Whatever works, really.
> 
> I happen to have put in a lot of time getting to know what's on my
> system and where everything lives.  Fortunately, Linux systems cooperate
> by being very, very consistent about where things go.  The complete list
> happens to be, in my server's case:
> 
> /root/*                      Root user's home directory
> /etc/*                       System configuration files
> /usr/lib/cgi-bin/*           CGI scripts (omit PHP binaries)
> /var/lib/mysql/*             MySQL database files 
> /boot/grub/menu.lst          GRUB bootloader configuration (just 1 file)
> /var/spool/exim4/*           Exim and SA-Exim internal files
> /var/spool/news/*            NNTP news spool for Leafnode
> /var/spool/mail/*            SMTP mail spool
> /var/lib/mailman/archives/*  Mailing list archives for Mailman
> /var/lib/mailman/data/*      Mailing list state and other data
> /var/lib/mailman/lists/*     Mailing list definitions for Mailman
> /var/lib/mailman/nntp/*      Mailing list NNTP gateway data
> /var/lib/mailman/qfiles/*    Mailing list in-process data
> /usr/local/*                 Locally installed files and records
> /var/www/*                   Public http, ftp, rsync tree
> /home/*                      Non-root users' home trees
> 
> Plus, an export (to a file in /root) of the package database contents: 
> 
>    dpkg --get-selections "*" > /root/selections-$(date +%F)
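> 
> ...which a rebuilt system can read back in later (the dated filename
> is just an example; apt-get dselect-upgrade is the standard follow-up
> to act on the restored list):
> 
>    dpkg --set-selections < /root/selections-2009-01-05
>    apt-get dselect-upgrade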
> 
> ...and partition maps of the two hard drives:
> 
>    fdisk -l /dev/sda > /root/partitions-sda-$(date +%F)   # partition map of sda
>    fdisk -l /dev/sdb > /root/partitions-sdb-$(date +%F)   # partition map of sdb
> 
> ...and a dated snapshot of the all-important /etc tree:
> 
>    tar cvzf /root/etc-$(date +%F).tar.gz /etc
> 
> That's literally everything that cannot be reconstructed (and updated at
> the same time) from trusted Debian package contents -- and it's a great
> deal smaller and quicker to copy.  That's important when you back up
> offsite over a congested aDSL link, for example.
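> 
> A sketch of pushing just those trees offsite with rsync (the host and
> target path are placeholders, and the source list here is abbreviated):
> 
>    rsync -az --relative --delete /root /etc /usr/local /var/www \
>        backup@offsite.example.com:/backups/myserver/
> 
> The -z flag compresses on the wire, which is what you want on a slow
> link; --relative recreates the full source paths under the target.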
> 
> Along the lines of "whatever works": if re-copying the same basically
> irrelevant and seldom-changing files in /usr/bin, /usr/lib, /usr/share,
> etc., to backup media isn't a problem, great.  If the extra time and
> room make it marginal whether you'd bother doing backups at all, then
> that's important.
> 
> The way you know that you've gotten everything is using the only test
> that ever matters for _any_ backup, "whole filesystem" or not:  See if
> you can restore a functional machine using it, starting from bare metal.
> 
> > Okay, so it's for cleanliness, not security? Then use fail2ban to tidy 
> > that up.
> 
> If thy logging offend thee, _reduce_ it.  Or, better yet, just ignore what
> doesn't matter.
> 
> Dunno if this explanation applies, here, but often I see people
> scrambling for things like "fail2ban" because they've just installed a
> log-analysis tool like logcheck, and are all hot and bothered by the
> e-mailed reports, which suggest menacingly that things like dictionary
> attacks against sshd are serious threats.  Those people (of whom I
> speak) then install blocking software, iptables rules, etc., _solely_ 
> to make logcheck's menacing e-mails go away.
> 
> The smarter approach is, of course, to just edit logcheck's
> configuration to make it cease doing a Chicken Little impression every
> time someone rattles the doorknob.
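> 
> On Debian, for example, that means dropping an extended regex into
> /etc/logcheck/ignore.d.server/; something like the following (the
> filename is arbitrary, and the pattern is only a sketch):
> 
>    # /etc/logcheck/ignore.d.server/local-sshd
>    ^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ sshd\[[[:digit:]]+\]: Failed password for (invalid user )?[^ ]+ from [^ ]+ port [[:digit:]]+ ssh2$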
> 
> 
> _______________________________________________
> sf-lug mailing list
> sf-lug at linuxmafia.com
> http://linuxmafia.com/mailman/listinfo/sf-lug
> 




