[sf-lug] interlopers continued
Rick Moen
rick at linuxmafia.com
Tue Jul 24 09:08:31 PDT 2007
Quoting John Reilly (jr at inconspicuous.org):
> You seem to forget about human error. Obviously its because you're
> infallible :)
I certainly have _not_ forgotten about human error. However, this would
have to be a very major error conducted using the root user's authority;
if the machine owner is willing to conduct major network service
activations in error, using root authority, then I think the owner has
much bigger problems. In addition, increasingly on the better Linux
distributions including {U|Ku|Xu|Gu|Edu}buntu, network daemons aren't
even installed by default in the first place, so the "error" would have
to equate to not only loading the gun and shooting it at your foot, but
also first driving across town to buy one.
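For what it's worth, it takes only a moment to verify that on any given
box (a quick sketch; exact option spellings vary a bit between netstat
implementations and distributions):

    $ sudo netstat -tulnp    # listening TCP/UDP sockets and owning processes
    $ sudo ss -tulnp         # same check with the newer ss(8) tool

An empty (or nearly empty) listing means there's simply nothing for a
remote party to talk to.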
> But for most people, they do make mistakes.
Indeed, in a very large world with billions of people in it, I'm sure
there are some who make the "error" of using the root account in a
careless fashion to make the "error" of installing and enabling major
network daemon packages on the public Internet.
Meanwhile, I've only been at this Unix thing for a couple of dozen
years, but have noticed over that time a couple of things: (1) The
most effective way to learn what you're doing in a hurry is to look at
the process table and ask yourself: "What are each of these things?
Did I really want to run each of them? What do they do? How do they
get started? What happens if they aren't running?" (2) One of the
most frequent debugging suggestions made to people having problems with
network services -- both client and server side -- is: "Check
'iptables -L' and the contents of the /etc/hosts.{allow|deny} files, to
see if you've shot yourself in the foot with blocking rules."
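To make that concrete -- these are generic commands, nothing specific
to my system or anyone else's -- the inspection amounts to roughly:

    $ ps axuw                                # (1) what's running, and why?
    $ sudo iptables -L -n -v                 # (2) any packet-filter rules in the way?
    $ cat /etc/hosts.allow /etc/hosts.deny   #     any TCP Wrappers rules in the way?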
> I can't say 100% for sure, but I believe that for the average person
> out there using or trying out linux, they may not know which services
> are safe to have running on an open host or how to configure them so
> that they are safe.
The "average person out there using or trying out Linux" has no reason
ab initio to enable network services at all -- and, with reasonable luck
and suitable choice of distribution, the software required isn't even
present on his/her system. However, when that person _does_ get around
to playing with offering network services, default, blanket filtering
rules at either the host-access (hosts.allow/deny) or iptables/netfilter
level -- or, worse, both -- will cause perplexing and mysterious
failures. I don't have to
speculate about this: I've seen it over and over.
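A typical (hypothetical) example of the failure mode: the owner follows
some "harden everything" recipe that drops a blanket deny into TCP
Wrappers, then months later enables sshd and can't work out why every
connection is refused:

    # /etc/hosts.deny -- left over from a boilerplate "security" recipe
    ALL: ALL

    # Every wrapped service now refuses connections unless explicitly
    # permitted in /etc/hosts.allow, e.g. (address range purely
    # illustrative):
    sshd: 192.168.1.0/255.255.255.0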
> You try and use your own host as an example of the general case when its
> not.
Much as I'd love to join you in taking whacks at that straw man, _no_,
this is simply not the case, and, further, I find it tedious to have to
point out to you what I did _not_ say.
You made a blanket assertion about the general usefulness of (unspecified)
blanket iptables rulesets for Linux hosts. I asked how your claim would
apply to one specific example. You waffled with some lame handwave
about how it would be inappropriate to discuss my system. So, I said,
fine, pretend it belongs to someone else. Or, you see, discuss some
other specific system, and show how your assertion applies there: What
rules, to meet what particular threat, and why?
Apparently, you don't want to talk about a specific system, nor about
what specific threats you believe you're dealing with, nor why your
rulesets -- which you haven't bothered to detail -- are a good measure
against those threats, one whose benefits outweigh the inherent drawbacks.
You're doing none of those things. I don't know why, and it'd be nice
if you actually became specific at some point, rather than merely
reciting vague platitudes about "defence in depth" and playing long
bouts of rhetorical footsie.