by Rick Moen
Shortly before covering security analyst Craig Ozancin's LinuxWorld Expo session on Linux security, I wandered into the Geek Bowl quiz competition in progress. Through an odd bit of synchronicity, the two events segued rather nicely: One of the many questions the panelists blew completely was "In the movie Tron, the lead character Flynn's voice was provided by what actor?"
I told Eric Raymond we'd have to send him back to the geek re-education camps, for missing that one. But it helped put me in the proper frame of mind for a security panel -- once you correct the movie's minor flaw of depicting the wrong side as the heroes. In a nutshell, you (the system administrator) are in the role of the villain in that computerist's classic, the Master Control Program. Your problem: How do you keep out Jeff Bridges (the outside attacker)?
Ozancin's talk dwelled at length on the methods and tools an attacker (a term he advocates over "hacker") uses to select you as his target, and worm his way in. The attacker collects information on vulnerable systems and networks from unwary individuals ("social engineering"), public DNS listings, ranges of telephone numbers listed on company Web sites, and other public information -- or his target may be a user whose on-line identity he follows from system to system. (Perhaps this user employs the same password on all systems he uses.) Having picked a target, he remotely probes accessible systems and services, sweeping IP-network ranges with ping and probing hosts with port scanners (nmap, strobe, Cheops) to learn what OSes and OS versions are present, and what attackable services they run. The attacker may also use specialized network-vulnerability scanners: Nessus, the older SATAN and SAINT packages, Firewalk (which probes and identifies a network's firewall ruleset), or proprietary scanners such as Internet Security Systems' Internet Scanner and Axxent Technologies' NetRecon -- as well as checking Web sites on the target network for known-exploitable CGI scripts.
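To make the mechanics concrete, the simplest kind of probe a port scanner performs -- a TCP "connect" scan -- can be sketched in a few lines of Python. This is a modern illustration of the principle, not anything shown in Ozancin's talk, and the function name is mine:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on which `host` accepts a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real scanners such as nmap go much further: SYN ("half-open") scans that never complete the handshake, OS fingerprinting from protocol quirks, and timing games to evade detection.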
Or the attacker may skip the fancy network scanners, and concentrate on stealing one of your passwords. In my experience, this is the bad guys' usual way in, and absurdly easy on most systems. If one of your users uses telnet, or (non-anonymous) ftp, or POP3 to reach your system remotely, the user's login name and password can be snagged with trivial effort at any point between the two machines. Alternatively, the malefactor may use as low-tech a means as "shoulder surfing" (watching the login as it's being typed in), or a variety of "social engineering" techniques: People are often astonishingly willing to give their passwords over the telephone to a stranger with a plausible reason for asking. Or they e-mail passwords and other confidential data across the open Internet, ripe for interception. At the minimum, the attacker may telephone the firm to glean people's names and positions, or get this information from the company Web pages: He may then be able to predict valid usernames, and try them with likely password combinations.
Then there are the truly embarrassing password practices that amount to walking into an open, unguarded bank vault. There are still services that ship with default remote administrative passwords, as evidenced by Red Hat Software's recent Piranha gaffe, as well as sites reckless enough to use null passwords, the username as the password, or the username reversed (e.g., "toor" for the root account). Or the attacker may use remote techniques to read a copy of /etc/passwd (on systems without shadow passwords enabled). Many such past exploits have relied on insecure CGI scripts provided by default with Web servers that are also unnecessarily running with root authority. (The Apache Web server most commonly used on Linux no longer ships with either of those faults.) Any attacker who can grab an un-shadowed password file has hit the jackpot, because he can then "crack" your passwords in private, at his leisure. This is done by automatically encrypting large lists of words in various permutations, and comparing the "crypted" versions against the target password entries, looking for matches. The traditional tool for this task, crack, now has a next-generation replacement, John the Ripper, with better performance and a broader reach of target passwords. But the real clincher is the advent of distributed password-crackers such as mio-star, saltine-cracker, or slurpie, which can make entire networks of machines work cooperatively on cracking your password file via these "dictionary attacks".
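The core of such a dictionary attack is simple enough to sketch: hash every candidate word plus its common permutations, and look for matches among the stolen hashes. This illustration uses plain MD5 from Python's hashlib for brevity; real Unix password hashes add a per-user "salt" precisely to make this kind of bulk precomputation more expensive:

```python
import hashlib

def crack(stored, wordlist):
    """`stored` maps hash -> username; returns {username: recovered_password}."""
    found = {}
    for word in wordlist:
        # The word itself, plus a few permutations crackers routinely try.
        for candidate in (word, word.capitalize(), word[::-1], word + "1"):
            digest = hashlib.md5(candidate.encode()).hexdigest()
            if digest in stored:
                found[stored[digest]] = candidate
    return found
```

Note that a root password of "toor" falls instantly, since every cracker tries usernames reversed -- which is why the vault-door examples above are so embarrassing.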
Why all this firepower concentrated on cracking your password files? Because, once the attacker is on your machine, posing as a legitimate shell user, vastly greater avenues towards total control of your machine (root access) beckon: He can attempt this through manipulation of any of your system's privileged programs, instead of just those advertising remote network services. This is what I call Moen's First Law of Security: "It's easier to break in from the inside."
If the attacker is not able to pose as a legitimate user, then his avenues of attack are more limited but still numerous. Every month, there are security advisories of new holes in network software, more often than not in the form of buffer overflows: examples of poor input validation that permit running attacker-specified code as if it were part of the program, abusing its authority. Some overflow-based attacks directly open shells or other direct access mechanisms for the attacker; others act more indirectly by yielding the contents of /etc/passwd or /etc/shadow, creating a new account, changing the password of an existing account, creating a custom .rhosts file, and so on.
However, regardless of whether your attacker entered via the front door or the back, his next priority after gaining root access is to cover his tracks, preventing the administrator from noticing his presence and locking him out. He'll do this by sabotaging the system logs and accounting software, disabling any security-monitoring software, and installing trojan-horse ("trojaned") software to conceal his activities, gain additional intelligence, and create backdoors in case he needs another way in.
The trojaned software usually includes replacement binaries for the genuine login, netstat, ps, ifconfig, du, df, ls, top, syslogd, tcpd, locate, and various servers run by the inetd superserver. The aim is to hide the attacker's tools, logs, and processes, so they are invisible to the legitimate root user.
And Tomorrow, the World!
Some of those processes will be spy programs, running to capture login information entered by local users for remote systems elsewhere. Those will be logged and conveyed back to the attacker, giving him new targets. Some may be "network sniffers", monitoring the traffic passing nearby, to or from other nearby machines, and likewise capturing private information for the bad guys. Those work by putting your network interface in "promiscuous mode", in which the normal disregarding of other machines' network traffic gets disabled. Some may be clandestine network services such as file-swapping, useful for the attacker and his friends. Most distressingly of all, some may be carrying out attacks on other systems. The older variety of these involved flooding distant machines with either normal or deliberately malformed network traffic (ping, ping of death, smurf, SYN flooding, teardrop, land, bonk), as a "denial of service" (DoS) attack. Then starting last year, the more-organized "DDoS" tools (trinoo, Tribal Flood Network, stacheldraht, Trank, and so on) came to sudden public attention, when they were used to overwhelm popular Internet sites. The third-party, subverted machines ("zombies") used to carry out those attacks appear to have been university machines, favoured for their lax security and high Internet bandwidth, but your Linux hosts could be the attackers' next tools.
Even if your machines don't cause you that order of embarrassment, the other risks are equally grim: You can reveal confidential data with business and/or personal consequences, lose that data entirely, see it corrupted or sabotaged, be involved in wrongful or even criminal activity, lose access to your computing resources, and indirectly cause harm to your staff and business associates. Your Web site can be defaced or modified, or visitors might be redirected by sabotaged company DNS servers to different sites entirely.
What Would the Master Control Program Do?
As Ozancin pointed out, to prevent, detect, and recover from such attacks, your first step is to think like an attacker. Spend some time exploring your network with Nessus, nmap, and Firewalk, discovering its vulnerabilities as if you were an outsider peeking in. Set John the Ripper loose on your password files, to discover any trivial-to-break passwords with which your users are damaging your security posture. Subscribe to the security-alert mailing list for your Linux distribution. Install one or more security-checking packages (LIDS, LogCheck, Tripwire, HostSentry), or simply generate and store (off-system) MD5 checksums for all critical system files.
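That checksum baseline can be as simple as the following sketch -- a hand-rolled stand-in for Tripwire, with function names of my own invention. The crucial part isn't the code but where you keep its output: the baseline must live somewhere the attacker cannot rewrite it.

```python
import hashlib

def file_md5(path):
    """MD5 digest of a file, read in chunks so large files don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(paths):
    """Baseline checksums for critical files; store the result off-system."""
    return {p: file_md5(p) for p in paths}

def changed(baseline):
    """Files whose current checksum no longer matches the stored baseline."""
    return [p for p, digest in baseline.items() if file_md5(p) != digest]
```

Run snapshot() over /bin/login, /bin/ps, and the other binaries attackers love to trojan, immediately after a clean install; any later nonempty result from changed() deserves urgent attention.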
Disable all network services you're not sure you need (if you're wrong, you'll find out), including those in /etc/inetd.conf, and the same for CGI scripts on Web servers. (Never place scripting executables such as the perl interpreter in your cgi-bin directory.) If you wish to leave the user-information service "finger" running, make sure it's not one that lists all logged-in users if you run "finger @hostname" (substituting your machine's name for "hostname"). Stay current on security-related revisions, especially for the network services you leave enabled. The foregoing measures are probably the second most valuable precautions you can take.
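Disabling an inetd-managed service is a one-line change: comment out its line in /etc/inetd.conf and signal inetd to re-read the file. The service names below are typical defaults; your file may differ:

```
# /etc/inetd.conf -- a leading '#' disables the service on that line
#telnet  stream  tcp  nowait  root    /usr/sbin/tcpd  in.telnetd
#shell   stream  tcp  nowait  root    /usr/sbin/tcpd  in.rshd
#finger  stream  tcp  nowait  nobody  /usr/sbin/tcpd  in.fingerd
```

Then "killall -HUP inetd" makes the change take effect without a reboot.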
The most valuable would involve password policies. You'll want to always use shadowed passwords, for starters. The utility "pwconv" will switch you over to those (populating /etc/shadow, and removing all passwords from /etc/passwd), if you aren't running shadowed already. This denies attackers the world-readable hash list that dictionary attacks feed on -- though it's a complement to good password choices, not a substitute for them.
You'll also want to set a minimum password length. Most Linux distributions require five or six characters, minimum, but Ozancin suggests changing this to require a full eight characters. Since most Linux systems, these days, use a Pluggable Authentication Module (PAM) security architecture, the minimum length can usually be set easily in the /etc/pam.d/passwd configuration file.
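On a PAM-equipped system, that change is typically one line in the relevant /etc/pam.d file. The exact module names and options vary by distribution, but the common pam_cracklib form looks like this:

```
# /etc/pam.d/passwd -- require an 8-character minimum, allow 3 tries
password  required  pam_cracklib.so  retry=3 minlen=8
password  required  pam_unix.so      use_authtok md5 shadow
```

The first line vets the proposed password; the second actually records it (your distribution may use pam_pwdb.so instead of pam_unix.so).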
In addition, you should consider avoiding plaintext-password network services: The POP3, ftp, and telnet daemons pose a special risk because their passwords pass unencrypted across the open network, sniffable by any nearby machine along the way. SSH (the secure-shell suite) and stunnel can replace or protect the vulnerable protocols, and you can use SSL encryption for any sensitive Web-based information. For best protection, Ozancin recommends "two-factor authentication": adding an additional security mechanism to the password one, such as a "smart card" or an encryption key-pair. Users should be encouraged (and equipped) to encrypt any sensitive e-mail using PGP or GNU Privacy Guard. Also, given the possibility of network sniffing, use switched ethernet rather than shared hubs wherever possible, to isolate traffic (thus minimizing the amount of sniffable information).
Ozancin wagged his finger at the Sendmail mail transfer agent (MTA) as the cause of many past security exploits, including but not limited to buffer overflows, which is true but slightly unfair, as Sendmail has had a much longer history than most MTAs, and a clean one for quite some time, now. However, his point about the program's slightly risky, monolithic design is well taken, and cautious sites may wish to adopt Postfix (which is open-source licenced) or Qmail (which has an almost open-source licence).
Truly paranoid administrators may also want to run their Web server programs in a chroot (artificial root) environment, as a precaution against buffer overflows, misbehaved CGI scripts, and the like. Ozancin warns that the minor security gain such a setup provides may not justify its administrative and maintenance overhead.
As a security-tightening measure on individual Linux boxes, Ozancin recommends reviewing security-sensitive files, especially ones installed set-UID or set-GID to run with the root user's authority (or equivalent). He recommends that security-sensitive files be made unreadable by ordinary users, and removing the SUID/SGID bits where they are not needed. I can say from personal experience that this recommendation must be approached with caution: Keep good records of what you change, as you may find things unexpectedly breaking from such efforts to tighten security post-installation.
Ozancin also mentioned the possibility of dedicated monitoring hosts. One such machine might be a dedicated "loghost", to which the syslog daemons on your other machines report their operations. Ideally, this machine would have no network connection (as it would be a prime break-in target), and would receive its reports only via null-modem serial cable. The other type would be network-based intrusion-detectors, such as a machine running Marcus J. Ranum's Network Flight Recorder -- as opposed to host-based detectors such as Tripwire. I have my doubts about network-based intrusion detectors, as their ability to reassemble and analyze packets in real time is going to be strained, with any reasonable underlying hardware -- but they have their proponents.
Last, and lest we forget, Linux can be firewalled at several levels. Machines inside your network can be partially concealed through IP masquerading (the version of Network Address Translation most used in Linux). Basic filtering of traffic allowed and disallowed at the network interfaces can be done (with Linux 2.2 kernels) using the related IP Chains rules, perhaps building the firewall rulesets using Mason, and allowable traffic can be specified at the level of individual services through the TCP Wrappers control files in /etc, hosts.allow and hosts.deny. The truly paranoid may elect to use the 2.4 kernels' Netfilter facility (adding stateful packet filtering), or a commercial application-level proxy gateway. And traffic between geographically separated company networks can be routed through a virtual private network (VPN) tunnel, instead of being exposed to the open Internet.
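The TCP Wrappers half of that can be as terse as a default-deny pair of files. The address below is a placeholder for your own internal network:

```
# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- then permit only what you mean to offer
sshd:     192.168.1.0/255.255.255.0
in.ftpd:  LOCAL
```

Because hosts.allow is consulted first, this arrangement fails closed: any service you forget to list is simply unreachable.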
Ozancin's talk was fairly comprehensive, and typical of such security talks in its emphasis: It focussed almost entirely on prevention and detection -- and mostly on prevention.
Prevention and detection are of course very good things, but ideally should be part of a better-rounded effort at risk-assessment and management. That should include damage-reduction (what is at risk?), defence in depth (how can we avoid having all our eggs in one basket?), hardening (e.g., jumpering the SCSI drives read-only for some filesystems, and altering ethernet hardware to make promiscuous mode impossible), identification of the attackers, and recovery from security incidents. Explicit security policies, security auditing, the design and testing of backup systems, automatic and manual log analysis, handling of dial-up access, physical security for the network, the special problems posed by laptop users, security training & documentation, and disaster recovery and costing are necessary parts of such an effort.
After all, if the Master Control Program had only been better at risk management, the movie might have had a happy ending.
 Among nmap's interesting features is the ability to estimate your likelihood of successfully predicting TCP sequence numbers on a target system, a method described by Steven Bellovin in 1989 and reportedly used by Kevin Mitnick in 1994 to remotely take over security consultant Tsutomu Shimomura's Unix host. Ozancin used nmap to show that such guessing is thousands of times more difficult against a remote machine running a generic Linux 2.2 kernel than against one running MS Windows NT 4.0 with the latest service pack.
nmap can also operate in "decoyed" mode, in which a high percentage of the probe packets purport to come from elsewhere entirely. One series of probes against Pentagon systems, in December 1998, seemed to originate from addresses all over Russia, but is now thought to have involved a single university machine running nmap.
 In accordance with Moen's Second Law of Security: "A system can be only as secure as the dumbest action it permits its dumbest user to perform."
 The rule of thumb in security is that you can maximize security through a three-factor approach: something you know, something you have, and something you are. Passwords typify "something you know", smart cards are an example of "something you have", and biometric scanning techniques exemplify "something you are". The latter may seem science-fictional, but I saw a mouse device that incorporates a fingerprint reader at the August 2000 Stanford Cypherpunks meeting: They are in mass-production, as you read this.
 Linux's 2.4 kernel series is planned to provide for a "capabilities model" in which processes can reach only resources their roles require, but the exact way this should be done is still being hotly debated.
Copyright (C) 2000 by Rick Moen, email@example.com.
Article first appeared in LinuxWorld.com.