The most recent version of these essays can be found at http://linuxmafia.com/~rick/faq/.
("That's what you get for swimming in the shallow end of the gene pool.")
Economy of expression is a good thing. So, rather than have to repeat myself continually, I'm posting my top rants here, for ready reference. Many of you (readers) will be visiting today because I pointedly referred you to the "#"-tagged URL of some particular item, below.
Table o' Contents
Virus . . .
- Should I get anti-virus software for my Linux box?
- But didn't security expert Simson Garfinkel say that all Linux systems need virus checkers?
- Doesn't the rise of Linux worms show that Linux now has a virus problem?
- Isn't Microsoft Corporation's market dominance, making Linux an insignificant target, the only reason it doesn't have a virus problem?
- But how can you say there's no virus problem, when there have been several dozen Linux viruses?
- Linux Tire-Kicking . . .
- Proprietary Warez . . .
- Hardware . . .
- Netiquette . . .
- Crybaby . . .
- MacLinux . . .
- Miscellany . . .
Here's the short version of the answer: No. If you simply never run untrusted executables while logged in as the root user (or equivalent), all the "virus checkers" in the world will be at best superfluous; at worst, downright harmful. "Hostile" executables (including viruses) are almost unfindable in the Linux world — and no real threat to it — because they lack root-user authority, and because Linux admins are seldom stupid enough to run untrusted executables as root, and because Linux users' sources for privileged executables enjoy paranoid-grade scrutiny (such that any unauthorised changes would be detected and remedied).
Here's the long version: Still no. Any program on a Linux box, viruses included, can only do what the user who ran it can do. Real users aren't allowed to hurt the system (only the root user can), so neither can programs they run.
Because of the distinction between privileged (root-run) processes and user-owned processes, a "hostile" executable that a non-root user receives (or creates) and then executes (runs) cannot "infect" or otherwise manipulate the system as a whole. Just as you can delete only your own files (i.e., those you have "write" permission to), executables you run cannot affect other users' (or root's) files. Therefore, although you can create (or retrieve), and then run, a virus, worm, trojan horse, etc., it can't do much. Unless you do so as "root". Which it's simple to avoid doing.
The first "virus" (arguably, actually a trojan or worm) for Linux was named "Bliss", created in September 1996 as a proof of concept. If a user executes an infected executable, the viral code appends itself to all executables for which the user has write permission. But thereafter, it can't go anywhere else or do anything else — and cannot take over (infect) the local machine (or any other): It lacks permission to do so. Nor can the other Linux/Unix viruses / worms / trojan horses thus far known. And claims of "Bliss" infections outside deliberate lab-only deployment by virus researchers are, in point of fact, considered suspect. New Linux viruses (such as Simile.D) emerge continually, too. But guess what? They don't go anywhere, either.
Most people asking this question have no experience with true multi-user systems built around a pervasive, ground-up security model. On their systems, any process the user executes, directly or indirectly, can modify, destroy, or manipulate anything on the system. This is true to a degree even on MS-Windows NT/XP, which tries to be fully multiuser as Unixes are, but has numerous fundamental security flaws.
By contrast, on Linux (or any other Unix), your processes cannot harm the machine (or damage other users' files) — because you yourself cannot.
Thus, even a Linux user who deliberately wants to activate a Linux virus (trojan horse, worm, or other program designed to do mischief) will have extreme difficulty getting it to circulate. If you're a programmer, try and see. Viruses aren't difficult to write on Linux: Write one, run it (as a non-root user), and watch it bollix your files. But nobody else's.
Three objections are commonly raised to the above argument:
1) Ah, you say, all you need do is insert "hostile" code into some package that must run with root-user privileges. True, that would work: Just infiltrate the main software-distribution chains. But this is extremely difficult, not just because the distribution chain is well monitored by paranoid technical people, but also because, with open-source code, any odd modifications would be quickly found by the large number of programmers working on the source code, and removed.
For example, on January 21, 1999, it was discovered that the main distribution site of Wietse Venema's key TCP Wrappers package, ftp.win.tue.nl at Eindhoven University, had been site-compromised and the development copy of TCP Wrappers there had been trojaned. No Linux distributions were affected because they and other wary observers check PGP signatures on "upstream" source releases, and, in fact, the compromise was detected within hours by Andrew Brown of Crossbar Security, Inc., because he noticed that tcp_wrappers_7.6.tar.gz was unsigned. (The next day, util-linux development release 2.9g at that site was also trojaned: exact same outcome.)
On September 28, 2002, similarly, the public ftp.sendmail.org server was site-compromised and trojaned source-code packages of sendmail 8.12.6 were offered there — probably not PGP-signed — for eight days. Shortly before that, around July 30, 2002, the same thing happened with the OpenBSD Foundation's ftp server and hosted packages of OpenSSH 3.2.2p1, 3.4p1, and 3.4 development source code. This trojaning was caught and corrected in about a day; as in all the other cases, source-code downloaders who check package signatures weren't fooled.
On December 13, 2007, post-release source tarballs of SquirrelMail 1.4.11 and 1.4.12 on www.squirrelmail.org were found to have been trojaned by an intruder using a security-compromised developer account to insert a remote-execution backdoor. This deed was (once again) caught by a user noticing that the packages' md5 checksums did not check out. (The inexcusably lax developers only started gpg-signing their releases' md5 sums with the following version, 1.4.13.)
On November 8, 2010, release source tarballs of ProFTPD 1.3.3c on ftp.proftpd.org were found to have been replaced by a trojaned version. The ProFTPD maintainer speculated that intruders used an unpatched flaw in ProFTPD itself, which, given the sorry record of that notoriously buggy software, is sadly credible. Downloader Daniel Austin noticed the modification three days later, probably by checking PGP signatures.
On June 30, 2011, the release tarball of vsftpd version 2.3.4 on vsftpd.beasts.org was replaced (by means not yet clear) with a trojaned version containing a remote-login backdoor. As before, the substitution was caught by a user noticing that the tarball's md5 and sha1 checksums no longer validated against the developer's signature, after the trojaned version had been available for three days. (The developer immediately moved to new hosting, so we may never learn how the trojaning occurred.)
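Every incident above was caught the same way, so the habit is worth sketching. Here is a self-contained toy version (the file names are stand-ins, not real release artifacts; real releases additionally publish a detached PGP signature you check with `gpg --verify`):

```shell
#!/bin/sh
# Publish a digest alongside the tarball, verify before unpacking, and
# a trojaned replacement announces itself as a mismatch.
set -e
printf 'pretend source tree\n' > pkg.tar.gz
sha256sum pkg.tar.gz > pkg.tar.gz.sha256      # the "published" digest

sha256sum -c pkg.tar.gz.sha256                # prints: pkg.tar.gz: OK

printf 'backdoor\n' >> pkg.tar.gz             # simulate the trojaning
sha256sum -c pkg.tar.gz.sha256 || echo 'digest mismatch: do not unpack'
```

The check costs seconds; as the incidents above show, it is routinely what turns an eight-day compromise window into a one-day one.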
On or before August 12, 2011, Linux kernel server hera.kernel.org and several related machines (odin1, demeter2, zeus1 and zeus2) were compromised via stolen user ssh credentials, with access then escalated to root through still-undisclosed means; the intruders installed the Phalanx rootkit to hide themselves. The compromise was discovered on Aug. 29, 2011, 17 or more days after the break-in, and revealed to the public on Aug. 31, 2011. The main kernel source tree was not, and could not be, compromised, because it's stored entirely in sha1-cryptographically-vetted 'git' trees. Those in control of those machines could -- and may -- have replaced downloadable kernel tarballs (compressed source-code archives) with trojaned versions. Note, though, that the contents of tampered tarballs would not match the sha1 signatures of git checkouts, hence probably would have been caught in time. Also, the downloadable tarballs aren't used for development (git checkout is); hence the development code would be unaffected.
To my disappointment, a forensic report on the kernel.org break-in was promised for the first two years after the event, but has never been delivered, and reporters inquiring about that promise (as of 2015) have been receiving no answer.
With binary-only Linux software (little of which must run with root authority), e.g., packages offered by proprietary software companies, you would face the equally daunting task of adulterating the product of a company that realises such intrusions would damage its reputation (unlike in the MS-Windows market, where Microsoft Corporation has repeatedly shipped virus-infected CD-ROMs, and nobody considered that peculiar or unacceptable). Even if you had pulled off such a feat, your virus would then encounter the previously detailed barriers to its further spread, and thus probably go roughly nowhere (beyond the systems initially infected). And then, the damaged systems would get rebuilt, and the virus would effectively die out.
(Occasional late arrivals to Linux from the proprietary world have on rare occasions been found to have built secret backdoors into their own official software releases, presumably not caring about the loss to their reputation. It seems only fair to help them: On Jan. 9, 2001, security researchers found that Borland/Inprise's SQL database "Interbase" included backdoor access on port 3050/tcp to undocumented service account "LOCKSMITH" with full access to all database objects. In Feb. 2003, Rüdiger Kuhlman, maintainer of instant messaging program mICQ, now known as "climm", introduced obfuscated code into his own program to make it refuse to run on Debian. One does wonder how many popular proprietary programs on legacy proprietary OSes have similar hidden code.)
2) Well then, you say, one might engineer a virus to start out as a user-owned process, but then crack the local security model from the inside. This approach, too, might work — if it could be done unobtrusively. At any given time, some of any Linux (or other Unix) system's dozens of root-owned system binaries will undoubtedly be vulnerable to attack, but viruses and similar code must be small, simple, and unobtrusive. The two goals are incompatible.
(The possibility does point out why it's important that users understand that they're responsible for processes that run under their user authority: If some untrustworthy code you've downloaded and decided to run backgrounds itself and performs nasty tricks on others and/or hammers away at possible system weaknesses until it finds a way to escalate privilege, it's your fault. So, know the processes listed by "ps uxw", and understand why you're running each.)
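The self-audit suggested above takes one command (the second form's column list is just one reasonable choice, not the only one):

```shell
# Every process listed below runs with *your* authority; you should be
# able to account for each one.
ps uxw                              # all of your processes, wide output
ps -u "$(id -un)" -o pid,etime,cmd  # same idea, explicit column list
```

Anything in that list you can't explain is, by definition, code running on your behalf that you haven't vetted.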
But this brings up another reason why Linux/Unix systems tend to be hardy: genetic diversity. That is, for virus-like code to spread among Linux boxes, it must be unfazed by the variety of CPU architectures Linux supports, and their diversity of software and of configuration. As long as Linux distributions remain diverse, they will be that much harder a target.
3) Last, you say, surely sysadmins stupid enough to take dangerous actions as root must be becoming the norm instead of a rarity, given Linux's current explosive growth — thus undermining the whole security model. This, too, is true — but there are powerful forces at work to educate new sysadmins: The administrative tools, themselves, tend to stress that the root account is dangerous and should be used minimally and carefully, as does Linux's new-user documentation. Also, those sysadmins resistant to learning this message via such avenues inevitably learn it the hard way, by destroying or crippling their systems repeatedly — until they learn. In that regard, viruses do not even stand out from the general likelihood of repeatedly destroying one's system, until one learns to not do unwise things as root. The difference between "hostile" executables (such as viruses) and others is academic, when a root-account user can already shoot off his/her foot or other vital parts, with one of myriad, brief commands. Put the other way, the same survival skills by which you, as a novice sysadmin, will cease destroying your system directly will also, more generally, dissuade you from doing unwise things as root, thereby incidentally keeping viruses and their kin off your system.
Or, put a third way, the Linux community would see no real distinction between novices who (as root) infect their systems (if this should ever happen to significant numbers of them), and those who accidentally type some variation on "rm -rf /" (delete all files) while logged in as root: Both are a result of inexperience and lack of caution. In both cases, education, attention, and experience are a 100% effective cure.
The above discussion has centred on the root user's actions, and has mostly been variations on the theme of don't run untrusted executables as root. There remains one other option: viruses (and similar things) that don't attempt to affect system binaries or take over entire machines, but instead dwell in a particular user's account and attempt to spread to other user accounts, on that or other machines, via inter-user communication mechanisms such as e-mail. One might imagine, for example, a virus written in "elisp", the macro language of GNU emacs and xemacs, and propagating as attachments to e-mail sent to other emacs users.
Such an invention would be at worst a nuisance among a few users, as it could affect only users running the same combinations of user software. Further, the Unix community long ago became wary of auto-executing programs/macros, so ultimately this technique would rely on convincing each additional user to execute (run) the program/macro, to "infect" his/her files. Also, in the Linux/Unix world, macros tend to be stored as readable plain text (unlike the case with, say, MS-Word), so that untrustworthy code is difficult to conceal from user scrutiny.
In these areas, again, viruses wouldn't stand out from the general category of programs another user sends you that you shouldn't run: If a friend mailed you a script that would erase all your files, would you run it? Of course not. In the same sense, you would not automatically run any other executable that landed on your doorstep, from another user — and Linux programs will pretty reliably not auto-run them for you (nor even save them with the executable bit still present). If Linux programs emerge that do auto-execute (e.g.) macros in documents attached to e-mail (as does the combination of MS-Outlook or MS-Outlook Express with MS-Word on Win32 systems), there might be a flurry of viruses transmitted that way, until the foolishness of such a feature becomes obvious to all — or until only fools run such programs.
Linux systems can be indirectly affected by viruses arising on more-vulnerable systems. If you offer file-sharing services from a Linux machine to others on its network, such as NFS, Samba, or NetATalk, the other machines might well store infected programs on the shared volumes. (For this reason, sometimes Linux sysadmins run checkers to catch and remove foreign OS viruses from shared files, in-transit e-mail, and the like.) Also, the Linux boot process might be interrupted by operation of (say) a virus originating in MS-Windows, and affecting boot-sensitive areas such as the Master Boot Record. But these are not Linux viruses, which remain vanishingly rare and (effectively) a harmless curiosity.
And yet. . . . And yet, the big anti-viral companies such as McAfee and Symantec all hawk anti-viral products for Linux. Why would they do this, if viruses pose no threat? Because gullible people have money, too, that's why. Such products are sold to the crowds of people who refuse to believe essays like this one. If you feel that way, buy them with pride: It's easier than thinking.
But then again, maybe you just can't trust anyone. Caveat user.
For a knowledgeable, but more glass-half-empty, view of Unix viruses, see also Rado Dejanovic's article. Also, Bruce Ediger appears to be interested in the same subject. Be sure to check out, also, David F. Skoll's article, especially the hilarious "Challenge" section near the bottom.
All this is not intended to suggest that system-integrity checkers like AIDE, Tripwire, and other IDSes aren't an excellent idea: Being able to detect unauthorised changes is a very good thing. Ditto the various schemes to "sandbox" untrusted code.
By the way, the ill-informed lucubrations of a Slashdot writer to the contrary, there is no such word as "virii". The plural of this English word is "viruses". (The word was borrowed and redefined from the Latin word virus = slime, poison, or venom. In Latin, that is a 2nd declension neuter noun, whose nominative plural form is now unclear, since it seems that nobody ever used one — and it doesn't appear to work like either a standard "-us" or "-um" noun, whose plural behaviours are known. In other words, it doesn't have a Latin plural, possibly because it was a mass noun rather than a countable one.)
Yes. Top security authority Garfinkel, co-author of Practical Unix and Internet Security and other classics, did say, in a SecurityFocus article, that a plague of viruses is destined to descend upon Linux, and that the only cure is for all Linux systems to run "credible anti-virus software".
Garfinkel acknowledges that the threat he envisions exists only because inexperienced sysadmins "are incredibly promiscuous with the root account", but he thinks running software that compensates for root-user carelessness is an appropriate and adequate remedy.
Unfortunately, this world-class authority is dead wrong: There is no way that automated "checking" software can ever prevent a careless root user from damaging (or fully destroying) the system. As explained in the prior essay, the remedy is not adequate because viruses are a very minor system threat compared to the extremely broad variety of easy ways a root-account user has of damaging/destroying his/her system, and that remedy is not appropriate because it fails to address the underlying, real problem of sysadmins being willing to carry out dangerous actions while logged in as the root user.
It is simply not possible to create and run a piece of software sophisticated enough to prevent a root user from running scripts, system commands, interpreted programs, or any of myriad non-virus executables having destructive potential equal to or greater than that of any virus. Further, such a program would be hostile to the very idea of a root account, which is by design supposed to be able to carry out any possible action on the system.
(And, by the way, what's going to protect you from subverted or just dangerously defective virus checkers, themselves wielding root authority? Hmm? And why on earth would we entrust our system security to ethically suspect firms that demonstrably have a tendency to sell their own customers down the river? Please note that both anti-virus and commercial security-monitoring firms, with the honourable exceptions of ClamAV and F-Secure, were culpable in that hyperlinked example of corrupt collusion.)
The implication is clear: If a user lacks the judgement to use the root account safely, the only way to protect the system from that user is for him/her to not have root access. After carrying out this remedy to address the real causes of the problem, adding a "virus checker" is neither necessary nor useful.
It should be noted that there is nothing wrong with lacking the root password to one's system. Corporations do that with Unix boxes all the time. Somebody else, whom you trust to do any rare system administration tasks required, can keep and use your root password.
Is this inconvenient? Possibly. At a minimum, it requires modifying the usual PC-desktop habits of thinking — e.g., you might have to provide security-hardened remote access to your Linux box using ssh/scp. But that is a good thing, because it allows you to deal with real, fundamental problems in an effective manner. Adopting Garfinkel's would-be solution does not accomplish that.
No, they demonstrate that the computer press doesn't understand network security, and reprints boilerplate self-promotion from the anti-virus industry in lieu of news and analysis. Saying these display a "virus problem" is like saying a homeowner had a "fire hazard problem" after he/she left his/her home wide open and unoccupied for six months, then burglars finally noticed the house, stole its valuables, and finally torched it.
To explain: None of these Linux worms break into systems directly, but rather perform automated "script-kiddie"-style probes for specific obsolete, security-vulnerable network daemon (server) software versions. Typically, those vulnerabilities they seek were found and fixed months or years ago — and heavily publicised. At which point, everyone with a grain of common sense upgraded.
If you run a Linux (or other Unix) system and choose to have it offer network services, especially using overly complex, security-problematic software such as BIND v. 8 and WU-FTPd, it is an elementary fact of life that failing to heed security advisories and update your software when necessary means you may have your valuable business plans and other confidential data stolen or subtly sabotaged. You may find yourself arrested and tried for crimes you seem to have committed using your computer. You may give faceless strangers the means to believably impersonate you for their own purposes. You may see your and (sometimes) your company's reputation injured, and your career in ruins. You may suffer immense financial losses.
The point? "Linux worms" don't even rate in the catalogue of disaster you may suffer, if you have given the bad guys a dirt-easy way to seize total control of your system anonymously from anywhere in the world. Thus, people who fixate on the (at best) adding-insult-to-injury threat of "Linux worms" do not understand the subject of real network security at all.
For the sake of completeness, I should also mention that there's nothing Linux-specific about those "worms": Since the attack is against long-notorious vulnerabilities in widely-used network daemon software, they can be trivially modified to find and exploit such holes on other platforms where those packages run. But really, even that runs the risk of obscuring the real point: "Worm" attacks are not themselves a security issue, but rather one of the lesser consequences that typically result from ignoring real security issues for ludicrous lengths of time.
Not at all. This question is virus pundits' pons asinorum: If they can't think past this fallacy, don't even try to reason with them, as they're hopelessly mired in rationalisation.
The speaker's supposition is that virus writers will (like himself/herself) ignore anything the least bit unfamiliar, and attack only the most-common user software and operating systems, thus explaining why Unix viruses are essentially unknown in the field. This is doubly fallacious: 1. It ignores Unix's dominance in a number of non-desktop specialties, including Web servers and scientific workstations. A virus/trojan/worm author who successfully targeted specifically Apache httpd Linux/x86 Web servers would both have an extremely target-rich environment and instantly earn lasting fame, and yet it doesn't happen.
2. Even aside from that, it completely fails to account for observed fact: Assume that only 1% of Internet-reachable hosts run x86 Linux (a conservative figure). Assume that only one virus writer out of 1000 targets Unixes. Then, given the near-instant communication across the Net that at this writing is blitzing my Linux Web server with dozens of futile probes for the Microsoft "Nimda" vulnerability per second, the product of that one virus writer's work should be a nagging problem on Linux machines everywhere — and he/she will be working very hard to achieve that, given the bragging rights he/she would gain. Yet, it's not there. Where is it?
The answer is that, for various reasons discussed in prior essays, such code is very easy to write, but — given minimally competent system maintenance (including the automated kind, cited below) — completely impractical to propagate. And likely to remain so.
First of all, that's not what I said. (People keep failing to heed what these essays actually say.) I said that Linux systems' architecture and culture, by design, resist such petty nuisances, and create sufficient default protections that anyone careless enough to be exposed to Linux "malware" (viruses and such) has bigger and more fundamental worries: By and large, you can be hit at all only by being really dumb. By and large, you can suffer system (root) compromise from malware only by being mind-bogglingly dumb.
Moreover, especially since the year 2000, even reckless, dumb Linux users have been adequately protected against the consequences of likely types of gross negligence, by automated system updaters.
Let's get into specifics. Here's a detailed profile of literally all Linux malware to date (2004):
I. ELF Infectors:
Abulia, Alaeda, Balrog, Bi, Binom, Bliss, Brundle, Caline, Cassini, Cron, Cyneox, Dataseg, DebiLove, DerFunf, Dido, Diesel, Dummy, Eriz, Eternity, Gildo, Godog, Grip, Gzid, Henky, Herderv, Hyp, Jac, Kagob, Kaot, Laurung, Mais, Mandragore, Mixter, Nel, Nemox, Neox, Nf3ctor, Nuxbee, Obsidian.E (Obsid), Orig, OSF, Ovets, Pavid (Alfa.dr), Penguin, Quasi, RST = Remote Shell Trojan, Radix, RcrGood, Rike (Rike.1627), Satyr, Sickabs, Siilov, Silvio, Simile (Etap, MetaPHOR), Spork, Staog, Svat, Telf, Thebe, Vit (4096, Vit.4096, Silly), Winter (Lotek, LoTek), Winux (Lindose, PEElf, Pelf), Wozip, Xone, Ynit, and Zipworm (distinctive only in that it likes to infect ELF files in Zip archives).
These are all "ELF infectors", where "ELF" is the standard Unix binary format. To activate these, you must literally decide to run a binary infected with them, e.g., someone mails you a binary file and says "Please run this not-especially-trustworthy binary executable." Doing so would of course be really dumb; the consequence of being dumb in that particular fashion is that some number of Linux executable binaries set to be writable by the user's account would get modified to include a copy of the virus ("infected"). Note that the user is thereby enabled only to shoot at his/her own foot: No regular installed applications could be affected, because those are not writable by regular users: Only binary executables in that specific user's /home/username/bin/ and such could be affected (and seldom do users have any).
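If you'd like to see exactly what such an infector could reach from your account, this one-liner (GNU find syntax) lists it: files you can both write and execute. On most desktop accounts the list is short or empty.

```shell
# Everything an ELF infector run under your account could "infect":
# files writable by you that also carry your execute bit.
find "$HOME" -type f -writable -perm -u+x 2>/dev/null
```

That output, and nothing outside it, is the infector's entire world; the installed applications the user actually depends on are not in it.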
And, perhaps needless to say, anyone who runs untrustworthy binary executables using the root account is a dumb cluck, and hopeless. Further, you really, really have to go out of your way to run them at all: For example, literally none, zero, nada of the more than 100 e-mail clients for Linux auto-execute received executable attachments on the user's behalf. The user would have to save the attachment to /tmp, run "chmod u+x" on it to make it executable, and then manually run it — in order to (finally) shoot himself/herself (but not his/her system) in the foot.
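The sequence of deliberate missteps required can be sketched directly. The "attachment" below is a harmless stand-in, saved exactly as a mail client would leave it: without the execute bit.

```shell
#!/bin/sh
# A saved attachment arrives mode 644; the kernel refuses to execute it.
printf '#!/bin/sh\necho "I am the payload"\n' > /tmp/attachment
chmod 644 /tmp/attachment

/tmp/attachment 2>/dev/null || echo 'refused: no execute bit'

# Only after the user explicitly flips the bit does it run at all --
# and then only with that user's own (non-root) authority:
chmod u+x /tmp/attachment
/tmp/attachment
```

Nothing in that chain happens by itself; each step is an affirmative act by the user, which is precisely the point.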
Even though the category of "attack" is slightly different, the epic degree of inventive and energetic haplessness that would be required to actually hurt a system with one of these was nicely illustrated by my summaries (1, 2) of the October 2004 "phishing attack" aimed at Red Hat users.
One last observation about ELF infectors: They're all fundamentally identical, and might as well all be the same virus. Seen one, seen 'em all. (More to the immediate point: Easily avoid running one, easily avoid running 'em all.)
II. Automated Attack Tools against Obsolete Network Daemons:
This category is the one that former SecurityFocus staffer "Blue Boar" (Ryan Russell, former moderator of the Vuln-Dev security mailing list) cited, on my user group mailing list, in supposed answer to my "Yet, it's not there. Where is it?" rhetorical question (above), asking where's the first virus massively attacking Linux and making its author famous. In reply, he claimed to have seen Internet-traffic logs proving that "thousands" of Red Hat systems had been infected by "1i0n" and "lpdw0rm".
Background: Starting with the Sept. 25, 2000 release of Red Hat Linux 7.0, Red Hat, Inc. provided automatic, free-of-charge security updates through its "Red Hat Network" (RHN) service. Much of the discussion that follows will be Red Hat-centric, in part because some or all of the attack tools function only within Red Hat's shell environment and break elsewhere. However, the same network daemons were equally vulnerable at one time on other Linux distributions and (in many cases), indeed, on other OSes including MS-Windows (e.g., BIND). The same comments about avoidance apply elsewhere.
Here are my notes on the "worms" mentioned. ("Worm" in this context is just a scare-word meaning someone's canned remote-attack tool against a piece of network-accessible software your system may or may not have running and exposed to outside connections.)
Name: 1i0n (lion)
Appeared: March 23, 2001.
Vulnerable: BIND8 prior to 8.2.3, via the "TSIG" exploit of Jan. 29, 2001. Note BIND9 initial release, Sept. 15, 2000; BIND 9.1.0 release, Jan. 17, 2001.
Name: lpdw0rm (lpdworm, Kork, Abditive)
Appeared: April 2001.
Vulnerable: Berkeley lpd printing package, via an input validation bug; fixed in lpd's Oct. 2000 release.
Both Berkeley lpd and BIND8 were / are notoriously buggy network daemons, and neither was necessary or recommended unless you were running particular types of server machine. If you decided to run them anyway, pretty much everyone advised you to always stay absolutely current on security fixes. Fortunately, the above worms were no threat: The holes they attack had already been fixed two and six months earlier, respectively, by the time the worms made their debuts.
Running a known-vulnerable release of BIND8 or lpd was not merely obviously foolhardy, but also difficult to do starting on RH 7.0 and above because of RHN, which would inform even a near-comatose sysadmin that a new security fix is available, and would he/she like it retrieved and installed (Y/n)?
Moreover, CUPS had long been the preferred successor to lpd by 2000, and ditto the from-scratch-rewritten BIND9 replacement for BIND8, which by version 9.1.0 was quite sound.
So, being hit by either worm on Red Hat required either still running long-obsolete RH 6.2 (or earlier) with zero maintenance & obsolete network daemons still running, or practically willfully sabotaging all efforts to make effective maintenance easy and the path of least resistance. And, the fact that Russell trumpeted "thousands" of such systems allegedly having succumbed during those worms' heydays, out of some estimated 10 to 20 million Linux systems on the Internet in 2001, is conceivably credible but not very impressive. (Reliable censuses of in-service Linux machines are notoriously difficult; IDC analyst Dan Kuznetsky estimated in 2001 that Linux comprised 27% of the then-current market for new server hardware, and deployments had been accelerating for a decade.)
Then, too, consider the source: Russell seriously claimed that pathetic ELF-infectors RST.A, RST.B, and OSF were "the most successful Linux viruses [he'd] seen in the wild" and faulted these essays for not having covered them. Later, when challenged about what "successful" and "in the wild" meant, he admitted that he meant that some (unspecified, small) number of extremely gullible people, whom he claimed to know, had downloaded supposed software-cracking (or security-cracking) utilities from anonymous underworld strangers (who had virus-infected them) and run those supremely suspect "warez" with root authority.
(By the way, Russell casually mentioned in that conversation that he leaves himself logged into Linux desktop machines as the root user as a matter of deliberate policy, saying only "I know what I'm doing", and sees nothing wrong with doing so — and likewise habitually uses the Administrator login on MS-Windows. Both are, of course, novice-user bad habits and create needless system risk with little benefit.)
Such is our famous "virus threat" — but let's also cover all the other Linux worms, to date:
Name: Cheese
Appeared: May 22, 2001.
Vulnerable: BIND8 prior to 8.2.3, via the "TSIG" exploit of Jan. 29, 2001. Note BIND9 initial release, Sept. 15, 2000; BIND 9.1.0 release, Jan. 17, 2001.
This is a near-twin of 1i0n; the same comments apply. (Oddly, it seems to have been intended to repair 1i0n-cracked systems.) Note that the hole that Cheese attacks had already been fixed for four months.
Name: Adore (Red)
Appeared: April 04, 2001.
Vulnerable: LPRng printing package, via an input validation bug discovered December 12, 2000.
Vulnerable: rpc.statd daemon, via an input validation bug discovered August 18, 2000.
Vulnerable: wu-ftpd daemon v. 2.6, via an input validation bug discovered July 7, 2000.
Vulnerable: BIND8 prior to v. 8.2.3, via several buffer overflow and input validation bugs discovered Jan. 29, 2001.
This worm tried a grab-bag of attacks, against four separate server-role packages, all (like those previously discussed) notoriously prone to security holes, but please note that the holes it attacks had already been fixed approximately four, eight, nine, and two months previously, respectively.
The worm should not be confused with the Adore aka adore-ng rootkit, which is of course (like other rootkits) not an attack tool but rather an academic example of how to hide after system intrusion.
Name: Ramen
Appeared: January 17, 2001.
Vulnerable: wu-ftpd daemon v. 2.6, via an input validation bug of June 22, 2000.
Vulnerable: rpc.statd daemon, via a bug fixed summer 2000.
Vulnerable: LPRng printing package, via an input validation bug of Aug. 2000.
Notice the pattern? This is a near-twin of Adore, and no more significant: The holes it attacks had already been fixed seven, approximately seven, and five months previously, respectively. And this is what we'll be seeing for all the other worms, time and again. (One anti-virus vendor, Sophos, also speaks in its malware bestiary of an otherwise-unknown November 2001 Linux worm named "Honeymoo" attacking the same years-obsolete wu-ftpd version that Adore and Ramen did. It might be the same attack code, recycled. It's difficult to tell, given the absence of details.)
(A programmer group named TESO released in April 2002 a slight variant on Ramen named "7350wurm" that attacked a glob() heap corruption bug in wu-ftpd v. 2.6.1, but that glitch had already been fixed for five months.)
Name: Slapper (Cinik, Unlock, bugtraq.c, Apache/mod_ssl worm)
Appeared: Sept. 13, 2002.
Vulnerable: A very specific and rare combination of Apache httpd with OpenSSL 0.9.6d / 0.9.7beta1 or earlier, via an OpenSSL buffer overflow fixed July 30, 2002.
This worm attacks only e-commerce and other SSL-enabled Web sites with particular obsolete versions of OpenSSL and Apache httpd configured in a particular way, and the (exotic) hole it attacks had already been fixed for two months.
Note: This worm should not be confused with the Internet-crippling January 25, 2003 Microsoft "Slammer" worm, AKA SQL Slammer or Sapphire, which within about ten minutes subverted a quarter-million MS-Windows desktop machines running the Microsoft Desktop Engine (MSDE) 2000 embedded database with a network listener fully exposed to public networks. (News reports calling SQL Slammer — which, by the way, conventional MS-Windows virus checkers cannot detect — an attack on MS-SQL Server were substantively in error.)
Yankee Group senior analyst Laura DiDio, renowned as pretty much the last person on Earth to figure out that SCO Group press releases should not be taken at face value, claimed that Slapper compromised 20,000 Linux systems worldwide in 2002. Even though that would be a minuscule percentage of 2002's Linux deployments, the figure seems unlikely even before one considers the source: To be hit, a sysadmin would need to be both advanced enough to install and configure a mod_ssl/Apache https-capable Web site — something one associates with professional paranoia — and too incompetent to bother applying crucial (and semi-automated) system updates.
Name: Mighty (Devnull)
Appeared: Oct. 3, 2002.
Vulnerable: A very specific and rare combination of Apache httpd w/OpenSSL 0.9.6d and 0.9.7-beta1 or earlier, via an OpenSSL buffer overflow fixed July 30, 2002.
This was near-indistinguishable from the equally ineffective Slapper worm (above), and might as well have been the same code.
Name: Adm (ADMworm, ADMw0rm)
Appeared: May 1998.
Vulnerable: BIND8 prior to 8.1.2, via a buffer overflow in the inverse-query function (enabled by "fake-iquery yes;", which is off by default). Fix released April 8, 1998.
The hole in question had been fixed for only a month, which might have made it a plausible threat except that "fake-iquery" is pretty much always disabled.
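For concreteness, here's what an admin would have had to do to become vulnerable — a hypothetical BIND8 named.conf fragment (not taken from any real deployment), with the dangerous option deliberately switched on:

```
// Hypothetical BIND8 named.conf fragment, shown only to illustrate the
// point above: the ADM worm's inverse-query overflow was reachable only
// if an admin had explicitly enabled this non-default option.
options {
    fake-iquery yes;   // BIND8's default is "no", leaving ADM nothing to hit
};
```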
Appeared: Oct. 2001.
Vulnerable: OpenSSH exploit effective prior to v. 2.3.0. Old versions were patched Feb. 27, 2001; 2.3.0 released November 2000.
People already had this hole patched for either eleven or eight months, depending on whether they were willing to jump to v. 2.3.0 or not.
Name: Millen (Millenium, MWorm, Mworm)
Appeared: Nov. 18, 2002.
Vulnerable: wu_imapd daemon, via a buffer overflow fixed May 11, 2002.
Vulnerable: qpopper daemon, via a buffer overflow fixed March 2002.
Vulnerable: BIND8 through v. 8.3.3, via a buffer overflow fixed Nov. 11, 2002.
Vulnerable: rpc.mountd daemon, via a buffer overflow fixed in 1998.
This bag-of-tricks worm attacks holes already fixed for six months, eight months, a week, and several years, respectively. Now, I have to admit that only a week's lead time on fixing a critical security hole could be a real problem if you were asleep at the wheel and your semi-automated updating mechanisms were broken or disabled — except who in his/her right mind was still running BIND8 by late 2002? It hadn't even been a standard Linux package in a year or so.
Appeared: July 2, 2003.
Vulnerable: Samba prior to v. 2.0.10 / 2.2.8a, via a buffer overflow. Those fixed versions were released April 7, 2003.
This is the only Linux worm to date targeting the Samba server-role package's obsolete versions, possibly because even reckless server admins tend to know that Microsoft file/print sharing isn't safe to make accessible to the global Internet — just like the aforementioned rpc.mountd and rpc.statd daemon processes (part of NFS, Network File System — or No Friggin' Security as the wags would have it). In any event, the attacked holes had already been fixed for three months.
Name: Lupper (Lupii, Plupii, Mare)
Appeared: Nov. 11, 2005.
Vulnerable: PHPXMLRPC messaging library v. 1.1.1, via URL input validation bug enabling execution of arbitrary PHP. Fixed Aug. 8, 2005.
Vulnerable: AWstats Web-statistics Perl CGI script, v. 6.3, via a URL input validation bug. Fixed June 10, 2005.
Vulnerable: Darryl C. Burgdorf's WebHints proprietary "thought for the day" Perl CGI script, v. 1.02, has zero URL input validation, a design failure publicised May 9, 2005. (References to v. 1.03 and 1.3 are in error.)
Vulnerable: Jimmy's "The Includer" proprietary SSI-emulation Perl CGI script v. 1.1, has zero URL input validation, a design failure publicised March 3, 2005.
This worm — exploiting vulnerabilities already fixed or eliminated for three, five, six, and eight months, respectively — derived from the earlier Slapper worm codebase. Thus far, it exists only as an i386 Linux binary, fetched to target Web servers' /tmp directory by one of the four obsolete, vulnerable Web apps, and then run as the httpd user. One of those exploits (against PHPXMLRPC) would work equally well (after recompiling the worm) on any operating system. The others invoke Bourne-like shells (and thus are feasible on any Unix, but on MS-Windows only with Cygwin, etc.). The AWstats exploit also calls wget, via buggily parsed URL input of the form "configdir=|program".
The Includer and WebHints CGIs' failures to validate input are total: URLs "http://www.example.com/hints.pl?|program|", "http://www.example.com/includer.cgi?|program|", and "http://www.example.com/includer.cgi?template=|program|" all remotely execute "program". However, it's important to note that neither is packaged by Linux distributions: Either would have to be downloaded and installed manually by an admin of uncommonly bad judgement.
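The class of bug those CGIs exemplify is easy to show in miniature. Here's a sketch in Python rather than the CGIs' actual Perl (the function names and the echo stand-in are invented; the "|program|" URLs quoted above suggest the real hole was Perl's two-argument open() treating the value as a command pipeline):

```python
import re
import subprocess

def run_query_unsafe(param: str) -> str:
    # The shared failure mode: a query-string value reaches a shell
    # unchecked, so input like "hello; rm -rf ~" runs attacker commands
    # with the Web server's privileges. ("echo" merely stands in for
    # whatever the real CGI did with the parameter.)
    return subprocess.run("echo " + param, shell=True,
                          capture_output=True, text=True).stdout

def run_query_safe(param: str) -> str:
    # The fix: whitelist the parameter, and bypass the shell entirely
    # by passing an argument vector instead of a command string.
    if not re.fullmatch(r"[A-Za-z0-9_.-]+", param):
        raise ValueError("rejected suspect parameter")
    return subprocess.run(["echo", param],
                          capture_output=True, text=True).stdout
```

With the unsafe version, run_query_unsafe("hello; id") executes id; the safe version raises ValueError for the same input.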
The AWstats CGI, by contrast, is sometimes packaged but never, to the best of my knowledge, installed by default in any Linux distribution: It has historically been notorious for input validation flaws, and thus is best run in its optional configuration that generates static HTML pages, rather than its default CGI mode.
PHPXMLRPC is usually offered via optional, supplemental PHP add-ons packages but is never, to the best of my knowledge, installed by default in any Linux distribution. Like the related and identically vulnerable PEAR XML-RPC v. 1.3.3 messaging library (fixed the same day, but not attacked so far by this worm), it would probably get installed as part of overfeatured PHP-based Web applications such as Ampache, b2evolution, egroupware, MailWatch for MailScanner, Nucleus CMS, phpmyfaq, phpPgAds, phpgroupware, PostNuke, TikiWiki, and Xaraya; plus older versions of Civicspace and Drupal.
(The two PHP-coded XML-RPC implementations should not be confused with PHP's optional xmlrpc-epi extension, written in C and included with PHP since v. 4.1.0, or various other non-PHP implementations.)
One lesson common to all of those exploits is that Linux Web-server admins need to be extra careful of applications that will process public data, e.g., via URL input, and doubly careful (lest they miss needed fixes) of any they choose to install outside their distributions' regular maintenance regimes. As it happens, the worm requires rather rare (not to mention old) Web-app vulnerabilities, and extremely few systems have been reported affected. ("Affected" means the attacker compromises the httpd process but not, absent some separate and more serious method of compromising the machine, the Web host as a whole.)
Name: Others — you tell me. I might have missed some.
If I did, long odds favour the story being the same as with those above. You might search, for example, JD Moore's archive, Treachery Unlimited's archive, the 29A group, MadChat, RRLF, Herm1t's VX Heavens Virus Collection, Hack Academy, or request access to SpywareInfo's repository. Beware that some collections' "Linux" entries actually work only on certain other Unixes (usually FreeBSD or Solaris — e.g., the 2002 Scalper AKA Ehchapa worm on FreeBSD and the BoxPoison worm on Solaris), though often misreported as Linux code.
My overall point is that, especially starting around 2000 when automated and semi-automated maintenance regimes became ubiquitous in Linux distributions, even just following the path of least resistance and not being a particularly competent admin would have closed off the above holes long before they could be exploited by "viruses". Also, Red Hat in RH 7.3 (May 2002), and other distributions at about the same time, started enabling iptables port-filtering ("firewall") scripts at startup by default — again, protection even for those who haplessly switch on obsolete network daemons.
Beware, too, of vacuous claims like Forrester Research senior analyst Laura Koetzle's privately circulated April 2004 study, which asserted that major Linux distributions were typically "at risk" during 2002-3 for a disturbingly long period, longer than current Microsoft OSes were. The sleight of hand occurs in her definition of "risk" as the number of days from availability of a source patch to the distribution's release of a package update. This ignores the crucial fact that Linux distributions' software patches for security holes are just about invariably anticipatory: They come out many months before anyone figures out how to exploit the hole, if ever — whereas Microsoft's patches very often "patch" an already exploitable security disaster. Moreover, Koetzle made the fatal error of weighting all "security" issues equally, regardless of whether they were serious or even potentially exploitable at all. Last, she made the tediously familiar error of comparing major Linux distributions, each comprising both the core OS and several thousand optional application and server packages, against only Microsoft's core OS with hardly any bundled applications at all, yet still considered the relative "patch counts" meaningful. That's like comparing the total number of murders in the Vatican in 1998 (3) with those in Canada (555), and concluding that Canada is 185 times as dangerous a place.
(News reports citing that study seem oddly reticent about which customer commissioned it. This may be of interest, given that one particular customer has frequently used Forrester Research in its ongoing anti-Linux campaign, often through subsidiary firm Giga Research.)
Incidentally, a brief note to anyone intending to draft yet another Linux vs. MS-Windows security comparison and aspiring to get it right for a change: Please jot down all Apache httpd and BIND items in both columns. Those open-source packages are widely used on MS-Windows servers, too.
And a note to admins: Don't panic about an installed package having a "vulnerable" version number until you've checked your distribution's security-alert pages: You may well have received the fix as a "backport" to stable package code, thereby fixing the package without incrementing its version number. This practice is customary, for example, on Debian's stable branch.
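The backport point can be made concrete with a small sketch (the changelog text below is invented, loosely modelled on Debian security-update changelogs; any real check should of course consult your distribution's actual security advisories):

```python
# A naive scanner that trusts upstream version numbers cries wolf on
# backporting distributions; the advisory ID in the package changelog is
# what actually tells you whether the fix is present.
SAMPLE_CHANGELOG = """\
openssl (0.9.6c-2.woody.5) stable-security; urgency=high

  * Non-maintainer upload by the Security Team
  * Backported upstream fix for the buffer overflows (CAN-2002-0656),
    deliberately leaving the upstream version number at 0.9.6c
"""

def fix_present(changelog: str, advisory: str) -> bool:
    """True if the changelog mentions the advisory, whatever the version."""
    return advisory in changelog

# Version comparison says "vulnerable" (0.9.6c < 0.9.6e, the fixed release)...
looks_vulnerable = "0.9.6c" < "0.9.6e"
# ...but the changelog shows the hole was closed by a backport:
actually_fixed = fix_present(SAMPLE_CHANGELOG, "CAN-2002-0656")
```

Here looks_vulnerable and actually_fixed both come out True: the version string alone would have produced a false alarm.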
III. Buggy or Obsolete User Apps Exposed to Public Data:
And yes, failing to update regular, non-network-daemon user apps could lead to data loss and possibly personal embarrassment (albeit not system compromise):
Name: JBells (JBellz)
Appeared: January 14, 2003
Vulnerable: The proprietary mpg123 music-playing app's buggy non-production beta v. pre0.59s (but not prior or subsequent production versions), via a buffer overflow triggered by playing trojaned (specially malformed) MP3 files, whose frame headers carry binary code that invokes a shell and recursively deletes the user's home directory.
This exploit code was very brittle — needing to be customised for specific mpg123 releases on particular Linux distributions, and so ever working, even theoretically, on only a couple of them — and the tiny window for overflow code didn't permit any complex hostile actions. However, more significant is that this affected only one buggy beta (fixed the same day) — even the prior pre0.58r beta was immune — that didn't meet quality standards for inclusion in any Linux distribution.
But the point is worth noting: Data posted on the public Internet may in some cases be used to subvert user apps having severe, known flaws in their input validation routines — e.g., Web browsers, multimedia plugins/apps, e-mail readers, and print daemons. This isn't news to us in the Linux community, either: It's why you should keep those packages — like network daemons you choose to run, kernels' network stacks, and other security-sensitive code — up to date, and eschew ones known to be particularly bug-prone or unmaintained.
Anti-virus pundit and Linux critic Phil d'Espace feels that such an application exploit should be able to trivially escalate to root privilege: The particulars of JBells show that this sort of talk is cheap, that examples of such trivial escalation are nonexistent (and why), and that security-industry commentators have a lamentable tendency to shade the truth. (E.g., d'Espace postulates that a remotely compromised instance of Apache httpd, even though running as an unprivileged user, could be caused to overwrite "all the files on the Web site", ignoring the fact that no files within the httpd document root are or should ever be owned by that process's designated runtime user.)
IV. The Ringers. Post-Compromise Rootkits (Trojan, Worm) and Attack Tools (not malware at all):
Apologies to those for whom this subject is old hat, but the following nasty packages do not qualify as Linux malware in any meaningful sense:
Abrox, Adminer, Adore and Adore-ng (rootkits by authors "TESO" and "stealth", not to be confused with the remote-attack worm of the same name), afhrm, AjaKit, Alcohol, Alk, Ambient (ark), Andrada, Anonoying, aPa, Arang, Arkdoor, asp, Attack, Backserv, Banner, Batamacker, Battlec, Beasted, BeastKit, bindshell, Blackhole, Blitz (Bliz), Bloop, BlowFish (BF), Bnc, BOBKit, Bodoor (BO), Bofishy, Bonk, Boost, Bouncer, Brk, Bscan, Bshell, Caplen, CGI, CGIexp, Chass, Chfn, Chrome, Ciscer, Clifax, CleanLog, Clripch, Corn "worm", Cwd, Cyrax, Cyrus, Da2, Dancer, Danny-Boy's Abuse Kit, Dar, Darkux, Darkwar, DC, DCom, Dcomer, demonKit, Demonul, Desida, Devil, Dexterois, Dica, Digit, Divine, Dmp, Dnstroyer, DobDrag, Drakat, Dreams, Drugkit, Duarawkz, Ducoci, ELF_Gmon.a (sets up a backdoor on UDP port 3049; included in SuckIT), Echo, Echosrv, Eko, Elfpatch, Elfwrsec, Escal, Espacker, Ethereal, Evil, Excedoor, Explodor, Faker, Fixer, Fkit, Flea, Flooder (Icmp), Fmtxp, Foda, Fork, Fpatch (Fakepatch, ShcBased), Fpath, Freeze, Frezer, Front, Fuck'it, FunSeven, Fusys, Gabitzu, GasKit, Gata, Gbkdor, Gizc, Glock, GMM, Gold2, Guile, Gulzan, Gummo, Hacktop, Haploit, heroin, Hella, Hestra, Hider, HiddenFunc, HiDrootkit, Hife, Hijack (Hijacker), HitWins, HjC, Homador, Hopbot, ibnoKit, Igmp, IIS-Attacker, IISuxor, Ikproc, Imspd, ImperalsS-FBRK, Infect, Initen, InjWrap, Interbase, Ircd, IRCKiller, Irix, Iroffer, itf, Kaiten (Kayten), Kaot, Kbd, Keitan, Kidbin, KIS, Kitko, KldHide, Kmod, Knark, Knight, Kod, Koka, Kokain, Kot, Krepper, Lacksand, Lala, Lambida, lbd, Lime, Lindoor, Linspy, Linux.Encoder.1, Linux Rootkit (LRK), Livthe, lkm, LkmHide, LOC, Lockit (LJK2), Logftp, LuCe LKM, Luckroot, Ltrap, Madvise, Maniac, Manpages, Map, Masan, Matrics, Maxload, Melt, Metti, Mhttpd, Micmp, Midav, Mirc, Mircforce, Mithra, Mmap, mod_rootme, MonKit, Mr, MRK, MStream, Muench, Mulexp, Mweb, Nestea, NetBus, Nhttpd, Ni0, Nkiller Nocwage, OBSD, Octopus, Omega "worm", OpticKit (Tux), Ovason, Overdrop, Oz, 
Pass, PaulCyber, Phalanx, Phobi, PhsychoPhobia, PLT, Poly, Pong, Popdoor, Portacelo, ProcHider, ProcSuid, PsychoPhobia, Qitty, Quacker, R0nin, R3dstorm, Race, Raped, Rawsocket, Rbind, Reboot, Reflect, Regen2k, Regile, Remprint, Resrcs, RemoteSync, rexedcs, RH-Sharpe, Rial, Ris, RK17, Romanian Rootkit, Rooter, Rootin, Rootkit, Rpc, Rpctime, RQPOP, RSHA, Sambex, Sendxp, Senha, Shadoor, Shaggy, ShcBased, ShellCode (Shellcode), Shinject, ShitC "worm", ShKit, Siback, Showtee, Shutdown, SHV4, SHV5, Sicmp, Sickabs, Sin, Sink, Sinkhole (SinkSlice, Slice), Sirius, Sk, Slice, Smack, Small, Smurf, Sneakin, sniffer, Snoopy, Snug, Soutown, Sprite, SQLexp, SSPing, Sstftp, Stach (stacheldraht), Stealer, Stream, Streamdoor, Subsevux, SuckIT (Sckit, Skit), Suffer, Superkit, SVScan, Synapsis, Synk, Sysniff, Synscan, Targ, TBD (Telnet BackDoor), TC2, Tcpscan, Teso, Tesoelf, tfn (TFN), tfn2k, THC, T0rn (t0rnkit), Trank, Trinity, TRK, trinoo (Trin), Trojankit, Tsig, Tsunami, Typot, UDP, Unfstealth, Unk, Untrace, URK (Universal Rootkit) Usmel, VcKit, Vma, Volc, Vulner, w55808, Wgcrash, Win, Winploit, WrapFtp, WrapLogin, WrapPasswd, WrapSu, wted, Wudel, WuScan, XChatSouls, Xicmpfl, XKeyLogger, Xmailer, Xpl, z2, Zab, zaRwT, and ZK.
Every one of those is some sort of post-attack tool; all are erroneously claimed on sundry anti-virus companies' sites (and consequently in various news articles) to be "Linux viruses". Some are actually "rootkits", which are kits of software to hide the intruder's presence from the system's owner and install "backdoor" re-entry mechanisms, after the intruder's broken in through other means entirely. Some are "worms"/"trojans" of the sort that get launched locally on the invaded system, by the intruder, to probe it and remote systems for further vulnerabilities. Some are outright attack tools of the "DDoS" (distributed denial of service) variety, which overwhelm a remote target with garbage network traffic from all directions, to render it temporarily non-functional or incommunicado.
The news reporters and anti-virus companies in question should be ashamed of themselves: None of the above, in itself, can break into any remote Linux system. All must be imported manually and installed by an intruder who has cracked your system by other means.
That incompetent reporting sometimes has extremely damaging consequences: In 2002, British authorities arrested the alleged author of the T0rn rootkit, based on their mistaken notion that it was a "Linux virus". (My efforts to get the Reuters / NY Times story corrected were ignored, except by cited anti-virus consultant Graham Cluley, who told me he'd been misquoted.)
I should mention in passing that feeble albeit genuine malware like the RST and OSF ELF-infectors is often downloaded and manually installed, locally, by attackers after they've broken in and gained root via other means entirely, often as part of their "rootkits". Some of these help keep alive UDP-based backdoors to preserve their ongoing access. The point, again, is that they're an after-effect of break-in, not a method of attack in themselves. It's like a burglar disabling your back-porch door lock from inside your kitchen: it's damage, but not the guy's means of entry.
V. In Summary:
There are real threats to Linux security. If you spend time looking for "Linux viruses" — which, by and large, can come at your system only if you get behind them and push — you might miss the real threats, and neglect useful work like studying your system's security profile and remedying its actual weak points.
And yes, some "virus" author could in principle, some day, in the very worst-case scenario — if he/she were able to find a remotely exploitable Linux kernel network-code flaw unknown to everyone else — unleash a devastating and rapid, automated, surprise attack that clobbers (compromises) within one hour a large percentage of, say, worldwide Internet-connected i386 Linux servers' TCP/IP stacks, and thus gains root control.
This would force all afflicted systems to be offline for a day to await the necessary patch and be rebuilt. That would be very annoying — but would hardly be unrecoverable. Moreover, I'll give very long odds against this or less-central failures happening — and much shorter odds against the same threat striking practically every other OS. Why such confidence? Because:
- System was designed for multiuser and networked operation from the ground up.
- System was designed to distrust and not rely (in the general case) on remote procedure calls (RPCs), especially not between hosts.
- System is profoundly modular, with the simplest, most generic possible interactions (often via pipes or textual interchange — even if then layered over sockets, etc.) between components (which can thus be individually changed, patched, upgraded, removed, or disabled as desired — without, in general, large interdependency consequences or cascade failures). Within that modular framework, functional substitutes exist and can be swapped in for almost all common security-relevant codebases. (E.g., if OpenSSH is having security problems, I can easily sidestep to LSH or any of several other SSH daemons. Ditto Web servers, ftp daemons, mail servers, etc. If need be, I can even change kernels.)
- System doesn't give software excessive privilege or easy paths to escalation. Components run with high privilege are kept as small and carefully checked as possible. Interacting components seldom even run as the same effective user ID, and thus are in a poor position to subvert one another's resources.
- As a result of the above, system state is highly transparent, lending itself to effective scrutiny and management via simple, well-understood tools (including ps, netstat, lsof, lslk, fuser, etc.).
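As a small illustration of that transparency, here's a sketch (Linux-specific; field positions per the /proc/net/tcp format, the same world-readable kernel state that tools like netstat themselves parse):

```python
def listening_tcp_ports(path="/proc/net/tcp"):
    """Return sorted local ports of listening IPv4 TCP sockets.

    For this purpose, netstat and lsof are just friendlier front-ends
    to exactly this kernel-exported text file.
    """
    ports = set()
    with open(path) as f:
        next(f)                       # skip the column-header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == "0A":         # 0x0A is TCP_LISTEN
                # local_addr looks like "00000000:0016" (hex IP:port)
                ports.add(int(local_addr.split(":")[1], 16))
    return sorted(ports)
```

On a typical machine this returns something like [22, 631]; any port you don't recognise in that list is worth investigating.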
For details, please see Petreley, Raymond, and Self's more-comprehensive write-ups.
Copyright (C) 1995-2015 by Rick Moen. Verbatim copying, distribution, and display of this entire article (page) are permitted in any medium, provided this notice is preserved. Alternatively, you may create derivative works of any sort for any purpose, provided your versions contain no attribution to me, and that you assert your own authorship (and not mine) in every practical medium.