[sf-lug] Got worm ? ? ?
Rick Moen
rick at linuxmafia.com
Thu Apr 9 18:42:08 PDT 2009
Elkhorn (the_elkhorn at yahoo.com) wrote:
> Nice analysis, by SRI, of the "Conficker" MS-Windows worm.
[...]
> http://mtc.sri.com/Conficker/
> http://mtc.sri.com/Conficker/addendumC/
(Actually, Elkhorn didn't say "MS-Windows worm". I inserted the
qualifier to point out that Elkhorn was posting about a non-Linux
problem.)
I'm revisiting this topic because I thought it might be interesting to
comment on the SRI report from a _Linux_ perspective. That might make
an interesting change from the usual.
So, (1) what's a worm, and how does it differ from viruses and trojan
horses? And then, (2) how does it come to get _run_, anyway? For the first
question, have a look at this undergrad paper by a Mr. David Stone,
"Spyware/Viruses in Linux": http://nnucomputerwhiz.com/linux-virus.html
(Shameless self-promotion alert: Stone's paper tests, very
impressively, some assertions about Linux malware in the Web pages of
yr. humble servant.)
Stone's description is pretty accurate, and concise:
    The nastiest viruses are the ones that exploit remote security holes
    in the operating system and use those to infect a computer over the
    network. The infected computer will then try to infect more computers
    and so on. Viruses that behave in this way are called worms.
Question #2, "How does it come to get run?", is always, always, always
the most vital question with malware -- and the fact that the IT press
and the security industry tend to give _bad or no_ answers to that
question for MS-Windows malware should be setting off alarms. It's one
of many indicators of a horribly broken situation, which Linux users
would not put up with.
It turns out, Conficker's vector of attack is a _little_ easier to track
down than that of many pieces of MS-Windows malware -- which is to say,
you can find at least vague descriptions without huge difficulty, and
pry the real details out if you're very determined.
Ever since the MS-Windows NT family replaced the prior MS-Windows
95/98/ME family (which was just MS-DOS 7.x with MS-Windows 4.x glued
on top), all MS-Windows systems have automatically run an RPC portmapper
network service.
"Say what?", you said. I'm referring to a network service (daemon)
that freely hands out, on any network interface the machine has
(including modems) Remote Procedure Call port assignments usable to
reach various other network services that might be (or might not be)
also running on that machine.
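
To make "hands out port assignments" concrete, here's a minimal C
sketch using the classic Sun RPC client call pmap_getport() to ask a
remote host's portmapper where its NFS service (program 100003) lives.
The target address is a placeholder, and on current distros you may
need libtirpc to build it:

    /* pmquery.c -- ask a host's portmapper where an RPC service lives.
       Build: cc pmquery.c -o pmquery
       (newer glibc: cc pmquery.c -o pmquery -I/usr/include/tirpc -ltirpc) */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <rpc/rpc.h>
    #include <rpc/pmap_clnt.h>

    int main(void)
    {
        struct sockaddr_in addr;
        u_short port;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(111);     /* the portmapper's own port */
        inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);  /* placeholder */

        /* "Where is NFS (program 100003, version 3, over UDP)?" */
        port = pmap_getport(&addr, 100003, 3, IPPROTO_UDP);
        if (port == 0)
            fprintf(stderr, "not registered, or no answer\n");
        else
            printf("NFS v3 is on UDP port %u\n", port);
        return 0;
    }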
In Unix-type operating systems, we've had RPC portmapper daemons
available and studied for a long time: Sun Microsystems devised the
standard design in the 1980s as part of its ONC RPC suite, BSD Unix
adopted it, and Linux closely imitated it. On the basis of decades of
experience, it's known to be inherently dangerous to security, for a
couple of reasons:

1. Because the port assignments it hands out cannot be predicted, it
   makes port/address firewalling difficult.
2. Like any other advertised network service, it's an exposed point of
   attack for outside bad guys.
For those reasons, RPC-based network services on Unix (chiefly NIS and
NFS) are considered too dangerous for anything other than carefully
protected, isolated networks. In rare cases where questionable calls
are made by "desktop" programmers [**COUGH** "fam" in GNOME **COUGH**],
it's trivial to set the rpc.portmap daemon process to be accessible
from localhost only. _And_ it's very easy to determine what will stop
working if you shut it down.
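
(If the localhost-only idea is unfamiliar: a daemon whose listening
socket is bound to the loopback address simply cannot be reached from
any other machine, no firewall rules required. A minimal C sketch of
the difference -- not any particular daemon's actual code:)

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin;

        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(111);   /* portmapper's port; <1024 needs root */
        /* The whole trick: bind to loopback only, not INADDR_ANY. */
        sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        if (bind(s, (struct sockaddr *)&sin, sizeof sin) < 0) {
            perror("bind");
            return 1;
        }
        listen(s, 5);
        puts("listening on 127.0.0.1:111 -- invisible to the network");
        /* ... accept() loop would go here ... */
        close(s);
        return 0;
    }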
Back to Microsoft: Having learned little to nothing from 40 years of
Unix history, they made all -- _all_ -- NT-family machines run a
portmapper, something called "Server Service".
Can you lock the "Server Service" daemon process to localhost? Nope.
Can you determine what'll break -- and thus whether you care -- if you
were to shut _off_ the "Server Service" daemon? Nope.
One of the longest-standing and most bitter lessons of network security
is that any process that must deal in public data must carefully
_validate_ that data before parsing it. "Validate" means making sure
data is of the allowed/expected types, lengths, and contents only, so
that subsequent code cannot be subverted by feeding it wrong or
malformed data. An RPC portmapper daemon would be a classic case where
input validation is utterly crucial, and an incredibly picky input
parser would be the very first and most important piece you'd write.
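
For the curious, here's roughly what "incredibly picky" means, as a
toy C sketch over a made-up message format (one length byte, then a
pathname) -- emphatically not the real RPC wire format:

    #include <stdio.h>
    #include <string.h>

    #define MAX_REQ_PATH 64   /* arbitrary limit for this toy format */

    /* Picky: reject anything outside the allowed length and content
       before touching it.  A sloppy parser would memcpy() msg[0] bytes
       into path[] unchecked -- and msg[0] can be up to 255. */
    int handle_request(const unsigned char *msg)
    {
        size_t len = msg[0];
        char path[MAX_REQ_PATH];

        if (len == 0 || len >= sizeof path)
            return -1;                    /* bad length: refuse to parse */
        if (memchr(msg + 1, '\0', len))
            return -1;                    /* embedded NUL: malformed */
        memcpy(path, msg + 1, len);
        path[len] = '\0';
        printf("canonicalizing %s\n", path);
        return 0;
    }

    int main(void)
    {
        unsigned char ok[] = { 8, 'C', ':', '\\', 't', 'e', 'm', 'p', '\\' };
        return handle_request(ok) == 0 ? 0 : 1;
    }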
Which brings us back to Conficker: Delve through dozens of mostly
content-free articles on the subject, and you eventually come across
things like:
    executes arbitrary code via a crafted RPC request that triggers the
    overflow during path canonicalization
Hullo? How could a "crafted RPC request" trigger a buffer overflow?
That would be possible only if....
Egads. Microsoft Corporation put out a security-crucial network daemon
on each and every MS-Windows machine -- one that cannot be locked to
localhost only, and cannot be safely prevented from starting because
you don't know what would break -- and it had _no input validation_!
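
In miniature, and purely as an illustration of the bug class the CVE
wording describes (not Microsoft's actual source, which none of us has
seen), such a flaw looks like this:

    #include <string.h>

    #define WIN_MAX_PATH 260   /* stands in for Windows' MAX_PATH */

    void canonicalize(char *dst, const char *request_path)
    {
        char work[WIN_MAX_PATH];
        /* request_path came straight off the wire.  Anything longer
           than 260 bytes overruns work[], smashing the stack --
           return address and all.  That's "executes arbitrary code
           via a crafted RPC request". */
        strcpy(work, request_path);
        /* ... "..\" resolution would happen here ... */
        strcpy(dst, work);
    }

    int main(void)
    {
        char out[WIN_MAX_PATH];
        canonicalize(out, "\\foo\\..\\bar");  /* benign; hostile input
                                                 would be far longer */
        return 0;
    }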
Furthermore, according to even such carefully semi-informative notices
as http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2008-4250 , this
complete and total failure to validate crucial public data was still
present in Win2k Service Pack 4, WinXP Service Pack 3, Windows Server
2003 Service Pack 2, Vista Service Pack 1, Windows Server 2008, and
Windows 7 Pre-Beta. In other words, they finally got around to
looking at input validation after 14 years!
If any Linux distro had been that reckless with crucial network issues,
and moreover had needlessly exposed every _desktop_ system to those
issues for a decade-plus, their staff would have hidden in shame.
But wait! There's more. The "fix" came out in hotfix patches and then
service packs, around October 2008. In Linux terms, this would have
been an urgent, top-priority fix that would be _pushed_ out to Linux
systems and shouted from the rooftops. We would have been busting our
butts to _shut down_ our RPC portmappers instantly, and then replace
them with competently written code the same day.
And my calendar says: April 2009. Six months after the fixes, and
they're talking about a worldwide exploit of vulnerable RPC network code?
I said I'd look at the SRI pages. So:
> From late November through December 2008 we recorded more than 13,000
> Conficker infections within our honeynet,
OK, that suggests that the October 2008 vulnerability announcements
(and hotfix announcements) were followed by the public appearance of a
canned exploit a month later. Is anyone surprised?
> The exploit employs a specially crafted remote procedure call (RPC)
> over port 445/TCP, which can cause Windows 2000, XP, 2003 servers, and
> Vista to execute an arbitrary code segment without authentication.
The SRI report is better than most in the "How does it come to get
run?" category, but still fails to address the obvious questions:
_Why_ is there a portmapper at all, especially on desktop systems?
Why isn't it locked to localhost by default? What relies on it?
What were they smoking when they decided not to validate input data on
a network daemon exposed to public data?
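
If you want to check your own network's exposure, the probe is almost
embarrassingly simple. A minimal C sketch (the target address is a
placeholder); if this connect() succeeds from across a network
boundary, the "Server Service" surface is reachable there:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin;

        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(445);   /* SMB-over-TCP, the exploit's port */
        inet_pton(AF_INET, "192.0.2.20", &sin.sin_addr);  /* placeholder */

        if (connect(s, (struct sockaddr *)&sin, sizeof sin) == 0)
            puts("445/tcp open: SMB/RPC surface is reachable from here");
        else
            perror("445/tcp");
        close(s);
        return 0;
    }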
> The exploit can affect systems with firewalls enabled, but which
> operate with print and file sharing enabled.
This suggests that these "firewalls" leave the RPC portmapper exposed
to public data, probably because Microsoft's SMB implementation
requires that it be thus exposed.
Note that Samba, the interoperable open-source implementation of SMB
file and print sharing, does not require RPC service of any sort.
> those Windows PCs that receive automated security updates have not
> been vulnerable to this exploit.
The paper refers obliquely to, but doesn't detail, the reason why many
sites, even very security-conscious ones, are slow to apply Microsoft
hotfixes and service packs as a matter of policy: breakage. Because of
poor modularity (poorly defined, unstable interfaces) in the OS and
application stack, sites are afraid to apply patches: They don't know
what else will break.
Again the Linux world would not put up with that situation.
The SRI paper spends a lot of time talking about what Conficker does
after it's been run with system root authority. From a Linux
perspective, that's pretty much academic, because it's game over if a
Linux host is believed to be root-compromised: You yank the power cord,
boot up trusted boot media (e.g., a live CD), study the inactive system
to make sure you know what happened and how to prevent its recurrence,
copy off the data files and a reference copy (not to be reused) of the
compromised machine's configuration, and build a replacement host from
scratch, carefully avoiding trusting anything executable, or any
conffile or dotfile, from the burned host.
> NetBIOS Share Propagation: Conficker B exploits weak security
> controls in enterprises and home networks to find additional
> vulnerable machines through open network shares and brute force
> password attempts using a list of over 240 common passwords.
As usual, the SRI paper fails to address: Why are NetBIOS-share
authentication databases permitted to accept weak passwords? Haven't
they heard of the system crack libraries?
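
By "crack libraries" I mean things like cracklib, which makes rejecting
dictionary words and trivial passwords a one-call affair. A minimal C
sketch -- link with -lcrack; the dictionary path varies by distro (this
one is Debian's):

    #include <stdio.h>
    #include <crack.h>

    int main(void)
    {
        const char *candidates[] = { "password", "qwerty123", "x9$Lv!0pTa" };
        const char *dict = "/var/cache/cracklib/cracklib_dict";

        for (int i = 0; i < 3; i++) {
            /* FascistCheck() returns NULL if the password passes,
               otherwise a string explaining the rejection. */
            const char *why = FascistCheck(candidates[i], dict);
            printf("%-12s -> %s\n", candidates[i], why ? why : "acceptable");
        }
        return 0;
    }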
> USB Propagation: Finally, Conficker B copies itself as the
> autorun.inf to removable media drives in the system, thereby forcing
> the executable to be launched every time a removable drive is inserted
> into a system.
And why does Microsoft still ship an autorun mechanism, enabled by
default, that fires on nothing more than the insertion of media? Have
they learned nothing?
To be fair, none of these questions is in scope for the SRI study's
intended coverage. My point, really, is that _nobody_ in the Microsoft
world routinely asks those questions. All of the indicated types of
incompetence are accepted without objection, as unavoidable reality --
and all of them would instantly raise red flags in the Linux community.
_______________________________________________
sf-lug mailing list
sf-lug at linuxmafia.com
http://linuxmafia.com/mailman/listinfo/sf-lug