[sf-lug] HeartBleed OpenSSL bug mini-Q&A

Rick Moen rick at linuxmafia.com
Sun Apr 13 16:26:12 PDT 2014


Quoting Michael Paoli (Michael.Paoli at cal.berkeley.edu):

> Why bother stating whether or not SF-LUG and/or BALUG are or were
> vulnerable or not?  Mostly a combination of: folks want to know if their
> information was or may have been compromised [...]

There's a point here that I think is in danger of getting lost, one
best highlighted by asking the rhetorical question:  _What_ information?

In discussion of the Heartbleed bug in various places including here, 
I've been dismayed by otherwise intelligent Linux users failing to 
engage with the subject.  Instead of seeking to understand the threat 
model - what specifically gets risked and why - mailing list readers 
gloss right over that and merely want to be told what to do, what is 
'fixed' and what is not, etc.

That's a lost opportunity.  

When I asked Michael, in some puzzlement and no little curiosity, what
specific Web hosts either BALUG or SF-LUG has been serving over https,
such that disclosure of sensitive data could be a concern - given that 
neither of them mentions such service in their public information - 
there was a serious point to that:  For information to be compromised, 
it must be used in a context where that can happen.

And what context were we given?

1.  Zero production sites for SF-LUG.
2.  A pretty obscure, unadvertised, alternate https URL for BALUG's
    wiki.
3.  An obscure, unadvertised, alternate https URL for an archive of
    snapshots of BALUG's public Web site as it existed in the past.
4.  A convoluted way a sufficiently motivated person might, in theory,
    locally re-engineer https queries about sf-lug.{org|com} to be
    served from the obscure, unadvertised alternate BALUG https host
    (a local hosts-file override of the sort sketched below).
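
For concreteness, the re-engineering in item 4 amounts to something
like the following.  (The IP address is an RFC 5737 documentation
placeholder, not the real BALUG address.)

    # Hypothetical: force the local resolver to send sf-lug.org
    # lookups to the BALUG host's address.
    echo '203.0.113.10   sf-lug.org www.sf-lug.org' >> /etc/hosts

    # An https fetch of sf-lug.org now lands on the BALUG host,
    # where the certificate name mismatch makes any sane client
    # complain loudly.
    curl -v https://sf-lug.org/

Which is to say, the 'victim' must first sabotage his own resolver,
and the impersonation then announces itself.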

None of that actually involves anyone's sensitive information.  I mean, 
wiki credentials?  Please.  Ability to forge balug.org with an 
impersonating https site?  C'mon.

This has been an opportunity to bring clarity to a matter of public 
interest, and mostly I'm seeing obscure neepery that fails to put
matters in proper perspective.  And talking to folks about the
compromise of 'their information' without realism about what information
would be at stake is doing them no favours.


> One of the best sources of technical information and details I've seen
> on it has been from SANS.

Honestly, Michael?  It's not like this is a difficult matter to figure
out without having talking heads at SANS telling you what to do.  I've
also really never been impressed with SANS's advocacy of the vendor
patch treadmill.

I'm with Ranum about the patch treadmill.
http://www.ranum.com/security/computer_security/editorials/master-tzu/

How about not running overfeatured systems based on buggy, overcomplex 
code to begin with?  I appreciate being able to read security advisories 
on, say, the latest hapless security cock-up in PHP and realise that 
I'm unaffected because I've carefully disabled all the dumb and 
dangerous features via careful php.ini tweaking.  Likewise, my 
OpenSSL-using systems were unaffected by Heartbleed because I had
eschewed overfeatured leading-edge versions, and so my installation 
lacked the buggy RFC6520 keepalive code entirely.
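
To make the php.ini point concrete, the flavour of lockdown I mean is
roughly this (real directives, but an illustrative rather than
exhaustive list):

    # php.ini settings worth forcing off (illustrative, not
    # exhaustive):
    #   allow_url_fopen = Off      no opening remote URLs as files
    #   allow_url_include = Off    no including remote code
    #   expose_php = Off           don't advertise PHP's version
    #   display_errors = Off       no error details for strangers
    #   disable_functions = exec,passthru,shell_exec,system,popen
    #
    # Quick check that the running PHP actually honours them:
    php -i | grep -E 'allow_url_(fopen|include)|expose_php|display_errors'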

OpenSSL _is_ an ongoing menace because of bad code and poor management
processes.  E.g., they really should have furnished switches to enable
or disable RFC6520 keepalive as a service to the many of us who had no
use for that entirely superfluous function and no desire to expose it to
public probing on our sites.
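
Anyone stuck on an affected 1.0.1 build wasn't entirely without
recourse, mind:  the OpenSSL advisory for CVE-2014-0160 itself
suggested, for those unable to upgrade to 1.0.1g at once, recompiling
with the heartbeat code removed:

    # Rebuild OpenSSL 1.0.1x with the RFC6520 heartbeat code
    # compiled out, per the CVE-2014-0160 advisory:
    ./config -DOPENSSL_NO_HEARTBEATS
    make
    make test

    # And check whether a server still advertises the heartbeat
    # extension at all (example.org is a placeholder):
    openssl s_client -connect example.org:443 -tlsextdebug 2>&1 |
        grep -i heartbeat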

That being said, I have a feeling that the prevalence of the buggy code
in production use has been wildly overestimated.  Failing to mention why
is another missed opportunity.  So:

Enterprise software tends to live pretty damned far from the bleeding
edge.  Inside corporate sites, including the Internet-facing portions
thereof, in 2014, you will find quite a bit of CentOS 5.6, 5.8, 6.2,
6.3, 6.4 and so on.  Sites pull down backported patches for those from
the repos, but they do _not_, as a general rule, leap out and rebuild
all their systems just because CentOS 5.10 and 6.5 are out (rebranded
RHEL5 Update 10 and RHEL6 Update 5).  Moreover, not all package updates get
immediately rolled out within firms, which often feel they need to
curate those and apply them selectively.
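
That backporting is also why 'what does openssl version say?' is the
wrong question on such systems.  Ask the package changelog instead; on
a CentOS/RHEL box:

    # Upstream version strings mislead on RHEL/CentOS, because
    # fixes are backported without bumping them.  Check whether
    # the Heartbleed fix has landed:
    rpm -q --changelog openssl | grep CVE-2014-0160

    # A match means the backported fix is applied.  No match on a
    # 1.0.1-based system means patch now; no match on an
    # 0.9.8-based system just means there was nothing to fix.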

Customers willing to shell out for upstream RHEL, same story.  There's
no mad rush to abandon RHEL 6 Update 2 just because Update 5 is out.

You might argue that they _should_ be quicker and more consistent about
upgrading, rekickstarting existing machines onto newer distro releases as
they come out.  There are lots of reasons companies don't, but those are
beside the immediate point, which is that this is how real-world
companies largely _do_ work, irrespective of why and whether they
should.

That being the case, guess what?  Most of the corporate world deploying
OpenSSL is still using some variant of OpenSSL 0.9.8, because that's
what is current (with backported
maintenance patches) on their somewhat lagging releases of CentOS and
RHEL.
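
The arithmetic behind that:  only the 1.0.1 branch, 1.0.1 through
1.0.1f (plus the 1.0.2 betas), ever shipped the RFC6520 heartbeat
code; 0.9.8 and 1.0.0 never contained it.  Hence the sixty-second
audit:

    # Which OpenSSL branch is this machine on?
    openssl version

    #   0.9.8x or 1.0.0x -> never had the heartbeat code; unaffected
    #   1.0.1 to 1.0.1f  -> vulnerable as shipped; patch or rebuild
    #   1.0.1g or later  -> fixed upstream
    # (On RHEL/CentOS, trust the rpm changelog over this string.)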

Yes, all sorts of yoyos who adopt new code without particular concern
for stability and quality did get burned, but my point is that this will
end up being _way_ fewer major sites than many people seem to think.





