Heartbleed bug: How did it happen, and how do we know it won't happen again?

Johns Hopkins expert Matthew Green on encryption software flaw

A significant flaw in popular encryption software could compromise the personal information—including passwords and credit card data—of billions of Internet users, security experts revealed this week.

The Heartbleed bug was introduced into a section of OpenSSL, encryption software that companies and government agencies almost everywhere rely on to secure communications online. There's no telling how much damage the bug has caused, but the potential for data theft is enormous, experts say.

The bug was discovered by a Google researcher and, independently, by Codenomicon, a security firm based in Finland.

So how does something like this get overlooked? In part, it's because security is sometimes an afterthought in software development, experts suggest.

"We have standards for coding in mission-critical systems like the airline industry, but I'm not sure we would want those standards applied everywhere," Matthew Green, a cryptographer and professor at Johns Hopkins, told The New York Times.

Stricter security standards mean programmers would need to spend significantly more time testing their work, and neither technology companies nor consumers can stomach such delays, The Times notes.

"I don't think we want to wait 20 years for the next Google and Facebook," Green said.

For the more technologically inclined, Green wrote about Heartbleed at length on his blog earlier this week. Here's a brief, not terribly far-over-our-heads excerpt:

The problem is fairly simple: there's a tiny vulnerability—a simple missing bounds check—in the code that handles TLS 'heartbeat' messages. By abusing this mechanism, an attacker can request that a running TLS server hand over a relatively large slice (up to 64KB) of its private memory space. Since this is the same memory space where OpenSSL also stores the server's private key material, an attacker can potentially obtain (a) long-term server private keys, (b) TLS session keys, (c) confidential data like passwords, (d) session ticket keys.

Any of the above may allow an attacker to decrypt ongoing TLS sessions or steal useful information. However item (a) above is by far the worst, since an attacker who obtains the server's main private keys can potentially decrypt past sessions (if made using the non-PFS RSA handshake) or impersonate the server going forward. Worst of all, the exploit leaves no trace.

You should care about this because—whether you realize it or not—a hell of a lot of the security infrastructure you rely on is dependent in some way on OpenSSL. This includes many of the websites that store your personal information. And for better or for worse, industry's reliance on OpenSSL is only increasing.
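To make the "missing bounds check" concrete, here is a minimal C sketch of the flawed pattern Green describes, alongside the straightforward fix. This is not the actual OpenSSL code; the function and structure names are invented for illustration, and real TLS heartbeat handling involves more bookkeeping than shown here.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical record type: the raw bytes of one heartbeat message
 * plus the number of bytes that actually arrived on the wire. */
struct heartbeat_record {
    const unsigned char *data;
    size_t length;
};

/* Vulnerable pattern: trust the length field inside the message.
 * The first two bytes of the message claim how long the payload is. */
unsigned char *heartbeat_reply_vulnerable(const struct heartbeat_record *rec)
{
    uint16_t claimed_len = (uint16_t)((rec->data[0] << 8) | rec->data[1]);

    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;

    /* BUG: copies claimed_len bytes even if the record only carried a
     * handful. The rest comes from whatever sits next to the record in
     * the server's memory: keys, passwords, session cookies. */
    memcpy(reply, rec->data + 2, claimed_len);
    return reply;
}

/* Patched pattern: check the claim against what was actually received. */
unsigned char *heartbeat_reply_fixed(const struct heartbeat_record *rec)
{
    if (rec->length < 2)
        return NULL;

    uint16_t claimed_len = (uint16_t)((rec->data[0] << 8) | rec->data[1]);

    /* Bounds check: the claimed payload must fit inside the record. */
    if ((size_t)claimed_len + 2 > rec->length)
        return NULL; /* silently discard the malformed heartbeat */

    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;

    memcpy(reply, rec->data + 2, claimed_len);
    return reply;
}
```

The fix amounts to a single comparison: refuse to answer any heartbeat whose claimed payload length exceeds the number of bytes that actually arrived, which is essentially what the official OpenSSL patch added.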

OpenSSL is open-source, which means the code lives online and can be amended by anyone. In theory, this makes it more secure—with enough programmers checking the code, flaws can be identified quickly. But without enough programmers checking the code ...

"There just weren't enough eyeballs on this—and that's very bad," Green said.

"If we could get $500,000 kicked back to OpenSSL and teams like it, maybe this kind of thing won't happen again."
