On a Tuesday evening in late September 2016, Brian Krebs, one of the internet's most prominent cybercrime reporters, noticed a startling surge in his blog traffic. It did not take him long to understand that he was under attack. Someone, whom he subsequently spent months working to track down, had seized control of hundreds of thousands of internet-connected devices, including home routers, video cameras, DVRs, and printers, to create a botnet, a sort of digital zombie army. Instead of performing their normal functions, these devices, all capable of transmitting data over the internet, obeyed a command to pummel the server that hosted Krebs' blog with vastly more traffic than normal sites expect to handle in an entire month. The assault, called a distributed denial-of-service, or DDoS, attack, overwhelmed the server and knocked Krebs' blog off the internet for three days. The digital security firm Akamai later reported that it was nearly twice as large as the biggest attack it had seen previously.
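For readers who want the mechanics, the arithmetic of such an attack is brutally simple: a server can answer only so many requests per second, and a botnet's aggregate volume crowds everything else out. Here is a minimal sketch in Python. It is a toy simulation, not attack code, and every number in it is invented for illustration.

```python
# A toy model of why a DDoS works: a server has fixed capacity,
# and bot traffic crowds out legitimate visitors.
# All figures below are illustrative assumptions, not real measurements.

SERVER_CAPACITY = 50_000   # requests/sec the server can actually answer
LEGIT_TRAFFIC = 2_000      # requests/sec from real visitors

def served_fraction(bot_count: int, reqs_per_bot: int) -> float:
    """Fraction of legitimate requests served when capacity is shared."""
    attack = bot_count * reqs_per_bot
    total = LEGIT_TRAFFIC + attack
    if total <= SERVER_CAPACITY:
        return 1.0
    # Under overload, assume the server answers a random sample of requests.
    return SERVER_CAPACITY / total

# Hundreds of thousands of hijacked cameras and DVRs, each sending
# only a trickle of requests, starve out the real visitors.
for bots in (0, 10_000, 100_000, 400_000):
    print(f"{bots:>7} bots -> {served_fraction(bots, 10):.1%} of real traffic served")
```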
It wasn't long before a bigger one arrived. One month later, the same botnet was turned on Dyn, a company that manages part of the internet's domain name system, knocking major websites such as Twitter, Amazon, Netflix, Reddit, and Tumblr offline for several hours. The outage was brief, but it demonstrated the vulnerability of the internet's core structural elements.
The botnet used to execute these attacks was named Mirai by the attacker, who goes by the online moniker Anna-senpai. "Mirai" is a Japanese word that means "future." Anna-senpai subsequently released the Mirai source code to the internet, so that hackers could create variants of the malware for new attacks, which they immediately did. What keeps security experts on constant alert is the reality that malicious botnets like Mirai and the attacks they enable—in addition to a host of other cyberweapons now on the internet—are, indeed, the future.
Matt Green, a prominent cryptographer and assistant professor in the Johns Hopkins Department of Computer Science, has paid close attention to Mirai. "Entire portions of the internet went down because [devices like] home cameras were hacked, and those cameras had no security built in," he says. "Once you have the ability to generate lots and lots of traffic"—using a botnet of hacked devices to generate a DDoS attack, for example—"you can easily and selectively take down big chunks of the internet because nothing on the internet was designed to fight that."
All the world's computers, every other device that contains a computer chip, and the internet itself run on computer code. "It's hard to overstate the degree to which our world is dependent on software systems," says Eric Rescorla, a fellow at Mozilla, the nonprofit organization that developed the Firefox web browser. (Rescorla has co-authored communications security research with Green.) "A huge fraction of the devices you use on a daily basis, ranging from thermostats to watches to cars to aircraft, are actually computers."
The code in many operating systems and web browsers is much more secure now, and it's regularly updated whenever someone finds a new vulnerability. But the code in so many consumer devices is much less secure because manufacturers don't design them to be secure; security is more expensive and not their priority. Increasingly, all these devices, along with their shoddy security, are connected to the internet. Your fitness watch, your home security cam, your bathroom scale, the E-ZPass toll collection gizmo velcroed to your windshield, your DVR, possibly the very locks securing your house—all connected. Economic surveys estimate there will be more than 50 billion internet-connected devices by 2020, and that 70 percent of the world's population will have smartphones, each one vulnerable to hackers.
When a journalist like Brian Krebs, whom most people have never heard of, loses his blog for a while, or Netflix is unavailable for an hour or two, that doesn't seem all that serious. But holes in internet security have resulted in major thefts of financial and personal data, and in attacks with much larger geopolitical implications. Major corporations and credit card systems have been breached so regularly that journalists now annually compile top-10 lists of the year's biggest attacks. Hackers stole 56 million credit and debit card numbers from Home Depot in 2014. From Anthem health insurance, they stole Social Security numbers and other sensitive information belonging to 80 million clients in 2015, and from Yahoo, data from over a billion user accounts just this past year. Hillary Clinton's campaign and the Democratic National Committee became all too familiar with the dangers when thousands of internal emails were exposed to the public by hackers—a crime the intelligence community believes was directed by the highest levels of the Russian government in an effort to influence the U.S. presidential election.
Around 15 percent of DDoS attacks target banks, according to a report from internet security firm Verisign. In 2014, the analytics company Neustar surveyed financial services companies, and 42 percent of them estimated that a DDoS attack cost them $100,000 per hour until it could be contained. Last year, more than a dozen hospitals were hit by a type of digital assault called ransomware: criminal hackers penetrate online databases and encrypt them, demanding money in exchange for the keys to decrypt the data again. Hollywood Presbyterian Medical Center in Los Angeles paid hackers $17,000 last February to regain access to its computer system after being locked out for 11 days.
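The mechanism ransomware abuses is nothing exotic: it is ordinary symmetric encryption turned against the data's owner, so that whoever holds the key controls access. A minimal sketch of that idea, assuming the third-party cryptography package is installed; the medical record here is invented:

```python
# The core mechanism ransomware abuses: symmetric encryption.
# Whoever holds the key controls the data.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real attack, only the criminals hold this
cipher = Fernet(key)

record = b"patient: J. Doe, blood type O-, allergy: penicillin"  # invented example
locked = cipher.encrypt(record)   # what the hospital is left with
print(locked[:40], b"...")        # unreadable without the key

# Recovery is trivial *with* the key, and effectively impossible without it.
print(cipher.decrypt(locked))
```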
Noted computer scientist Gerald Weinberg, author of The Psychology of Computer Programming, is often quoted: "If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization."
In November, I meet Matt Green for lunch at an Irish pub near Union Station in Washington, D.C. A digital security expert of modest Twitter fame and a private sector consultant gone academic, Green is soft-spoken and not prone to fearmongering; he's the kind of person who graciously takes the blame when I knock over a water glass. Nevertheless, his take on the fundamental problem with internet security can inspire plenty of anxiety: "So the places where we screwed up, and are maybe even too late to fix, are things like the design of the internet." The design of the internet? That's all? He explains that the internet's foundations are extremely vulnerable: the basic encryption that secures communications such as online banking, the services that allow us to look up and connect to websites, and the programming languages running on our phones and in our critical infrastructure.
Because there's so much complex code running on our computers at all times, like invisible puppet strings in the background, it's almost impossible to know whether you've been hacked and your device is under someone else's control at this very moment. Your laptop "can do 2 million instructions a minute in the background without you noticing it," Green points out. "It has a microphone, it has a camera. All of those functions can be operating without any visible sign, and there's nothing you can do about it. The only solution is to not trust your computer."
Before joining Johns Hopkins, Green worked for AT&T and then co-founded his own security company, Independent Security Evaluators. He can't talk about everything he did then, as much of the information is proprietary, but he's well placed to discuss just how bad digital security can be. "It's amazing to me how much stuff we really care about relies on crappy code that hasn't been updated in 10 years," he says.
When he examined the digital locks some major corporations kept on their data, he found "it was like finding out airplane mechanics were using rubber bands to fix their jet engines." Sometimes the security systems were so dodgy that the researchers inadvertently crashed them. Green tells a recent story about a colleague in academia who was running a simple and usually benign internet scan for virtual private networks, pinging the networks to test their connections, when she accidentally knocked an oil pipeline controller offline for 10 minutes. "She wasn't even hacking," he says.
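To appreciate how little it took, the kind of check the researcher was running is roughly this simple. Here is a sketch of a benign TCP reachability test; the hostname and port are placeholders, and a probe like this should only ever target systems you are authorized to test:

```python
# A benign connectivity check: try a TCP connection and report
# whether the port answers. Host and port are placeholder assumptions.
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether a (hypothetical) VPN gateway answers on port 443.
print(is_reachable("vpn.example.com", 443))
```

That fragile industrial controllers can fall over in the face of even this much traffic is exactly Green's point.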
Almost every day, independent researchers like Green find dangerous flaws in code that amount to ticking cyberbombs just waiting to be exploited. Examples have included the code in powered wheelchairs, smart door locks, thermostats, and more. All these devices are potential gaps in the fence guarding the internet because so many of them run on the same operating systems that underpin critical infrastructure. "All Android phones are based on Linux," says Green, citing the common open-source operating system as an example. "And half of all critical infrastructure is based on some flavor of Linux." In April, the Linux Foundation launched a program called the Civil Infrastructure Platform, created to provide software to help improve the functioning of water systems, roads, bridges, oil and gas distribution, and health care. All of that software is based on variants of the same code.
Green notes that the potential for irreparable damage in major systems has already been demonstrated. When a malicious computer worm called Stuxnet infected computers hooked up to the centrifuges Iran was using to enrich nuclear material in 2009 and 2010, it broke the machines beyond repair. That attack, presumably meant to sabotage Iran's nuclear weapons program, has been widely attributed to the United States and Israel. "We saw with Stuxnet that this kind of thing can be easily tapped," Green says, and the same sort of attack could be aimed at American infrastructure. Our industrial systems are practically all hooked up to each other online. Adm. Mike Rogers, director of the National Security Agency, told a congressional panel in 2014 that China and a handful of other nations are capable of taking down the U.S. power grid. "We paved the road [with Stuxnet], and we showed how this can be done," Green says.
Cybersecurity experts like Green assume everything has been hacked, or could be—we just don't know about it yet. But that wasn't always so obvious. "The original inventors of the internet were good people and didn't avoid security because they were bad or lazy," Green says. "It's just that they had a lot of constraints in terms of what they could roll out on the first go. It's a miracle the internet worked in the first place." Perfect, hacker-proof security was practically impossible because inevitably there were exploitable flaws in the hundreds of millions of lines of code that to this day undergird the internet. Besides, those original internet coders just wanted to design a network that would allow people to transmit data, collaborate on research, and access information faster. They never anticipated that in 2017 we'd be so interconnected, or would rely so heavily on the internet in our everyday lives. Nor did they foresee that people would subvert their code for so many nefarious purposes. Green's mentor when he worked for AT&T told him he would be stupid to pursue a PhD in computer security, and at the time Green regarded that as good advice. "He was thinking that right now—it was 2000 or so—the web is kind of new. Nobody is going to put lots of money on the internet. Nobody is going to hook up power plants to the internet." Vinton G. Cerf, one of the internet's architects, now at Google as a vice president and chief internet evangelist (that's really his title), told The Washington Post in 2015, "We didn't focus on how you could wreck this system intentionally." He has said that is one of his regrets.
Firewalls, the defense most familiar to consumers, help stop some attacks, but there are always other ways in, especially if software isn't updated with security patches, the standard remedy for security flaws that in effect mends the fence. Green believes the quick-devise-a-patch approach is no longer sufficient. He argues that the net's fundamental architecture itself needs to be redesigned. That will cause disruption, to say the least. For example, he and fellow researchers discovered a major bug in the internet's basic encryption system in 2015; to fix that one bug "would require breaking 1 percent of all the websites on the internet," Green says, because any site that did not upgrade to the new standard would cease to function. One percent might not sound like a lot, but given the extraordinary size of the internet, it was enough to make companies like Google balk at the suggestion. "There's so much broken obsolete stuff out there that's not even being maintained by anybody anymore, but people still have to use it," Green says. No matter how essential a security fix might be, if the solution entails a new internet design that doesn't allow everything people need to do, or have grown accustomed to doing, online, there will be widespread opposition.
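What "breaking" an unupgraded site looks like is concrete: a client that refuses obsolete protocols simply fails the handshake, and for visitors the site stops working. A minimal sketch using Python's standard ssl module; the hostname is a placeholder:

```python
# A client that rejects obsolete crypto. Servers that never upgraded
# past old TLS versions fail the handshake and become unreachable.
import socket
import ssl

def connect_modern_tls(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse older protocols
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

try:
    print(connect_modern_tls("example.com"))   # e.g. "TLSv1.3"
except ssl.SSLError as err:
    # A server stuck on deprecated crypto lands here: the site "breaks."
    print("handshake refused:", err)
```

Multiply that failure across every neglected site on the internet and the resistance Green describes becomes easy to understand.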
Green gave me a list of major software and devices that are dangerously insecure and aren't being fixed but are still relied upon by the average person. "There are more than 200,000 systems still vulnerable to Heartbleed," he says, citing a serious security hole, discovered three years ago, in the encryption used to secure much of the internet. Plus, half of Android phones "contain unpatched vulnerabilities," he continues—either because the manufacturer never issued a patch or because users never installed one.
Academics like Green and independent researchers search the code inside countless digital products to discover what's already been hacked and what's vulnerable. Then they tell companies about those flaws. But it's the companies that have to spend the money to fix the problems. "They often refuse to fix it unless you can prove it really matters," Green says. "It's like finding out the seat belts in your car don't work, but somebody says, 'We're not gonna fix it unless you can prove to us that in a crash somebody's actually gonna get killed.'" Security staff at Target had warned corporate officials that its payment systems were vulnerable to hacking several months before thieves absconded with 40 million credit and debit card numbers. Green asserts that a manufacturer of implantable cardiac devices still has not fully resolved security problems that he reviewed. And there are plenty of other easily hacked medical devices and computer programs that hospitals rely on, he says.
Some companies are more responsive than others, and pay "bug bounties" to security researchers who discover flaws and discreetly notify them so they can fix the problem. Apple, Google, Microsoft, and Mozilla are among them; the Pentagon has recently started its own bug bounty program. But not everyone wants all bugs to be fixed. The FBI has argued that it needs to dive through software holes to unmask child pornographers and crack smartphones belonging to criminal suspects, such as the shooters who killed 14 people in San Bernardino in 2015. The National Security Agency says it needs to exploit online systems to monitor terrorists or track agents of foreign powers, friendly and not. Because of that, the government doesn't always tell companies when it finds a security hole in their hardware or software. The Obama White House established the Vulnerabilities Equities Process, an internal procedure for determining whether and when the U.S. government should publicly disclose newly discovered vulnerabilities. But such discoveries are hard to keep secret; the world is full of malicious hackers constantly trying to find them.
So what can be done to stave off the next big website takedown, or some sort of catastrophic infrastructure sabotage? Green notes that some software people still rely on will probably never be fixed because the companies that wrote it consider it obsolete and no longer spend the money to patch it. Corporations, security experts, and the government often cannot agree on what should be fixed first. And they are all up against the massive resources that private actors and governments alike have committed to devising cyberattacks.
That pertains to existing code and devices. But maybe, experts say, we can do better with what comes next. Jim Zemlin, executive director of the Linux Foundation, told The New York Times that the best option is to encourage better security in code from now on. "Long term, we need to make a better investment in the overall health of the internet," he said. "There's no quick fix, but if you have bug bounty programs, do threat modeling, and train developers to write secure code, you're going to have a healthier internet."
Green points to my iPhone, which is on the table recording our conversation, and notes that it's a safer product produced by a company that prioritizes security. But iPhones are expensive, and that turns security into an economic issue. "If you're a poor person in America and you don't have access to a lot of resources, chances are you're using a [cheaper] insecure phone."
Advancing security for everyone won't be easy. Take the massive network of internet-connected devices, the much ballyhooed IoT, or "internet of things"—programmable house lights and coffee makers and those Amazon Echo gadgets and more new products every day. Green says we're rushing into that without considering the consequences of adding another layer of devices to the internet, each one powered by vulnerable code. "Devices are a difficult problem just because there are so many systems with serious security issues and fixing them is a massive task," Rescorla, the Mozilla fellow, says. That would matter less if each flaw left only one person vulnerable, but Mirai has proved that a botnet assembled from insecure everyday consumer devices can affect millions. "Even if we had replacement software that was secure—and that has to be crafted on an individual basis—just deploying it would be very hard. Many devices are not designed for easy updates," Rescorla adds. Apple customers are familiar with recurrent messages urging them to update their MacBooks or iPhones or iPads to the latest version of the operating system, but no such update system exists for much of what we have in our homes, cars, and pockets. In 2016, when 2,000 owners of smart devices were asked whether they'd kept their devices current with the most secure software, 40 percent admitted they had never intentionally performed an update, whether because they didn't know they could, because it wasn't an option, or because they simply didn't bother.
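For comparison, the update check that phones get and most gadgets don't is not complicated. Here is a sketch of one; the manufacturer URL, the manifest fields, and the version strings are all hypothetical, invented purely for illustration:

```python
# A sketch of the update check most IoT devices lack.
# The URL, JSON fields, and version strings below are hypothetical.
import json
import urllib.request

INSTALLED_VERSION = "1.4.2"                    # hypothetical firmware version
MANIFEST_URL = "https://updates.example.com/cam/manifest.json"  # hypothetical

def check_for_update() -> None:
    """Fetch the vendor's manifest and compare against installed firmware."""
    with urllib.request.urlopen(MANIFEST_URL, timeout=10) as resp:
        manifest = json.load(resp)
    latest = manifest["latest_version"]        # assumed manifest field
    if latest != INSTALLED_VERSION:
        print(f"update available: {INSTALLED_VERSION} -> {latest}")
    else:
        print("firmware is current")

# Without a mechanism like this baked in, and a vendor still publishing
# updates at the other end, a vulnerable device stays vulnerable for life.
```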
Perhaps it is time to focus effort on the devices and infrastructure that matter most, like voting machines, power plants, and automobiles, Rescorla wrote. "That's not a very satisfactory answer, but if we try to secure everything right away, we won't secure anything." We still have no idea how to regulate security in digital consumer products, or even whether regulation is the best way to convince companies to put security first. One possibility would be to make companies liable to their customers for breaches in the security of their products, but that could be ruinous to smaller companies that don't have the resources of a Google or an Apple.
Were experts like Green to succeed tomorrow in eliminating all the bugs that criminals regularly exploit, the internet would become harder to hack, but only for a time, and only against crooks with limited resources. State actors like China, Russia, and the United States have directed massive resources at finding ways to break into code. As we lead an ever more wired and interconnected existence, we have to accept that as a fact of life. As Green says, "Once you start thinking like that, the only hope you have is that you're not interesting enough to be a target."