Computer security researchers had previously shown that when two programs are running simultaneously on the same operating system, an attacker can steal data by using an eavesdropping program to analyze the way those programs share memory space. They posited that the same kinds of attacks might also work in clouds when different virtual machines run on the same server.
In the immensity of a cloud setting, the possibility that a hacker could even find the intended prey on a specific server seemed remote. This year, however, three computer scientists at the University of California, San Diego, and one at MIT went ahead and did it. They rented some virtual machines to serve as targets and others to serve as attackers, and tried to get both groups hosted on the same servers in Amazon’s data centers. In the end, they succeeded in placing malicious virtual machines on the same servers as targets 40 percent of the time, all for a few dollars.
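The kind of shared-memory eavesdropping the researchers built on can be illustrated with a toy "prime+probe" simulation. This is purely conceptual: a real attack measures hardware cache timings, not a Python list, and every name below is illustrative. The attacker fills ("primes") every set of a shared cache with its own data, lets the victim run, then "probes" to see which sets the victim evicted, revealing secret-dependent memory accesses.

```python
# Toy simulation of a prime+probe cache side channel.
# Models the idea only; real attacks time hardware cache accesses.

NUM_SETS = 8  # pretend the shared cache has 8 sets


class SharedCache:
    def __init__(self):
        # Each set records who last filled it.
        self.sets = ["empty"] * NUM_SETS

    def access(self, who, set_index):
        self.sets[set_index] = who


def prime(cache):
    # Attacker fills every cache set with its own data.
    for i in range(NUM_SETS):
        cache.access("attacker", i)


def victim_runs(cache, secret):
    # The victim's memory access pattern depends on a secret value.
    cache.access("victim", secret % NUM_SETS)


def probe(cache):
    # Sets no longer owned by the attacker were evicted by the victim,
    # leaking which secret-dependent addresses the victim touched.
    return [i for i, owner in enumerate(cache.sets) if owner != "attacker"]


cache = SharedCache()
prime(cache)
victim_runs(cache, secret=13)  # 13 % 8 == 5
leaked = probe(cache)
print(leaked)  # the attacker learns the victim touched set 5
```

The hard part in practice, as the experiment showed, is not the probing itself but landing the attacker's virtual machine on the same physical server as the victim in the first place.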
Gmail, Twitter, and Facebook, for example, are all cloud applications. Web-based infrastructure services like Amazon’s, as well as versions from vendors such as Rackspace, have attracted legions of corporate and institutional customers drawn by their efficiency and low cost.
“Today you have these huge, mammoth cloud providers with thousands and thousands of companies cohosted in them,” says Radu Sion, a computer scientist at the State University of New York at Stony Brook. “If you don’t have everybody using the cloud, you can’t have a cheap service. But when you have everybody using the clouds, you have all these security issues that you have to solve suddenly.”
Cloud computing actually poses several separate but related security risks. Not only could stored data be stolen by hackers or lost to breakdowns, but a cloud provider might mishandle data–or be forced to give it up in response to a subpoena. And it’s clear enough that such security breaches are not just the stuff of academic experiments. In 2008, a single corrupted bit in messages between servers used by Amazon’s Simple Storage Service (S3), which provides online data storage by the gigabyte, forced the system to shut down for several hours. In early 2009, a hacker who correctly guessed the answer to a Twitter employee’s personal e-mail security question was able to grab all the documents in the Google Apps account the employee used. (The hacker gleefully sent some to the news media.) Then a bug compromised the sharing restrictions placed on some users’ documents in Google Docs. Distinctions were erased; anyone with whom you shared document access could also see documents you shared with anyone else.
And in October, a million T-Mobile Sidekick smartphones lost data after a server failure at Danger, a subsidiary of Microsoft that provided the storage. (Much of the data was later recovered.) Especially with applications delivered through public clouds, “the surface area of attack is very, very high,” says Peter Mell, leader of the cloud security team at the National Institute of Standards and Technology (NIST) in Gaithersburg, MD. “Every customer has access to every knob and widget in that application. If they have a single weakness, [an attacker may] have access to all the data.”
To all this, the general response of the cloud industry is: clouds are more secure than whatever you’re using now. Eran Feigenbaum, director of security for Google Apps, says cloud providers can keep ahead of security threats much more effectively than millions of individuals and thousands of companies running their own computers and server rooms. For all the hype over the Google Docs glitch, he points out, it affected less than 0.05 percent of documents that Google hosted. “One of the benefits of the cloud was the ability to react in a rapid, uniform manner to these people that were affected,” he says. “It was all corrected without users having to install any software, without any server maintenance.”
Think about the ways security can be compromised in traditional settings, he adds: two-thirds of respondents to one survey admitted to having mislaid USB keys, many of them holding private company data; at least two million laptops were stolen in the United States in 2008; companies can take three to six months to install urgent security patches, often because of concern that the patches will trigger new glitches. “You can’t get 100 percent security and still manage usability,” he says. “If you want a perfectly secure system, take a computer, disconnect it from any external sources, don’t put it on a network, keep it away from windows. Lock it up in a safe.”
But not everyone is so sanguine. At a computer security conference last spring, John Chambers, the chairman of Cisco Systems, called cloud computing a “security nightmare” that “can’t be handled in traditional ways.” At the same event, Ron Rivest, the MIT computer scientist who coinvented the RSA public-key cryptography algorithm widely used in e-commerce, said that the very term cloud computing might better be replaced by swamp computing. He later explained that he meant consumers should scrutinize the cloud industry’s breezy security claims: “My remark was not intended to say that cloud computing really is ‘swamp computing’ but, rather, that terminology has a way of affecting our perceptions and expectations. Thus, if we stop using the phrase cloud computing and started using swamp computing instead, we might find ourselves being much more inquisitive about the services and security guarantees that ‘swamp computing providers’ give us.”
Amazon announced plans to offer a “private cloud” service that ensures more secure passage of data from a corporate network to Amazon’s servers. (The company said this move was not a response to the research by the San Diego and MIT group. According to Adam Selipsky, vice president of Amazon Web Services, the issue was simply that “there is a set of customers and class of applications asking for even more enhanced levels of security than our existing services provided.”)
The problem of how to manipulate encrypted data without decrypting it, meanwhile, stumped researchers for decades until IBM researcher Craig Gentry made a breakthrough early in 2009. While the underlying math is a bit thick, Gentry’s technique involves performing calculations on the encrypted data with the aid of a mathematical object called an “ideal lattice.” In his scheme, any type of calculation can be performed on data that’s securely encrypted inside the cloud. The cloud then releases the computed answers, in encrypted form, of course, for users to decode outside the cloud. The downside: the process eats up huge amounts of computational power, making it impractical for clouds right now. “I think one has to recognize it for what it is,” says Josyula Rao, senior manager for security at IBM Research. “It’s like the first flight that the Wright Brothers demonstrated.” But, Rao says, groups at IBM and elsewhere are working to make Gentry’s new algorithms more efficient.
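Gentry's ideal-lattice scheme is far beyond a short example, but the core idea of computing on ciphertexts can be glimpsed in a much older observation: textbook (unpadded) RSA is multiplicatively homomorphic, meaning the product of two ciphertexts decrypts to the product of the two plaintexts. A minimal sketch with tiny, deliberately insecure parameters:

```python
# Textbook RSA is multiplicatively homomorphic:
#   E(a) * E(b) mod n  decrypts to  a * b.
# Tiny insecure parameters, for illustration only.
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)


def encrypt(m):
    return pow(m, e, n)


def decrypt(c):
    return pow(c, d, n)


a, b = 7, 9
# The "cloud" multiplies the ciphertexts without ever decrypting them...
product_ciphertext = (encrypt(a) * encrypt(b)) % n
# ...and the user decrypts the result locally.
print(decrypt(product_ciphertext))  # 63, i.e. a * b
```

RSA supports only multiplication this way (and only while the product stays below the modulus); Gentry's breakthrough was a *fully* homomorphic scheme supporting both addition and multiplication, and hence arbitrary computation, at the steep performance cost the article describes.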
“Clouds are systems,” says NIST’s Peter Mell. “And with systems, you have to think hard and know how to deal with issues in that environment. The scale is so much bigger, and you don’t have the physical control. But we think people should be optimistic about what we can do here. If we are clever about deploying cloud computing with a clear-eyed notion of what the risk models are, maybe we can actually save the economy through technology.”
Copyright Technology Review 2009.
The full article goes on to discuss the computational expense of constant encryption and decryption, future interoperability concerns among competitors reminiscent of the 1990s, and other anticipated challenges along the way.
Not a bad attempt at a future-and-failures-in-a-nutshell article.
Like the current thinking on carbon-based fuels, the added costs of risk exposure and additional governance need to be baked into so-called cloud and virtualized offerings.
The threat landscape for internet applications has expanded dramatically from where it was only a few years ago, driven by the advent of virtualization, massively increased distribution of assets, the explosion of wireless access, and quick-to-market applications with unprecedented numbers of software flaws that risk disclosing private data.
I say this not to overly criticize innovation and more aggressive, fast-paced development, but to point out, for those who do not realize it, that reining in and controlling access at the inception of these services is required to keep them under control.
Without the foresight to build infrastructure in secure ways, the risk of intractable systemic problems creates space for an unintended commerce in leaked or stolen information. Nature abhors a vacuum, and in highly complicated systems there will invariably be backwaters where this occurs.
The trick is to build modular systems that guard against inappropriate disclosure at each step, following the defense-in-depth model. Once actual costs are assigned to risk by means of open data and metrics, market forces should make this a reality.