By Rachel Coker
Information security matters to anyone who uses a computer. These days, of course, that includes not only engineers at major corporations, but artists and kindergartners. We strive to think of clever passwords, take pains to back up our data and buy virus protection for our computers.
Consider, however, the ways that the term “computer” is expanding. Cell phones, tablets and other devices are part of this landscape, too. What happens if you lose your smart phone? Maybe you’re concerned about your password for online banking. Now imagine your worries if you work for the Department of Defense.
Researchers in the Thomas J. Watson School of Engineering and Applied Science work through these and other scenarios to protect individuals and the nation alike from hackers. And while traditional approaches have often relied on software modifications, several of their innovations aim to provide built-in security with improved hardware.
“Every day, hundreds of thousands of hackers try to attack America’s cyberinfrastructure,” says Yu Chen, assistant professor of electrical and computer engineering. There are already real-world examples of cyber warfare, he notes, citing conflicts between Israelis and Palestinians and between Russia and Georgia.
Chen develops hardware that can be integrated into a network and detect attacks automatically. It’s vital to raise the alarm quickly, he says, given that an attack can affect hundreds of thousands of machines in seconds. He also has applied for a patent on a “Data Dog” to protect mobile devices from hacking.
Nael Abu-Ghazaleh and Dmitry Ponomarev, PhD ’03, both associate professors of computer science, also envision a future in which wars are fought digitally. They believe it makes sense to engineer systems for security, rather than “build a dam and try to plug the holes later,” as Abu-Ghazaleh puts it.
Ponomarev and Abu-Ghazaleh say it’s shortsighted to focus on performance without attention to security. They’d like to arm devices with a “Nanny Chip” and other features to make life more difficult for attackers.
The ‘Nanny Chip’
Ponomarev and Abu-Ghazaleh see new threats as well as new opportunities as computer architecture undergoes a period of rapid change.
Moore’s Law, which predicts that the number of transistors on a chip doubles roughly every 18 months to two years, has held up for decades. But many experts now expect it to break down, and that expectation is driving manufacturers to place multiple processor cores onto a single chip. This “multicore” approach improves speed and performance but can open new avenues of attack.
“Computer architecture performance has been improving at such a rapid rate, eclipsing probably any other human system,” Abu-Ghazaleh says. “But Moore’s Law is coming to a screeching halt.”
Most modern processors run multiple programs at once. The main program is running, but there is also hardware available to run something else. Abu-Ghazaleh and Ponomarev propose using the “spare” hardware as a babysitter — a “Nanny Chip” or “Nanny Core,” if you will.
When programs run, there are expected behaviors. You can check up on them just like a nanny would check on a toddler at the playground. “It’s OK if we let our kids do something wrong as long as we catch them soon after, right?” asks Abu-Ghazaleh. “Permanent changes to the system are done at something called the system call boundary. As long as we’re OK when the system call happens, it’s all right.”
This kind of protection is called reference monitoring. As instructions exit a program, the “Nanny Core” makes sure the program follows the rules.
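The reference-monitoring idea can be sketched in a few lines: let the program compute however it likes, but vet every request at the system-call boundary, where permanent changes to the system happen. This is purely illustrative; the policy, the function names and the call format are all invented, and the researchers’ actual monitor runs in hardware alongside the main core.

```python
# Illustrative reference-monitor sketch: the "nanny" lets a program
# run freely but checks each request at the system-call boundary,
# the point where permanent changes to the system occur.
# The policy below is a made-up example, not the researchers' design.

ALLOWED_SYSCALLS = {"read", "getpid"}     # hypothetical allow-list
PROTECTED_PATHS = {"/etc/passwd"}         # hypothetical off-limits files

def nanny_check(syscall, args):
    """Return True if the call may proceed, False to block it."""
    if syscall not in ALLOWED_SYSCALLS:
        return False
    if syscall == "read" and args.get("path") in PROTECTED_PATHS:
        return False
    return True

# The monitored program may do anything internally; only requests
# that cross the system-call boundary are inspected.
print(nanny_check("read", {"path": "/tmp/data"}))    # permitted
print(nanny_check("write", {"path": "/tmp/data"}))   # blocked: not on the list
print(nanny_check("read", {"path": "/etc/passwd"}))  # blocked: protected file
```

The point of checking only at this boundary, as Abu-Ghazaleh notes, is that the program can briefly misbehave internally without harm, as long as it is caught before any change becomes permanent.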
Ponomarev and Abu-Ghazaleh are also working on a related defense against a class of vulnerabilities called code injection. Let’s say you have a Web form in which you ask for someone’s address. Instead of an address, a hacker can submit carefully crafted input that generates a new program within your machine. Her code has been “injected” into your server.
In this scenario, the “nanny” assumes that any data coming from outside the program is not to be trusted.
“Let’s say I have a Web server and the bad guys connect to it and provide some garbage,” Abu-Ghazaleh says. “What we do is mark that data as untrustworthy, and any data that it touches is also suspicious. Then, as we are running our program, we check what it is doing with this bad data.”
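The process Abu-Ghazaleh describes can be sketched in software as taint tracking: external input is marked untrusted, the mark spreads to anything derived from it, and the program refuses to use marked data in a dangerous way. The class and function names below are invented for illustration; the researchers’ version operates per-instruction, in hardware.

```python
# Minimal taint-tracking sketch. External input is "tainted,"
# taint propagates to any data it touches, and tainted data is
# refused at a sensitive operation (here, executing it as code).
# All names are illustrative; real information flow tracking
# happens at the instruction level, not on Python strings.

class Tainted(str):
    """A string value flagged as coming from outside the program."""

def taint(value):
    return Tainted(value)

def concat(a, b):
    # Propagation rule: touching tainted data yields tainted data.
    result = str(a) + str(b)
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)
    return result

def execute(code):
    # The sensitive boundary: never run data an outsider supplied.
    if isinstance(code, Tainted):
        raise PermissionError("refusing to execute untrusted data")
    return "ran: " + code

user_input = taint("'; DROP TABLE users; --")   # garbage from the network
query = concat("SELECT * FROM users WHERE name=", user_input)
print(isinstance(query, Tainted))   # True: the taint spread to the query
```

The check fires only when the suspicious data reaches an operation that matters, which is exactly the behavior Abu-Ghazaleh describes: mark it, follow it, and watch what the program tries to do with it.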
This approach, known as “information flow tracking,” has a major drawback, however: It can slow down a program.
Ponomarev says they’re proposing a small hardware change that doesn’t touch the rest of the carefully designed architecture. It’s a small box at the back end of the processor pipeline, and they’ve built the necessary VLSI circuits using Sun Microsystems’ public core as a demonstration.
They propose using several cores for security purposes only, rather than having a machine fire up every core to run programs. “Architecture changes are difficult except if they are not difficult,” Abu-Ghazaleh says. “If they’re small and don’t touch the major structures that Intel and AMD have spent a ton of their energies optimizing, then it becomes feasible.”
A ‘Data Dog’
Chen shares many of Abu-Ghazaleh and Ponomarev’s motivations. They also share research sponsors: Both the National Science Foundation and the Air Force have funded this work.
Chen proposes using a technique called “out-of-order data division” to strengthen security for mobile devices. Normally, once a hacker obtains a device’s encryption key — a sort of secret code — he can use it to access all of the device’s data. But “out-of-order data division” involves storing information in segments, making it harder to reassemble even with the encryption key.
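The article gives no implementation details for the patented technique, but one way to picture “storing information in segments” is this sketch: data is cut into fixed-size pieces and stored in a scrambled order derived from a secret seed. Everything here — the segment size, the seeded shuffle, the function names — is an invented illustration, not Chen’s design.

```python
# Illustrative sketch only: data is divided into segments and stored
# out of order, using a secret seed to derive the permutation. An
# attacker who steals the stored segments (even decrypted ones) still
# lacks the ordering needed to reassemble them. NOT Chen's actual
# patented mechanism, whose details the article does not describe.
import random

SEGMENT_SIZE = 4  # bytes per segment; arbitrary for the example

def divide(data: bytes, seed: int):
    """Split data into segments and return them in scrambled order."""
    segments = [data[i:i + SEGMENT_SIZE]
                for i in range(0, len(data), SEGMENT_SIZE)]
    order = list(range(len(segments)))
    random.Random(seed).shuffle(order)        # seed-keyed permutation
    return [segments[i] for i in order]

def reassemble(stored, seed: int):
    """Invert the permutation using the same secret seed."""
    order = list(range(len(stored)))
    random.Random(seed).shuffle(order)
    original = [b""] * len(stored)
    for pos, segment in zip(order, stored):
        original[pos] = segment
    return b"".join(original)

secret = b"top-secret phone data"
scrambled = divide(secret, seed=1234)
print(reassemble(scrambled, seed=1234) == secret)   # True: round-trips
```

In a real system the segment ordering would be protected far more carefully than a `random` seed, but the sketch shows the principle: the key to the puzzle is the arrangement, not just the contents.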
“Mobile devices are concerned with computing power and memory,” he says. “A cell phone doesn’t normally have the power to fend off a strong attack.”
Chen’s “Data Dog,” a highly flexible mechanism, could be a special chip or a function incorporated into another chip. There’s also a software version. The chip, Chen notes, wouldn’t conflict with existing encryption standards.
“You can use our Data Dog on top of that, for an extra level of encryption,” he says.
Closing the ‘side channel’
Ponomarev and Abu-Ghazaleh are also looking at novel ways to protect an encryption key, though their focus is on multicore environments.
They’re interested in preventing “side-channel” attacks, in which a hacker gains access to information that’s unintentionally revealed in a space shared by two chips. For example, one chip can’t “see” what another has put in the small storage area known as the cache, but it can detect which cache lines the other accessed. From there, it’s possible to reconstruct an encryption key.
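A toy model makes the leak concrete. The attacker below never reads the victim’s data; it only times its own memory accesses. Because the victim’s access evicted the attacker’s data from one cache line, one probe comes back slow, revealing which line the victim touched. In table-based cryptography, that line index can depend on the secret key. The latencies and cache size are invented numbers for illustration.

```python
# Toy cache side-channel (prime + probe style). The attacker learns
# WHICH line the victim touched purely from its own access timings.
# Line counts and latencies are made up for the example.

CACHE_LINES = 8

class SharedCache:
    def __init__(self):
        self.owner = [None] * CACHE_LINES   # whose data fills each line

    def access(self, who, line):
        """Return a latency: fast (1) on a hit, slow (100) on a miss."""
        hit = self.owner[line] == who
        self.owner[line] = who              # loading evicts the other tenant
        return 1 if hit else 100

cache = SharedCache()

# Prime: the attacker fills every cache line with its own data.
for line in range(CACHE_LINES):
    cache.access("attacker", line)

# Victim: a single access whose line index depends on a secret.
secret_key = 5
cache.access("victim", secret_key % CACHE_LINES)

# Probe: the attacker re-times its own accesses. The one slow line
# betrays the victim's access pattern -- no data was ever read.
timings = [cache.access("attacker", line) for line in range(CACHE_LINES)]
recovered = timings.index(max(timings))
print(recovered)   # 5: the secret leaked through timing alone
```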
Ponomarev and Abu-Ghazaleh’s technique, called non-monopolizable cache, prevents the attacker from taking over the cache. “We reserve small portions of the cache to be private to each processor,” Ponomarev says. “The rest of the cache is shared. The private partition is sufficient to keep most of the side-channel information from an attacker.”
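Ponomarev’s description translates directly into a sketch: give each core a few cache lines that no other core can evict, and let secret-dependent accesses land there. An attacker who primes and probes the cache then sees no change in its own timings. The partition sizes and class names below are invented; the real design is a hardware policy inside the cache controller.

```python
# Illustrative non-monopolizable cache: each core keeps a small
# private partition no other core can evict, so secret-dependent
# accesses there produce no observable evictions. Sizes are made up.

PRIVATE_LINES = 2   # per-core reserved lines (hypothetical)
SHARED_LINES = 4    # remainder stays shared for performance

class PartitionedCache:
    def __init__(self, cores):
        self.private = {c: [None] * PRIVATE_LINES for c in cores}
        self.shared = [None] * SHARED_LINES

    def access(self, core, line):
        """Low addresses map to the core's private lines; rest shared."""
        if line < PRIVATE_LINES:
            slots, idx = self.private[core], line
        else:
            slots, idx = self.shared, line - PRIVATE_LINES
        hit = slots[idx] == core
        slots[idx] = core
        return 1 if hit else 100            # fast hit vs. slow miss

cache = PartitionedCache(["victim", "attacker"])

# The attacker primes every line it can reach.
for line in range(PRIVATE_LINES + SHARED_LINES):
    cache.access("attacker", line)

# The victim's secret-dependent access lands in its OWN private
# partition, evicting nothing the attacker can observe.
cache.access("victim", 1)

# Probing, the attacker finds every one of its lines still fast:
# the timing side channel is closed.
timings = [cache.access("attacker", line)
           for line in range(PRIVATE_LINES + SHARED_LINES)]
print(all(t == 1 for t in timings))   # True: no observable eviction
```

Keeping most of the cache shared, as Ponomarev notes, is what keeps the performance cost small: only the few lines that carry secret-dependent accesses need to be private.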
Ponomarev and Abu-Ghazaleh say their solution is inexpensive and easy to implement: it requires only a few extra transistors and costs less than 1 percent in performance.
“These are advanced attacks,” Abu-Ghazaleh says. “People are just becoming aware of their impact. But they’re powerful enough that we have to be aware of them as we build these machines.”