Overcoming the human bottleneck with autonomy
On August 4, 2016, the mission of cyber autonomy was declared accomplished. Meanwhile, back in Pittsburgh, a CyLab graduate student quietly hacked away at her own research in cyber autonomy.
On a blisteringly hot Thursday afternoon in Las Vegas, thousands of people gathered in a giant, windowless auditorium and closely watched lights flicker on seven supercomputers standing side by side in a semi-arc facing the audience. Audience members pointed, took photos, and whispered to each other.
On August 4, 2016, the mission of cyber autonomy was declared accomplished—that is, seven supercomputers demonstrated that they were capable of hacking each other, stealing virtual “flags” while trying to protect their own, completely without human intervention. CMU spinoff ForAllSecure, led by Electrical and Computer Engineering (ECE) Professor David Brumley, took home the $2 million prize for building the supercomputer that scored the most points in the competition, known as DARPA’s Cyber Grand Challenge.
Meanwhile, back in Pittsburgh, one of Brumley’s Ph.D. students in CyLab quietly hacked away at her own research in cyber autonomy.
“We have automatic bug finding, and we have automatic patching of those bugs, but we don’t have automatic strategy yet,” says Tiffany Bao, a recent ECE Ph.D. graduate. “Should I patch this bug? Or should I report it to the vendor? If I patch it by myself, what is the patch about? How robust is the patch?”
Bao describes cyber autonomy as your own cyber bodyguard—it’s meant to protect you by analyzing your computer to figure out where vulnerabilities exist and what to do with them. The supercomputers that competed in the Cyber Grand Challenge automatically patched the vulnerabilities they found, but Bao’s work focuses on enabling these systems to come up with more complex strategies for the cases when automatically patching isn’t necessarily the best option.
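To make that distinction concrete, here is a minimal sketch, in Python, of the kind of strategy layer Bao is describing. Everything in it is hypothetical—the names, fields, and thresholds are invented for illustration and are not taken from her thesis or from any competition system. The bug finding and automatic patching are assumed to already exist; this code’s only job is to choose what to do with a vulnerability once it has been found.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    PATCH_LOCALLY = auto()     # apply our own automatically generated patch
    REPORT_TO_VENDOR = auto()  # disclose and wait for an official fix
    HOLD = auto()              # defer the decision and keep monitoring


@dataclass
class Vulnerability:
    name: str
    severity: float          # 0.0 (trivial) to 1.0 (critical)
    patch_confidence: float  # how robust we believe our automatic patch is
    widely_shared: bool      # does the vulnerable code run on many other systems?


def choose_action(vuln: Vulnerability) -> Action:
    """Pick a strategy for one discovered bug.

    A real strategy engine would weigh many more factors (attacker behavior,
    disclosure timelines, exploit likelihood); the point here is only that
    deciding is a separate step from finding and patching.
    """
    if vuln.severity > 0.8 and vuln.patch_confidence > 0.7:
        return Action.PATCH_LOCALLY
    if vuln.widely_shared:
        return Action.REPORT_TO_VENDOR
    return Action.HOLD


print(choose_action(Vulnerability("heap-overflow", 0.9, 0.8, True)))
# -> Action.PATCH_LOCALLY
```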
We are going to come across this challenge of human bottleneck in deciding how to deal with vulnerabilities in the future. This is why we need autonomy.
Tiffany Bao, Ph.D. student, Electrical and Computer Engineering, Carnegie Mellon University
“Given the increasing number of programs and the increasing amount of code, and as a result, the increasing amount of vulnerabilities, how are we going to react?” Bao says. “We are going to come across this challenge of human bottleneck in deciding how to deal with vulnerabilities in the future. This is why we need autonomy.”
In the short term, Bao says she hopes to keep building a more comprehensive understanding of the security world, which will help inform her development of cyber autonomy strategies. How criminal hackers behave and what motivates them can both affect the strategy taken for a particular software bug.
“For instance, if you report a vulnerability to a software vendor and they send out a patch, the patch itself could reveal information about the vulnerability,” Bao says. “Then the patch attracts attackers to attack other people before those people apply the patch. For each vulnerability, there should be a different strategy based on many factors.”
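The tradeoff Bao sketches can be written down as a toy expected-cost comparison. The snippet below is a hypothetical illustration only—the rates, probabilities, and function names are made-up parameters, not a model from her research. It compares the exposure created by shipping a patch that attackers can reverse-engineer against the exposure of staying quiet and hoping nobody else rediscovers the bug.

```python
def attacks_if_patched(rollout_days: float, attack_rate: float,
                       fraction_unpatched: float) -> float:
    """Exposure created by the patch itself: attackers who reverse-engineer
    the fix can target users who have not yet applied it."""
    return rollout_days * attack_rate * fraction_unpatched


def attacks_if_withheld(horizon_days: int, daily_rediscovery_prob: float,
                        attack_rate: float) -> float:
    """Exposure from staying quiet: each day there is some chance another
    party has independently found (and can exploit) the same bug."""
    expected_exposed_days = sum(
        1 - (1 - daily_rediscovery_prob) ** day for day in range(horizon_days)
    )
    return expected_exposed_days * attack_rate


if __name__ == "__main__":
    # Made-up numbers: a 30-day patch rollout vs. a 90-day planning horizon.
    disclose = attacks_if_patched(rollout_days=30, attack_rate=2.0,
                                  fraction_unpatched=0.4)
    withhold = attacks_if_withheld(horizon_days=90,
                                   daily_rediscovery_prob=0.01,
                                   attack_rate=2.0)
    print(f"patch now: {disclose:.1f}  withhold: {withhold:.1f}")
```

Even in this toy form, the better choice flips as the parameters change—which is exactly Bao’s point that each vulnerability calls for its own strategy.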
Long term, Bao hopes that cyber autonomy will be applied to real-world, commonly used systems such as web browsers, and become more practical to use in daily life.
Earlier this week, Bao defended her Ph.D. thesis, and she’s now in the process of moving to Arizona, where she'll be an assistant professor at Arizona State University starting in the fall semester.
“I’m thrilled by the simplicity of the work,” Bao says. “It turns out that work in cyber autonomy is not that complex. It’s so simple, and I like simple. It’s beautiful. It’s elegant.”