Rutgers Professor Joins U.S. Effort Aimed at Using AI to Thwart Cyber Attacks
A Rutgers University professor is working with top U.S. computer scientists to develop cybersecurity methods that use artificial intelligence (AI) to safeguard against threats.
Jie Gao, a professor in the Department of Computer Science, School of Arts and Sciences, is a co-principal investigator with the AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION), one of seven new national AI research institutes and a consortium of researchers from 11 schools led by the University of California, Santa Barbara.
“We want to develop next generation security,” Gao says. “The idea is to develop an automated AI system that will recognize potential threats, communicate with other agents, and develop a response mechanism.”
The five-year project, announced last spring by the National Science Foundation (NSF), is part of a $140 million investment aimed at advancing AI research across six themes, including ethical and trustworthy systems and technologies, novel approaches to cybersecurity, innovative solutions to climate change, expanding the understanding of the brain, and enhancing education and public health.
"These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution," NSF Director Sethuraman Panchanathan said in a statement.
In the area of cybersecurity, Gao said the rise of AI could lead to attacks on computer networks that are increasingly sophisticated and harder to detect. One example, she noted, is the denial-of-service attack, in which an attacker disrupts a system with a flood of deceptive requests.
“It used to be that you had to have someone who gained access to a lot of machines and launched a large traffic flow to the target,” Gao said. “But now advanced AI systems can fool traffic analysis and traffic monitoring, and that could open up opportunities for adversaries.”
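To make the traffic-monitoring idea concrete, here is a minimal sketch, in Python, of the kind of rate check a monitor might apply: flag any source whose request rate within a sliding window exceeds a threshold. The window size, threshold, and function names are illustrative assumptions, not part of the ACTION project; real detection systems are far more sophisticated.

```python
# Toy sliding-window rate check: flag a source that sends too many
# requests within a short window. All thresholds are assumed values.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100  # per source per window; an assumed threshold

recent = defaultdict(deque)  # source address -> timestamps of recent requests

def is_suspicious(source: str, now: float | None = None) -> bool:
    """Record one request from `source` and report whether its rate
    within the window exceeds the threshold."""
    now = time.monotonic() if now is None else now
    q = recent[source]
    q.append(now)
    # Drop timestamps that have fallen outside the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS
```

As Gao's comment suggests, an AI-driven attacker can shape its traffic to stay under exactly this kind of fixed threshold, which is one reason simple rate checks are no longer sufficient on their own.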
Another potential vulnerability, she noted, lies in computer systems that interpret images from videos or cameras, such as those used in self-driving vehicles.
“One can slightly change pixels of an image and fool the classifier,” she said. “So instead of seeing a car in front of me, the system sees a cloud, and (the vehicle) doesn’t stop. This could have serious and devastating consequences.”
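The pixel-level trick Gao describes is well documented in the adversarial machine learning literature; one standard illustration is the fast gradient sign method (FGSM), sketched below in PyTorch. The model, labels, and epsilon value are placeholders for illustration, not the institute's code.

```python
# A minimal FGSM sketch: nudge each pixel slightly in the direction
# that increases the classifier's loss, often enough to flip its output.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to fool `model`.
    `image` is a float tensor in [0, 1]; `true_label` is a class index tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel moves by at most epsilon, a visually negligible change.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The perturbed image typically looks identical to a human observer, which is what makes the failure mode Gao describes, a car misread as a cloud, so dangerous.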
Gao is working on the theory and algorithm side of the project, with the goal of developing a multi-agent system in which humans and different AI technologies communicate and cooperate as they monitor threats, gather information, and make decisions.
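As a rough illustration of that multi-agent pattern, the sketch below has one hypothetical agent posting alerts to a shared queue and another deciding on responses. All names, the severity scale, and the toy logic are assumptions for illustration, not the ACTION system's design.

```python
# Toy two-agent pipeline: a monitor posts suspicious events to a queue,
# and a responder drains the queue and chooses an action for each alert.
import queue

alerts: queue.Queue = queue.Queue()

def monitor_agent(events):
    """Scan a stream of event records and post anything suspicious."""
    for event in events:
        if event.get("severity", 0) >= 7:  # assumed 0-10 scoring scale
            alerts.put(event)

def response_agent():
    """Drain the alert queue and pick an action per alert."""
    while not alerts.empty():
        alert = alerts.get()
        action = "isolate host" if alert["severity"] >= 9 else "flag for analyst"
        print(f"{alert['source']}: {action}")

monitor_agent([
    {"source": "10.0.0.5", "severity": 9},
    {"source": "10.0.0.7", "severity": 3},
])
response_agent()  # prints: 10.0.0.5: isolate host
```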
The ACTION Institute is funded through a partnership among NSF, the U.S. Department of Homeland Security's Science and Technology Directorate, and IBM Corp.
"The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions," Dimitri Kusnezov, undersecretary for science and technology at homeland security, said in a statement. "This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”
Read a story from the University of California, Santa Barbara, on the AI Institute for Agent-based Cyber Threat Intelligence and Operation.