Safety first: Project aims to make AI-based autonomous systems more reliable and secure

Johns Hopkins engineers take part in five-year effort for the Department of Defense

Using a $7.5 million, five-year grant from the U.S. Department of Defense, a multi-university team that includes Johns Hopkins engineers is tackling one of today's most complex and important technological challenges: how to ensure the safety of autonomous systems, from self-driving cars and aerial delivery drones to robotic surgical assistants.

René Vidal, professor of biomedical engineering and the inaugural director of the Mathematical Institute for Data Science, and Noah J. Cowan, professor of mechanical engineering and director of the LIMBS Laboratory, are partnering with researchers at Northeastern University, the University of Michigan, and the University of California, Berkeley, on a Multidisciplinary University Research Initiative, or MURI, project called "Control and Learning Enabled Verifiable Robust AI," or CLEVR-AI.

"The potential practical ramifications of this project are broad, and boil down to making sure that artificial intelligence-based systems are safe," said Vidal. "Using this grant, we are developing theoretical and algorithmic foundations to support a new class of verifiable, safe, AI-enabled dynamical systems that, like living systems, will adapt to novel scenarios, where data is generated—and decisions are made—in real time."


Image caption: René Vidal (left) and Noah Cowan

Despite recent advances in artificial intelligence and machine learning, the goal of designing control systems that can fully use AI and machine learning methods to learn from and interact with the environment the way humans and animals do remains elusive, the researchers say.

"Animals, including humans, move with such grace and agility, and are able to adjust their movements and behavior according to what is happening around them. As engineers, we seek to learn from them while creating robust control systems that are safe and reliable. Our goal here is to bridge the gap between biological sensing and control systems, and safe and reliable autonomous systems," Cowan said.

According to Cowan, the field of artificial intelligence has tried to address this gap by building computational systems that attempt to mimic or approximate aspects of the brain. Unfortunately, that approach and the resulting designs do not reliably create safe systems. The team's project seeks to overcome this by combining machine learning, dynamical systems theory, and formal verification methods, leveraging each team member's strengths. For instance, Vidal is an expert in machine learning, computer vision, and biomedical data science, while Cowan is an expert on sensing, computation, and control in animals. Both are leading experts in robotics and control theory.

They are teaming up with Northeastern University faculty members Mario Sznaier, an expert in identification and learning-enabled control of hybrid and nonlinear systems; Octavia Camps, whose research focuses on computer vision and machine learning; Milad Siami, an expert in multi-agent systems, network optimization, and control with applications in robotics; and Eduardo Sontag, whose expertise includes control and systems theory, machine learning, and biology; as well as with Peter Bartlett of UC Berkeley, who studies machine learning and statistical learning theory, and Necmiye Ozay of the University of Michigan, who works in dynamical systems, control, formal methods, and optimization.

"Safe autonomous systems are crucial for our society," Cowan said. "Our approach will integrate traditional mathematical control theory with new and emerging AI to make systems verifiable, robust, safe, and correct."