Institute for Assured Autonomy Seminar: Jeannette Wing, Columbia University

Sept 22, 2020
11am - 12pm EDT
Online
This event is free

Who can attend?

  • General public
  • Faculty
  • Staff
  • Students

Contact

IAA Seminar Series

Description

Colleagues of the Whiting School of Engineering and the Applied Physics Laboratory are invited to the inaugural talk in a new speaker series co-sponsored by the Johns Hopkins Institute for Assured Autonomy and the Computer Science Department. The series features national scholars presenting new research and development at the intersection of autonomy and assurance.

The series' first talk, "Trustworthy AI," will be presented by Jeannette Wing, director of the Data Science Institute and professor of computer science at Columbia University.

Please use the Zoom link to attend the event.

Abstract:

Recent years have seen an astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, healthcare, hiring, housing, human resource management, law enforcement, and public safety, where decisions taken by AI agents directly affect human lives. Consequently, there is increasing concern about whether these decisions can be trusted to be correct, reliable, fair, and safe, especially under adversarial attacks. How, then, can we deliver on the promise of the benefits of AI while addressing scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI?

Under the umbrella of trustworthy computing, there is a long-established framework employing formal methods and verification techniques to ensure trust properties such as reliability, security, and privacy of traditional software and hardware systems. Just as for trustworthy computing, formal verification could be an effective approach to building trust in AI-based systems. However, the set of properties needs to be extended beyond reliability, security, and privacy to include fairness, robustness, probabilistic accuracy under uncertainty, and other properties yet to be identified and defined. Further, new property specifications and verification techniques are needed to handle new kinds of artifacts, e.g., data distributions, probabilistic programs, and machine-learning-based models that may learn and adapt automatically over time. This talk will pose a new research agenda, from a formal methods perspective, for increasing trust in AI systems.
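As a loose illustration of the kind of property specification the abstract alludes to, the sketch below phrases local robustness of a classifier as a checkable property. This example is not drawn from the talk itself: the `predict` and `locally_robust` names are hypothetical, and the check falsifies the property by random sampling, whereas a genuine formal-methods approach would aim to prove it over the entire perturbation set.

```python
# Illustrative sketch only: a local-robustness property for a classifier,
# written as a checkable specification. Sampling can find counterexamples
# but cannot certify their absence; a formal verifier would prove the
# property over the whole epsilon-ball.

import numpy as np


def predict(model, x):
    """Hypothetical classifier interface: returns a class label for input x."""
    return model(x)


def locally_robust(model, x, epsilon, n_samples=1000, rng=None):
    """Property: for all x' with ||x' - x||_inf <= epsilon,
    predict(model, x') == predict(model, x).

    Returns (True, None) if no counterexample is found among the samples,
    or (False, x_adv) with a violating input otherwise."""
    rng = rng or np.random.default_rng(0)
    label = predict(model, x)
    for _ in range(n_samples):
        # Sample a perturbation inside the L-infinity ball of radius epsilon.
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        x_prime = x + delta
        if predict(model, x_prime) != label:
            return False, x_prime  # property falsified
    return True, None  # no violation found (not a proof)


if __name__ == "__main__":
    # Toy "model": a linear threshold classifier on 2-D inputs.
    toy_model = lambda x: int(x[0] + x[1] > 1.0)
    x0 = np.array([0.6, 0.45])  # an input close to the decision boundary
    ok, counterexample = locally_robust(toy_model, x0, epsilon=0.05)
    print("robust at x0 (within sampled points):", ok)
    if counterexample is not None:
        print("counterexample:", counterexample)
```

The same property shape carries over to the other properties named in the abstract (e.g., fairness or probabilistic accuracy), with the quantified condition and the class of inputs changing accordingly.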
