

Q+A

A trailblazing researcher describes her path to AI and health care

A conversation with Johns Hopkins computer scientist Suchi Saria, who is on a mission to augment human care with the latest in AI and machine learning technology

Suchi Saria has never met a challenge she walked away from. In fact, she often runs toward it. Her academic and professional career has been marked by taking on projects that many said were impossible. Growing up in Darjeeling, a small town in the foothills of the Himalayas in India, Saria became interested in computer science at an early age. She would later attend a science boarding school in no small part because it was "ridiculously hard" to get into. And, back then, the majority of her classmates were boys. "There were a lot of norms in India, like girls don't do this, you can't do this. The world doesn't work this way," she says. "I had to learn the instinct to ignore when people said, 'It can't be done, you can't do this.' It got to the point where, if somebody said something couldn't be done, I was even more intrigued."

Saria took this attitude first to Mount Holyoke College in Massachusetts, then to Stanford University, where she was told that the burgeoning fields of AI and big data would be difficult to apply to the many needs of the healthcare industry. Doctors and nurses might be reluctant to implement new tools and strategies without firm proof of their reliability and efficacy. What about doctors' notes in charts—could AI decipher them?

Today, Saria is a professor of computer science at Johns Hopkins University's Whiting School of Engineering and a professor of statistics and health policy at the university's Bloomberg School of Public Health. She also directs JHU's new Machine Learning and Healthcare Lab.


Over the past decade, Saria has developed an AI-powered platform that is reducing sepsis mortality rates by 18% in dozens of hospitals across the United States—a significant advance in addressing a deadly immune response that claims roughly 270,000 lives each year. Think of it as an early-warning system for sepsis, where every hour counts. She's also working on building trust in AI systems deployed in health care settings: reducing false positives and making these systems transparent and easy for caregivers to use, all without ignoring the human element. In short, she's on a mission to see how AI can be deeply interwoven into clinical workflows to significantly reduce unnecessary mortality and injury.

Johns Hopkins Magazine sat down with Saria to discuss the evolution of her career, the personal connections she has to her work, the creation of her spin-off company Bayesian Health, and the promise of what's to come for AI and health care.

I'm always fascinated by the journey that brought a person to this place and time in their life and career. What formed the person we're talking with today?

I grew up in India in a tiny town. My parents dealt in the tea industry, and my family has done this for generations: making tea, exporting tea, researching new types of tea. I grew up in a massive joint family. So, although I just have one younger brother, it felt like I didn't just have one sibling. I was part of something much bigger.

I studied science and tech as a kid, which is very natural in India. People choose disciplines early. I really loved science and engineering, but I also loved art, and I wanted to be a fashion or interior designer, too. I had to decide between a career in one or the other. I feel like it made sense to choose the physics and robotics path because it's the harder thing to do, right?

What about computer science drew you in?

It was fun to program. It was fun to build. In middle school, I got to learn about the many generations of computing: vacuum tubes, semiconductors, integrated circuits, and AI with really smart computers and algorithms as the next frontier. I was like, Cool, I can be part of this next generation. The idea of building smart little robots that could do useful things was just fascinating.

I ended up going to a boarding school in India, a science school that was very competitive and hard to get into. So right away I was drawn to it. And at the time this school had one of the only robotics clubs in the country. I was one of two girls in the club.

I got a scholarship to Mount Holyoke College in Massachusetts, where I trained under the chair of the department, an experienced robotics researcher who hoped to train more women to go into AI and robotics. I was lucky at age 17 to have someone like him as a mentor who took me under his wing. From there, a lot of doors opened up.

Where and when did your transition from robotics to AI in health care start?

By the time I got to Stanford, my early projects were in broad tech applications of AI and robotics. For example, positional mapping for autonomous vehicles: what happens when you're in a new area or building, perhaps one that has been damaged and you're on a rescue mission. You literally have to scan the place and form a new map of it at the same time.

I also worked on a big DARPA-funded project called the Cognitive Assistant that Learns and Organizes (CALO), an early precursor to Siri. Basically, it was like building AI to be your desktop.

"My earlier life was driven by doing hard things because they were difficult or others said it couldn't be done. But it was at this time I realized I don't need to prove myself to other people. I wanted to do things that mattered."

This was a really exciting time. We're talking hard algorithmic work with really messy data sets. But for me personally, this was a turning point. My earlier life was driven by doing hard things because they were difficult or others said it couldn't be done. But it was at this time I realized I don't need to prove myself to other people. I wanted to do things that mattered.

In 2008, I started working with a terrific neonatologist colleague at Stanford. It was a very interesting time in health care because of the Meaningful Use program that incentivized health care providers to adopt electronic health records. Every system was going from paper to electronic records, so for my group, being involved with AI, the natural question was: There's going to be this explosion of new data, and are AI and machine learning the right technologies for making sense of it? To me that made perfect sense, and that's how I got introduced to health care.

I'm interested in hearing about this transition.

My first year was super tough. It was constant shock after shock because I was coming from a world where we put every bit of clean data through the lens of algorithms to a world where data wasn't necessarily connected, or no one had the time to look at it.

I thought it was wild that we have all this data coming from monitors on these infants in the neonatal ICU (NICU). We have data on symptoms, treatments, response to treatments. These problems could be solved totally differently, I thought. You could identify how patients will respond or not respond to a certain therapy; you could identify how to optimize dosing. This was back in 2009, but medicine can be slow to convert, so we had to prove that what we were doing was useful.

You've brought up this idea of messy data a few times. For someone not in your field, what does this mean?

OK, think about a clinical trial. You have a very controlled environment in which data is collected. You have protocols, and you say you're going to measure, let's say, these five things. You will measure them at the beginning of the trial and then every month at a specific cadence. You take pains to make sure there are no errors at any time. The conditions are the same.

Now think about medicine in the real world. You have lots of different doctors, nurses, case managers making measurements. They're recording it all in a more unstructured way. They record it in notes. There's data coming from various devices attached to patients, but maybe a lead fell off for whatever reason and someone comes in later and reattaches it. Some monitors are turned off momentarily, some left on, and maybe you get junk readings. So that's why it's messy.

But think about it—we have really smart physicians who are put in front of this data all the time and they can make sense of it. They draw inferences about what happened, what didn't happen. So what if we could train an AI system to similarly make sense of that noise and harmonize it? Basically, learn how to read doctor-speak and be an aid.
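
To make "messy" concrete: a lot of the early work is simply deciding which readings to believe. Here is a minimal sketch, not Saria's actual pipeline, of the kind of cleanup step such a system might apply to a raw monitor feed. The thresholds and data are entirely hypothetical: implausible heart-rate values (say, a detached lead reporting zero) are masked, and short gaps are bridged before any modeling happens.

```python
import pandas as pd

def clean_heart_rate(series: pd.Series, lo: int = 30, hi: int = 250) -> pd.Series:
    """Mask physiologically implausible readings (e.g., a detached lead
    reporting 0) and bridge short gaps with the last valid value."""
    cleaned = series.where(series.between(lo, hi))  # junk values become NaN
    return cleaned.ffill(limit=5)                   # carry forward up to 5 samples

# Hypothetical minute-by-minute monitor feed with junk readings mixed in
hr = pd.Series([142, 145, 0, 0, 148, 310, 150],
               index=pd.date_range("2024-01-01", periods=7, freq="min"))
print(clean_heart_rate(hr))
```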

And some of the models and algorithms that you built out show that this could work, like how to better predict outcomes for premature babies.

In the first month of life, the organs of premature babies are very underdeveloped. They are at high risk of complications, and so the earlier you know an organ is deteriorating, the greater the chance that you're able to resuscitate them and for the baby to do well.

So we started collecting all these measurements/vitals in the NICU that we observed over time. Some of these infants had complications, like necrotizing enterocolitis, retinopathy of prematurity, culture positive sepsis. These are all really bad, life-threatening complications. And we knew which babies had them, and when they had them. The natural question was what if we looked at the data leading up to these moments? Could we identify when the early signs of these complications, however small, started to happen? That was the hypothesis.

We collected all this very granular monitoring and outcome data and began to design algorithms to learn from it. We could start to observe changes over time, including what happens when you're not getting enough oxygen and the heart starts to pump harder to send blood to end organs. Your body self-regulates, and that shows up as high variability in the data. Babies with lowered variability were doing more poorly, and that was often an early indicator that a baby was not going to do well. We discovered we could characterize with very high accuracy which infants were at high risk of complications, and so you could imagine, if we knew that, we could put a workflow in place for the clinicians to set up interventions more proactively. This baby needs more respiratory support. This one is at risk of an infection, so maybe we should be proactively giving antibiotics.
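
The variability signal she describes lends itself to a simple illustration. This is a toy sketch, not the published model; the window size, threshold, and simulated data are invented. It computes heart-rate variability over a rolling window and flags stretches where that variability collapses, the pattern Saria identifies as an early warning sign.

```python
import numpy as np
import pandas as pd

def flag_low_variability(hr: pd.Series, window: str = "30min",
                         threshold: float = 2.0) -> pd.Series:
    """Rolling standard deviation of heart rate; True where beat-to-beat
    variability (the body's self-regulation) looks suppressed."""
    return hr.rolling(window).std() < threshold

rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=120, freq="min")
# Simulated feed: healthy variability the first hour, suppressed after
hr = pd.Series(np.concatenate([140 + rng.normal(0, 6.0, 60),
                               150 + rng.normal(0, 0.5, 60)]), index=idx)
flags = flag_low_variability(hr)
print(flags.idxmax())  # first timestamp where variability collapsed
```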

What became of this work?

From an AI perspective, it was a total win. The paper ["Integration of Early Physiological Responses Predicts Later Illness Severity in Preterm Infants"] was on the cover of Science Translational Medicine. We got a huge amount of press and a lot of attention on this new world of data.

I began to approach hospital administrators, and that's when I learned a bitter truth: to take an invention like this into practice, there are a lot of gaps that need to be closed. What infrastructure do you need? What would you do with the signals? Who's going to act on them? How much data do you need to collect to be sure they need to be acted upon? All the millions of questions. What would I need to do to make sure this technology works in very diverse environments, not just in my research world?

And this was why I founded our spin-off company, Bayesian Health. [The company's name is a reference to a statistical approach based on Bayes' theorem, which uses probability to represent degrees of belief or uncertainty in an event.] I knew we needed a company to help bring these ideas to market. And for the past decade we've been building and validating a state-of-the-art AI/machine learning platform that physicians can trust and that helps them make the best decisions possible. And do it faster.
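
For readers unfamiliar with the namesake: Bayes' theorem combines a prior belief with new evidence to produce an updated probability. A toy illustration with made-up numbers, not Bayesian Health's actual model: suppose 5% of patients on a ward develop sepsis, a warning signal fires for 80% of those who do, and it also fires for 10% of those who don't.

```python
def posterior(prior: float, p_signal_given_event: float,
              p_signal_given_no_event: float) -> float:
    """P(event | signal fired), via Bayes' theorem."""
    p_signal = (p_signal_given_event * prior
                + p_signal_given_no_event * (1 - prior))
    return p_signal_given_event * prior / p_signal

# Hypothetical: 5% prevalence, 80% sensitivity, 10% false-positive rate
print(round(posterior(0.05, 0.80, 0.10), 3))  # 0.296
```

Even a reasonably accurate signal yields a posterior probability under 30% at low prevalence, which is one reason the false-positive rate Saria keeps returning to matters so much.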

A nice transition to switch to your company's work with sepsis, which is a major issue in clinical settings and one of the leading causes of death. Why did you focus your work here?

Well, I decided to join Johns Hopkins in the first place because I felt people here are committed to pushing these ideas into the real world, whether from a policy, regulatory, or practice perspective.

I started interacting with a colleague of mine at Johns Hopkins, Dr. Peter Pronovost, who was doing work on harm reduction in health care. Harms that are avoidable. And one of those was sepsis. You get infected, and while your body is trying to respond to the infection, it overreacts: your own immune system starts to attack your organs, and that spirals down into organ failure leading to death. In terms of mortality rates, people say roughly one in three patients who go into septic shock will die. But Peter knew these harms were preventable. If we had just known earlier, we could have done something. My brain lit up.

Of course, I don't want to make it sound like caregivers are not aware of this and they're not checking for sepsis. They are. It's just often very hard to identify the signs early on because of how subtle the symptoms are and how heterogeneous the presentation can be for any given patient.

Could we use the data we are collecting on patients to understand when we're starting to see subtle signs moving in a direction that suggests that this person is beginning to deteriorate? And, if so, what can we do to then signal and prompt the caregiver?

And for you, this work is personal.

I lost my nephew to sepsis. I distinctly remember I was sitting in my room in Baltimore. I was exhausted after a long night of work. My mom calls me and she's like, "Suchi, I have bad news." And she tells me about my nephew, who was in an ICU in India and had experienced septic shock; his liver had failed. They knew at the time I was doing work in sepsis. I'm like, my God, that's terrible. And then, just a day or so later, I learned he didn't survive. He was 26.

Back to your work with sepsis detection—that must have been a major milestone for you and your team.

I remember the night when we first analyzed our data. We had this three-year study across five hospitals, almost three quarters of a million patients. What we showed was the ability of AI to identify sepsis patients significantly earlier than most doctors could. The median lead time was 5.7 hours earlier, and every hour matters. Past studies have shown a high single-digit percentage increase in mortality for every hour that goes by. So the idea that we could move detection so much earlier was very meaningful. We showed not just an 18% reduction in mortality but also reductions in organ dysfunction, hospital length of stay, and ICU utilization. I literally didn't sleep the night I saw the results from our study. I was crying because it was just wild to me that after years of working at this, it was finally happening.
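
A back-of-the-envelope calculation shows why the 5.7-hour median lead time matters. Assuming, hypothetically, a 7% relative increase in mortality per hour of delay (a value in the "high single-digit" range cited above), the risk compounds over the lead time:

```python
# Hypothetical: ~7% relative mortality increase per hour of delayed treatment
hourly_increase = 0.07
lead_time_hours = 5.7
relative_risk = (1 + hourly_increase) ** lead_time_hours
print(f"{relative_risk:.2f}x")  # ~1.47x higher relative risk without the lead time
```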


But trust me, implementing this at a big place like Johns Hopkins and other hospitals was not easy. First, there's some resistance from the caregivers. Like, I don't need this. I know how to identify sepsis, and I don't need a computer program telling me what to do. At times, I felt like I was playing a game with the odds against me. But early on we ran our system in the background and later showed the results to doctors once the patients were discharged. They were like, my God, this is fast and accurate. It's one thing for one or two people to come along; I needed 4,400 clinicians to come along, right? But we implemented it, and in 2022 we published studies in Nature Medicine showing an 89% physician response rate, which in turn drove the outcomes results.

It must be challenging to build that level of trust with a technology that is relatively new and one that some might be prejudiced against.

A lot of work went into how to build trust and create transparency. We knew that would drive adoption. In the last few years, we've seen more acceptance. Hospital administrators, clinicians, everybody's so excited about AI. They see the possibility; they see the future.

We're continuing to learn how to make this technology easier to scale across very diverse settings, big and small. We've been able to drive even better adoption as the body of results and our approaches have matured. What's even more exciting is the pace at which we're bringing this platform to other leading causes of preventable life-threatening complications that similarly would benefit from early detection and proactive intervention. If you could have foreseen it, you could have proactively done the right thing. It saves time and energy—clinicians can focus on other patients—and helps prevent adverse outcomes. And it actually cuts costs.

How is the software you're developing integrated with the tech we see in hospitals and clinics?

This system is integrated with the electronic health record, and it continuously scans the vast amounts of data on any new patient in real time to identify and flag warning signs. A key thing early on with our work with sepsis is that we didn't want the system crying wolf too much, because then people will just ignore you.

"What's even more exciting is the pace at which we're bringing this AI platform to other leading causes of preventable life-threatening complications that similarly would benefit from early detection and proactive intervention. If you could have foreseen it, you could have proactively done the right thing."

So, what our system does is surface a warning signal in the electronic health record in different places, depending on the unit and workflow of the care team. If one person doesn't see it, another will. What they do next is click and start to see why. What triggered the system? Maybe the patient's temperature went up, and two or three other things occurred that are early signs of decline. The system combs through tons of data and highlights key anomalies. Next, if the clinicians agree, the system makes it very easy to see what's needed to help the patient quickly. And if the patient is transferred from one unit to another, they have an easy-to-read record of what's been done, what's not been done, and how to close the gaps.
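
As a rough sketch of what surfacing the "why" can look like in software (the vitals, thresholds, and scoring below are invented for illustration, and the real platform is machine-learned rather than rule-based): instead of emitting a bare alarm, the alert carries the specific anomalies that triggered it, so a clinician can click through and verify.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    score: int
    reasons: list[str] = field(default_factory=list)

def evaluate(vitals: dict[str, float]) -> Alert | None:
    """Toy rule set: each abnormal vital adds to the score and is
    recorded, so the alert explains why it fired."""
    alert = Alert(score=0)
    if vitals["temp_c"] > 38.3:
        alert.score += 1
        alert.reasons.append(f"fever: {vitals['temp_c']} °C")
    if vitals["heart_rate"] > 110:
        alert.score += 1
        alert.reasons.append(f"tachycardia: {vitals['heart_rate']} bpm")
    if vitals["systolic_bp"] < 90:
        alert.score += 1
        alert.reasons.append(f"hypotension: {vitals['systolic_bp']} mmHg")
    # Require two or more concurrent signs to limit "crying wolf"
    return alert if alert.score >= 2 else None

alert = evaluate({"temp_c": 38.6, "heart_rate": 118, "systolic_bp": 96})
if alert:
    print("Possible deterioration:", "; ".join(alert.reasons))
```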

You wouldn't want a smoke alarm going off every hour. You'd unplug it.

Exactly. My team led a five-year program funded by the National Science Foundation on understanding the different ways to build efficacy and trust in AI technology designed for high-stakes applications like ours.

Our most recent manuscript was on this idea of designing AI communication interfaces for building trust, and we learned you can sometimes build too much trust into a system. If the end user thinks it's accurate 100% of the time, and they do exactly as the system tells them without questioning any of it, that could lead to errors. That's where transparent interfaces come in, so the humans can verify and validate. You don't want to build in overreliance.

What's next for your team?

While we're having a great impact, there are still far more hospitals nationwide that are not yet using our platform, so market penetration through our work at Bayesian is a focus. We're continuing to refine the AI and workflows to optimize our impact. It took us a long time to build this platform and mature this technology in the context of sepsis, but by the end of the year, we will already have results in more than two dozen other conditions related to respiratory-, cardiovascular-, and infection-associated failure-to-rescue scenarios. It's a privilege to be able to bring life-saving innovations to the bedside and help hospitals become safer, more hospitable environments, both for patients and clinicians.

Greg Rienzi is editor of Johns Hopkins Magazine.