Image caption: Suchi Saria (Credit: Will Kirk / Johns Hopkins University)

How machine learning can help optimize treatment for septic shock

By using reinforcement learning, researchers train a virtual agent to determine the best time to administer medication based on a variety of patient-specific factors

A multi-institutional research team has demonstrated how AI and machine learning can optimize therapy selection and dosing for septic shock, a life-threatening complication that is the leading cause of hospital deaths.

The team includes Johns Hopkins University's Suchi Saria, who previously developed an AI-powered early warning system that is reducing sepsis mortality rates in dozens of hospitals across the United States. Their results appear in The Journal of the American Medical Association.

Sepsis often causes low blood pressure that may result in life-threatening organ dysfunction and accounts for more than 270,000 U.S. deaths annually. Emergency treatment includes administering fluids and various vasopressors, agents that constrict the blood vessels, to raise the patient's blood pressure to normal levels and restore the flow of blood and oxygen to their organs.

"How best to individualize blood pressure treatment with different therapies remains a complicated open challenge," says senior author Romain Pirracchio, professor of anesthesia and perioperative care at the University of California, San Francisco.

"With this kind of infrastructure, instead of doing three experiments at a time, we're doing a thousand experiments at a time—but we're not even doing experiments; we're learning from existing data."
Suchi Saria
Professor of computer science

International guidelines recommend using norepinephrine, a medication designed to raise blood pressure, before moving on to vasopressin, a blood pressure-raising hormone, if a patient's blood pressure remains too low. However, septic shock is a condition that changes rapidly and continuously, complicating the decision about whether and when to start vasopressin. What's more, vasopressin is extremely potent—meaning that starting it too early can cause severe side effects.

"To find the optimal time to begin administering vasopressin, traditionally we'd posit very specific criteria and run a clinical trial—costing millions of dollars and lasting years—comparing those criteria against the standard of care. But this only allows us to test one criterion at a time," says Saria, professor of computer science at JHU's Whiting School of Engineering and a professor of statistics and health policy at the university's Bloomberg School of Public Health. "Turns out, there's a much better way: using reinforcement learning."

Reinforcement learning is a branch of machine learning in which a virtual agent learns from trial and error to maximize the probability of a good outcome. Using electronic medical records from more than 3,500 patients across various hospitals and public datasets, the research team trained a reinforcement learning model to consider individuals' blood pressure, organ dysfunction scores, and other medications being taken to determine when to begin vasopressin.
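To make the idea concrete, the following is a minimal, hypothetical sketch (not the team's actual model) of a tabular Q-learning agent that learns from logged patient snapshots whether to wait or to start vasopressin. The state features (a blood-pressure bin, an organ-dysfunction bin, and a norepinephrine flag), the reward values, and the synthetic training data are all illustrative assumptions.

```python
# Illustrative sketch only: a toy Q-learning agent for "wait" vs. "start vasopressin".
# The features, thresholds, rewards, and data are assumptions, not the study's model.
import random
from collections import defaultdict

ACTIONS = ("wait", "start_vasopressin")
GAMMA, ALPHA = 0.99, 0.1  # discount factor and learning rate

def discretize(map_bp, sofa, on_norepi):
    """Bucket a patient snapshot (mean arterial pressure, organ-dysfunction
    score, norepinephrine flag) into a coarse discrete state."""
    bp_bin = 0 if map_bp < 60 else (1 if map_bp < 65 else 2)
    sofa_bin = 0 if sofa < 6 else (1 if sofa < 10 else 2)
    return (bp_bin, sofa_bin, int(on_norepi))

Q = defaultdict(float)  # (state, action) -> estimated long-run value

def q_update(state, action, reward, next_state, terminal):
    """One temporal-difference update from a single logged transition."""
    best_next = 0.0 if terminal else max(Q[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

def recommend(state):
    """Greedy recommendation once training is done."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])

if __name__ == "__main__":
    random.seed(0)
    # Train on synthetic "logged" transitions standing in for retrospective records.
    for _ in range(50_000):
        s = discretize(random.uniform(45, 80), random.randint(0, 15), random.random() < 0.8)
        a = random.choice(ACTIONS)
        low_bp = s[0] == 0
        # Toy reward: escalating during a low-pressure state pays off; otherwise waiting does.
        r = 1.0 if (low_bp and a == "start_vasopressin") or (not low_bp and a == "wait") else -1.0
        s2 = discretize(random.uniform(55, 85), random.randint(0, 15), True)
        q_update(s, a, r, s2, terminal=random.random() < 0.1)
    print(recommend(discretize(map_bp=58, sofa=11, on_norepi=True)))
```

In the study itself, the learned values came from real electronic medical records rather than simulated transitions, and the state captured far richer patient information than this three-feature toy example.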

The researchers then validated their model on unseen data from nearly 11,000 additional patients to confirm the algorithm's effectiveness and estimate whether acting on its recommendations would have reduced in-hospital mortality.

"There was a substantial number of patients who were started on vasopressin exactly when our algorithm would have recommended it if it had been live," Pirracchio says. "So, using complex statistical methods to account for bias and differences in baselines, we were able to show that treatment matching with exactly what the algorithm suggested—in other words, starting at the exact right time—was consistently associated with a better outcome in terms of mortality."

The model consistently recommended starting vasopressin earlier than most physicians did in practice, but in the few cases where the drug was administered even earlier than the algorithm recommended, patient outcomes were worse.

"This shows that there's virtue in trying to individualize the strategy to each patient," Pirracchio says. "There's no one-size-fits-all rule—in septic shock, there is substantial variability in resuscitation practices between hospitals and in different countries, especially regarding vasopressor support. Given the diversity of the population included in this study, the results show that an individualized vasopressin initiation rule can improve the outcome of patients with septic shock."

The next step will be to implement the model in practice, going "from promise to reality," as Saria puts it.

Pirracchio and his team will be doing just that at the UCSF Medical Center before scaling to centers nationally in partnership with Bayesian Health, a clinical AI platform spun out of Saria's research. But the applications of reinforcement learning in health care don't stop with vasopressor administration.

"With this kind of infrastructure, instead of doing three experiments at a time, we're doing a thousand experiments at a time—but we're not even doing experiments; we're learning from existing data," Saria says. "It's almost like the experiment was already done, for free, and we just get to learn from it and intelligently discover the precise contexts in which different strategies should be implemented to improve patient outcomes and save lives.

"There are lots of opportunities here for reinforcement learning; this is only the start."