With support from the National Institutes of Health, Johns Hopkins electrical and computer engineer Mounya Elhilali is developing a smart digital stethoscope aimed at revolutionizing pulmonary diagnostics, particularly for children. Acute lower respiratory infections are the leading cause of childhood mortality worldwide, claiming more than two million young lives each year.
The $2.3 million grant was awarded in 2022 by NIH's National Heart, Lung, and Blood Institute to Elhilali, a professor in the Whiting School of Engineering, and Johns Hopkins colleagues James West, a professor of electrical and computer engineering, and Eric McCollum, an associate professor of pediatrics at the Johns Hopkins School of Medicine.
It's one of several potentially lifesaving projects Elhilali is pursuing to help people tune in more effectively in a noisy world—not just to conversations but also to the vital sounds used to diagnose disease and the cues that help people with hearing challenges navigate their surroundings.
"With NIH support, we are applying what we learn about how the brain filters sounds in noisy environments to create smarter tools that have a real impact," says Elhilali, who is also founder of the university's Laboratory for Computational Audio Perception (LCAP).
The smart digital stethoscope has the potential to assist in the diagnosis of respiratory diseases, where speed and accuracy can mean the difference between life and death for people around the globe.
"Traditional stethoscopes, while widely used, are often unreliable due to interference from noise, the need for expert interpretation, and subjective variability," she says. "Our research introduces new, advanced listening technology that combines artificial intelligence with new sensing materials to make diagnoses more accurate."
The stethoscope uses a special material that can be tuned to muffle or block background noise, making lung sounds clearer. It then analyzes breathing sounds with AI, combining acoustic models of the airways with neural networks. Elhilali and her team trained their algorithm on recordings from more than 1,500 patients across Asia and Africa. The technology is currently being used in clinical studies in Bangladesh, Malawi, South Africa, and Guatemala, as well as in the pediatric emergency room at Johns Hopkins Children's Center.
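The team's exact methods are not described in detail here, but the general front-end idea, suppressing steady ambient noise before any AI analysis, can be sketched in a few lines. The sketch below is purely illustrative: it uses generic spectral gating with made-up parameters, not Sonavi Labs' or the Hopkins team's actual algorithm.

```python
# Illustrative sketch only: generic spectral-gating noise suppression
# of the kind a digital stethoscope front-end might apply before
# AI analysis. All parameters are hypothetical, not the Feelix design.
import numpy as np
from scipy.signal import stft, istft

def denoise_lung_audio(audio, fs=4000, noise_secs=0.5, nperseg=256):
    """Attenuate steady background noise via spectral gating.

    Assumes (for illustration) that the first `noise_secs` of the
    recording contain ambient noise only.
    """
    f, t, Z = stft(audio, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)

    # Per-frequency noise floor estimated from the leading frames.
    hop = nperseg // 2
    noise_frames = max(1, int(noise_secs * fs / hop))
    noise_floor = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    # Gate: keep bins that rise clearly above the floor, damp the rest.
    gain = np.where(mag > 2.0 * noise_floor, 1.0, 0.1)
    _, clean = istft(gain * mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return clean

# Synthetic demo: half a second of noise, then a low-frequency "breath" tone.
fs = 4000
noise = 0.05 * np.random.randn(2 * fs)
tone = np.concatenate([np.zeros(fs), np.sin(2 * np.pi * 150 * np.arange(fs) / fs)])
cleaned = denoise_lung_audio(noise + tone, fs=fs)
```

In a real device, the noise estimate would adapt continuously rather than come from a fixed lead-in window, which is one reason tunable sensing materials and learned models matter here.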

Image caption: Mounya Elhilali with postdoc Alex Gaudio (Image credit: Will Kirk / Johns Hopkins University)
A version of the device, now called Feelix, has received FDA approval and is being marketed by the Baltimore spinoff Sonavi Labs.
"This technology is already making an impact," Elhilali says. "It is being used in rural clinics, mobile health units, and large hospitals to assist emergency responders and health care providers around the world to provide rapid, cost-effective pulmonary assessments, especially in areas with limited access to imaging tools like X-rays and ultrasounds."
Beyond medical diagnostics, Elhilali's research explores how the brain processes sound in noisy environments, a phenomenon known as the cocktail party effect. Another NIH-supported project tackled this problem using a new adaptive theory of auditory perception: with support from the National Institute on Aging, Elhilali explored how neural plasticity allows our brains to adapt to the changing soundscape around us by balancing what our senses hear against our mental and attentional states.
"By incorporating insights from brain science, this research examines auditory perception in young and aging brains and has the potential to bridge a gap between traditional hearing aids and truly intelligent auditory devices," she said.
Elhilali is using brain recordings from both humans and animals to better understand how brain circuits isolate important sounds from background noise. In a recent study published in Nature Communications Biology, she and her colleagues discovered that when a listener focuses on a particular speaker, the brain synchronizes its activity to the timing and sound features of that voice, a mechanism known as temporal coherence.
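As a toy illustration of what temporal coherence means in practice (not the study's actual analysis), one can check that a simulated neural signal correlates far more strongly with the amplitude envelope of the attended voice than with that of a competing talker. All signals and numbers below are synthetic.

```python
# Toy illustration of temporal coherence, not the study's analysis:
# a neural response that "locks on" to the attended voice should
# track that voice's slow amplitude envelope far better than a
# competing talker's. All signals here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 100, 10          # 100 Hz envelope sampling, 10 seconds
n = fs * dur

def slow_envelope(x, k=20):
    """Smooth rectified noise to mimic a slow speech envelope."""
    return np.convolve(np.abs(x), np.ones(k) / k, mode="same")

attended = slow_envelope(rng.standard_normal(n))
competing = slow_envelope(rng.standard_normal(n))

# Simulated neural signal: tracks the attended envelope, plus noise.
neural = 0.8 * attended + 0.5 * rng.standard_normal(n)

def coherence_score(a, b):
    """Pearson correlation as a crude stand-in for temporal coherence."""
    return np.corrcoef(a, b)[0, 1]

print("attended :", coherence_score(neural, attended))   # clearly positive
print("competing:", coherence_score(neural, competing))  # near zero
```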
"This discovery provides crucial insights into selective hearing and has significant implications for improving speech recognition technology," Elhilali says. "By applying these principles, future hearing aids and communication devices could be designed to better filter unwanted noise, benefiting individuals with auditory processing challenges."