Media contact: Maggie Ward, mward67@jh.edu, 724-814-2530
Doctors who use artificial intelligence at work risk having their colleagues deem them less competent for it, according to a recent Johns Hopkins University study.
While generative AI holds significant promise for advancing health care, a new study finds that its use in medical decision-making affects how physicians are perceived by their colleagues. The research shows that doctors who rely primarily on generative AI for decision-making face considerable skepticism from fellow clinicians, who associate their use of AI with a lack of clinical skill and overall competence, resulting in a diminished perceived quality of patient care.
Funded by a 2022 Johns Hopkins Discovery Award, the research included a diverse group of clinicians from a major hospital system, involving attending physicians, residents, fellows, and advanced practice providers. Results of the study were published in August in *Nature Digital Medicine*.
Stigma stunts better care
The findings may indicate a social barrier to AI adoption in health care settings, which could slow advances that might improve patient care.
"AI is already unmistakably part of medicine," says Tinglong Dai, professor of business at the Johns Hopkins Carey Business School and co-corresponding author of the study. "What surprised us is that doctors who use it in making medical decisions can be perceived by their peers as less capable. That kind of stigma, not the technology itself, may be an obstacle to better care."
The study, conducted by researchers at Johns Hopkins University, involved a randomized experiment in which 276 practicing clinicians evaluated different scenarios: a physician using no AI, one using AI as a primary decision-making tool, and another using it for verification. The research found that the more physicians depended on AI, the greater the "competence penalty" they faced, meaning they were viewed more skeptically by their peers than physicians who did not rely on AI.
"In the age of AI, human psychology remains the ultimate variable," says Haiyang Yang, first author of the study and academic program director of the Masters of Science in Management program at the Carey Business School. "The way people perceive AI use can matter just as much as, or even more than, the performance of the technology itself."
Skipping AI equaled more respect
According to the study, peer perception suffers for doctors who rely on AI. Framing generative AI as a "second opinion" or a verification tool partially improved negative perceptions from peers, but it did not fully eliminate them. Not using GenAI, however, resulted in the most favorable peer perceptions.
The findings align with theories that suggest perceived dependence on an external source like AI can be seen as a weakness by clinicians.
Ironically, while GenAI's visible use can undermine a physician's perceived clinical expertise among peers, the study also found that clinicians still generally acknowledge the value of GenAI for improving the accuracy of clinical assessments, and they view institutionally customized GenAI as even more useful.
The collaborative nature of the study led to thoughtful suggestions for implementing GenAI in health care settings in ways that balance innovation with maintaining professional trust and physician reputation, the researchers note.
"Physicians place a high value on clinical expertise, and as AI becomes part of the future of medicine, it's important to recognize its potential to complement—not replace—clinical judgment, ultimately strengthening decision making and improving patient care," said Risa Wolf, co-corresponding author of the research and associate professor of pediatric endocrinology at Johns Hopkins School of Medicine with a joint appointment at the Carey Business School.
