Study: Common hospital safety measures are often misleading to public

Johns Hopkins institute suggests reform for public rating systems

The most common measures used to rank hospital safety are often inaccurate and misleading, according to new research from the Johns Hopkins Armstrong Institute for Patient Safety and Quality.

Image caption: Peter Pronovost

The study, published in the journal Medical Care, finds fault with the common practice of relying on billing data rather than clinical data to measure hospital safety.

Researchers evaluated measures for hospital safety used by common public ranking systems, including U.S. News & World Report, Leapfrog's Hospital Safety Score, and the Centers for Medicare & Medicaid Services' (CMS) Star Ratings. The team found only one measure out of 21 that met scientific criteria for being considered a true indicator of hospital safety.

"These measures have the ability to misinform patients, misclassify hospitals, misapply financial data and cause unwarranted reputational harm to hospitals," says lead study author Bradford Winters, professor of anesthesiology and critical care medicine at Johns Hopkins. "If the measures don't hold up to the latest science, then we need to re-evaluate whether we should be using them to compare hospitals."

Many hospitals report their performance using measures created by CMS and the Agency for Healthcare Research and Quality (AHRQ) over a decade ago. These measures—known as patient safety indicators (PSIs) and hospital-acquired conditions (HACs)—use billing data entered by hospital administrators, rather than clinical data obtained from patient medical records. The result, Hopkins researchers say, can be extreme variation in the way different hospitals code medical errors.

"The variation in coding severely limits our ability to count safety events and draw conclusions about the quality of care between hospitals," says study author Peter Pronovost, director of the Johns Hopkins Armstrong Institute for Patient Safety and Quality. "Patients should have measures that reflect how well we care for patients, not how well we code that care."

The researchers analyzed 19 studies conducted between 1990 and 2015, comparing errors documented in medical records with billing codes found in administrative databases. If the medical record and the administrative database agreed at least 80 percent of the time, the measure was considered a realistic portrayal of hospital performance.
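To illustrate that validity criterion, the short sketch below shows how an agreement rate between billing codes and medical records might be computed for a single measure. The data, variable names, and event-level comparison are hypothetical assumptions for demonstration only, not the study's actual methodology, which synthesized published comparisons across the 19 studies.

# Illustrative sketch (hypothetical data): check whether billing-code
# flags for one safety measure agree with medical-record review at
# least 80 percent of the time, the criterion described above.

AGREEMENT_THRESHOLD = 0.80

# Each pair: (flagged_in_billing_data, confirmed_in_medical_record)
cases = [
    (True, True), (True, False), (True, True), (True, True),
    (True, True), (True, False), (True, True), (True, True),
    (True, True), (True, True),
]

# Among cases the billing data flagged, what fraction did the
# medical record confirm? (This is a positive predictive value.)
confirmed = [record for billed, record in cases if billed]
agreement_rate = sum(confirmed) / len(confirmed) if confirmed else 0.0

print(f"Agreement with medical records: {agreement_rate:.0%}")
print("Meets validity criterion" if agreement_rate >= AGREEMENT_THRESHOLD
      else "Fails validity criterion")

In this made-up example, 8 of the 10 billing-flagged events are confirmed by the medical record, so the hypothetical measure just clears the 80 percent threshold.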

Of the 21 measures developed by AHRQ and CMS, 16 had insufficient data and couldn't be evaluated for validity. Of the remaining five, only one—measuring accidental punctures or lacerations sustained during surgery—met the researchers' criteria for validity.

"Despite their broad use in pay for performance and public reporting, these measures no longer represent the gold standard for quality," Pronovost says.

The researchers say they hope their work will lead to reform and encourage public rating systems to use measures based on clinical rather than billing data. Pronovost also outlined additional solutions the rating community could explore in a commentary published last month in JAMA.
