In a bid to clean up misleading institutional safety
comparisons and to fix the underlying safety
problems, Johns Hopkins experts are proposing standard
guidelines for use as hospital safety
rating tools.
"Hospitals are increasingly reporting patient safety
data on their Web sites," said Peter
Pronovost, medical director of the Johns
Hopkins Center for Innovation in Quality Patient Care.
"While this is long overdue, the data is only helpful if
it's accurate. The absence of proper oversight in
measuring and reporting patient safety not only could mean
some problems aren't being fixed but also
that the public is potentially being misled."
In an article published in the Nov. 7 issue of the
Journal of the American Medical Association,
Pronovost, an anesthesiologist and critical care
specialist, and a team of Johns Hopkins researchers
adapted elements of the well-known Users' Guide to the
Medical Literature: A Manual for Evidence-
Based Clinical Practice to construct what they say are
guidelines that hospitals can use to ensure
validity and accuracy in patient safety reporting.
"The guide has been used successfully for years to
help clinicians evaluate the validity and
accuracy of research data they might want to use in their
own practice," Pronovost said. "We propose
using the same principles to evaluate the validity and
accuracy of the methods used by an institution
to gauge patient safety."
Like the clinical practice assessments, the new
guidelines, Pronovost says, address three key
questions: Are the measures important? Are they valid? And
are they useful for the goal intended, in
this case to improve safety in health care
organizations?
These larger concepts are addressed in an assessment
tool that comprises some 30 questions,
such as: Is the measure required by an external group or
agency? Is the measure supported by empirical
evidence or a consensus of experts? Does the measure have
face validity (that is, do clinicians believe that
improvement in performance on the measure will be
associated with improved patient outcomes)? Is
the risk for selection bias minimized?
Patient safety reporting came to the forefront in 1999
after the Institute of Medicine issued
its report "To Err Is Human," which documented widespread
risk to patients. In response, the Centers
for Medicare and Medicaid Services and the Joint Commission
began requiring all hospitals to submit
annual patient safety reports.
The problem with these reports, according to Pronovost, is
that they are essentially "snapshots" rather than
long-term system analyses. For example, they
would identify whether pneumonia patients
received antibiotics within a specific time frame, or if
statins were administered to heart attack
patients. In an article published in the Oct. 17 issue of
the Journal of the American Medical
Association, Pronovost and his team illustrated some of
the limitations of this type of reporting.
"One institution advertised on its Web site that its
rate of staph infection is zero but did not
say how many people were sampled or whether this represents
one month of results or 10 years,"
Pronovost said.
Examples of problematic reports on health care
organization Web sites are easy to find. A quick
and unscientific search of the Internet revealed many
examples. One hospital reported that it saved
242 lives over 18 months (four lives/1,000 discharges), but
the sample size, methods of risk
adjustment and a measure of precision (e.g., confidence
intervals) for the mortality estimates were
not given. Another hospital Web site stated that 90 percent
of pneumonia patients were screened and
given pneumococcal vaccination, while the CMS's Hospital
Compare Web site (www.hospitalcompare.hhs.gov) on the
same day reported that 64 percent of
patients were vaccinated.
"It's essentially a snapshot of care," Pronovost said.
"To best assess the current level of safety,
what's being done to improve it and whether it's getting
better, we need all the elements that make up
the big picture."
Sean Berenholtz, of the
Department of Anesthesiology and Critical Care
Medicine, and Dale
Needham, of the
Department of Pulmonary and Critical Care Medicine,
also contributed to the Nov. 7
JAMA article.