Hospitals participating in a monitoring and feedback program for surgical quality showed no more improvement in patient mortality, serious complications, reoperation, or readmission than hospitals not participating in the program, according to two separate reports published online Feb. 3 in JAMA.
Both research groups concluded that outcomes feedback alone may not be sufficient to improve surgical quality.
The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) is an extensive clinical registry that provides participating hospitals with detailed descriptions of outcomes such as mortality, length of stay, and complications, allowing them to benchmark their performance relative to other participating hospitals and focus their efforts to improve care on the areas in which they perform poorly. The information is not reported publicly. Proponents contend that this targeting has already improved surgical outcomes as reported in several single-center studies, but others argue that any improvements noted so far might have occurred over time anyway.
The best way to examine the question would be to compare outcomes between participating and nonparticipating hospitals, according to the two groups of investigators who did just that in these studies. However, the American College of Surgeons took issue with both study designs and released a statement taking exception to their approach to measuring surgical complications.
In the first study, Dr. Nicholas H. Osborne, a vascular surgeon at the Center for Healthcare Outcomes and Policy, University of Michigan, Ann Arbor, and his associates analyzed 30-day outcomes during a 10-year period at 263 hospitals participating in the ACS NSQIP and 526 propensity-matched nonparticipating hospitals across the United States. They focused on patients aged 65-99 years undergoing 11 high-risk general or vascular surgical procedures considered most in need of quality improvement: esophagectomy, pancreatic resection, colon resection, gastrectomy, liver resection, ventral hernia repair, cholecystectomy, appendectomy, abdominal aortic aneurysm repair, lower-extremity bypass, and carotid endarterectomy.
They found “slight trends toward improved outcomes” in NSQIP hospitals over time, but control hospitals showed the same trends. For example, 30-day mortality declined from 4.6% to 4.2% in participating hospitals during the study period, and similarly declined from 4.9% to 4.6% in nonparticipating hospitals. However, further analysis showed no statistically significant reductions after enrollment in the NSQIP in 30-day mortality, serious complications, reoperations, or readmissions, Dr. Osborne and his associates said (JAMA 2015 Feb. 3 [ doi:10.1001/jama.2015.25 ]).
The underlying reasons for a lack of improvement among participating hospitals aren’t yet known, but it is possible that hospitals never implemented quality improvement efforts after being informed of their shortcomings, or that they implemented ineffective remedies. “Clinical quality improvement is challenging for hospitals. Changing physician practice requires complex, sustained, multifaceted interventions, and most hospitals may not have the expertise or resources to launch effective quality improvement interventions,” Dr. Osborne and his associates added.
In the second study, Dr. David A. Etzioni, a surgeon at Mayo Clinic Arizona, Phoenix, and the Kern Center for the Science of Health Care Delivery, and his associates analyzed surgical outcomes over a 4-year period among 113 academic hospitals in a health care system database; 39% of these hospitals participated in the NSQIP and received feedback on their performance, and the remaining 61% did not. The study evaluated 345,357 hospitalizations for 16 elective general and vascular procedures, including many of those in Dr. Osborne's study plus mastectomy, thyroid procedures, open or laparoscopic colectomy, prostatectomy, and bariatric procedures.
This study also showed a slight decrease over time in postoperative complications, serious complications, and mortality at both NSQIP and non-NSQIP hospitals. “After accounting for patient risk, procedure type, underlying hospital performance, and temporal trends, the [statistical] model demonstrated no significant differences over time between NSQIP and non-NSQIP hospitals in terms of likelihood of complications, serious complications, or mortality,” Dr. Etzioni and his associates said (JAMA 2015 Feb. 3 [ doi:10.1001/jama.2015.90 ]).
Their findings indicate that quality reports do not necessarily translate into evidence-based strategies for improvement and "suggest that a surgical outcomes reporting system does not provide a clear mechanism for quality improvement," they noted.
In response to these reports, the American College of Surgeons released a statement emphasizing that claims data such as those used by both Osborne et al. and Etzioni et al. “are inaccurate and inappropriate for measuring surgical complications.” Furthermore, Dr. Clifford Ko, ACS director of the division of research and optimal patient care, called it “irresponsible to use data that are known to be an inaccurate measure of quality to determine the effectiveness of a quality improvement program.”
In addition, real-world experience shows that hospitals tend to focus on specific complications one at a time (such as surgical site infections) rather than amalgamating all complications. Hospitals also tend to address performance by separate specialties (such as urology) rather than on particular procedures (such as prostatectomy), according to the ACS statement.
Dr. Osborne’s study was supported in part by the National Institute on Aging. Dr. Osborne reported having no financial disclosures; one of his associates reported ties to Arbor Metrix. Dr. Etzioni’s study did not list any sources of financial support. Dr. Etzioni and his associates reported having no financial disclosures.