CINCINNATI — Quality improvement programs at hospitals might report significantly different rates of risk-adjusted comorbidities and outcomes for surgical patients, according to a retrospective analysis of two programs within one health system.
The risk-adjusted mortality rates calculated by the American College of Surgeons' National Surgical Quality Improvement Program (NSQIP) and the University HealthSystem Consortium (UHC) for the general and vascular surgery services of the Ohio State University health system differed for “pretty much the same patient population over the same time period,” Dr. Steven M. Steinberg said at the annual meeting of the Central Surgical Association.
Dr. Steinberg, chief of the division of critical care, trauma, and burn in the department of surgery at Ohio State, and his coinvestigators compared the NSQIP records of 120 consecutive general and vascular surgery inpatients with the matching records submitted to UHC between January and June 2006.
NSQIP provides a prospective database of 30-day, risk-adjusted surgical outcome data on inpatients and outpatients from participating hospitals.
UHC's membership of 101 academic medical centers and 170 of their affiliated hospitals includes about 90% of nonprofit academic medical centers. UHC uses the Centers for Medicare and Medicaid Services' system for classifying severity of illness, the All Patient Refined Diagnosis Related Groups (APR-DRGs).
“From our point of view, [UHC's methodology] is somewhat more complex than the NSQIP methodology,” Dr. Steinberg said.
According to NSQIP, Ohio State's ratio of observed to expected mortality was 0.76, placing it in the top quartile. But UHC calculated a ratio of 1.45, putting it in the bottom quartile. A ratio of less than 1 indicates that a hospital is performing better than expected, given the complexity of its patient population and surgical cases.
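In rough terms, the ratio is simply observed deaths divided by the deaths the risk-adjustment model predicts. The counts in this minimal sketch are hypothetical, chosen only to reproduce the reported NSQIP ratio; they are not data from the study:

    # Hypothetical counts, for illustration only -- not figures from the study.
    observed_deaths = 19       # deaths actually recorded on the service
    expected_deaths = 25.0     # deaths predicted by the risk-adjustment model
    oe_ratio = observed_deaths / expected_deaths
    print(round(oe_ratio, 2))  # 0.76: fewer deaths occurred than the model predicted

Because the expected count comes from each program's own risk model, two programs can score the same observed deaths very differently.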
Overall, NSQIP tallied significantly fewer risk-adjusted comorbidities per patient than did UHC (1.38 vs. 2.85). The two programs also returned discordant rates of hypertension (47% for NSQIP vs. 43% for UHC), diabetes (11% vs. 14%), cardiac comorbidities (10% vs. 12%), and pulmonary comorbidities (18% vs. 23%).
Significant discordance also occurred between NSQIP and UHC results for all complications combined (28% vs. 11%).
“Clearly, not all risk adjustment is the same. Both NSQIP and the University HealthSystem Consortium risk adjustment of data cannot be correct at our institution because they are so different,” Dr. Steinberg said. “From my point of view, NSQIP has more face validity than the UHC system, not just because we did better [on NSQIP] but because it's something that I can understand, whereas I have great difficulty in being able to understand the UHC process.”
Several audience members said the results illustrate the problems with evaluating outcomes through retrospective analyses of administrative data sets rather than through prospective databases maintained by a trained, dedicated nurse, as NSQIP's is.
The difference in the ratio of observed to expected mortality between these quality improvement programs could be attributable to a number of factors:
▸ Problems with documentation and coding (although this is unlikely, according to Dr. Steinberg).
▸ Differences in the participation of medical centers in each quality improvement program (although 56 centers participate in both NSQIP and UHC).
▸ Possible incorrect classification; for example, UHC defines a service line by ICD-9 codes, not by whether a patient was ever actually on that service (see the sketch after this list).
▸ Differences in the programs' risk-adjustment methodologies.
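On the classification point, here is a minimal sketch of how a purely code-based service-line assignment can diverge from the service that actually cared for the patient. The code list, assignment rule, and patient record below are illustrative assumptions, not UHC's actual methodology:

    # Illustrative only: a toy version of assigning a service line from
    # ICD-9 diagnosis codes alone, the way an administrative system might.
    VASCULAR_CODES = {"440.21", "441.4"}  # example vascular diagnosis codes

    def service_line(icd9_codes):
        """Any vascular diagnosis code puts the case on the vascular line."""
        return "vascular" if VASCULAR_CODES & set(icd9_codes) else "general"

    # A patient carrying a vascular diagnosis but managed entirely by the
    # medicine service is still counted against vascular surgery.
    patient = {"codes": ["440.21"], "actual_service": "medicine"}
    print(service_line(patient["codes"]), "vs.", patient["actual_service"])
    # -> vascular vs. medicine

Under such a rule, deaths and complications can be attributed to a surgical service that never treated the patient, which would distort that service's observed-to-expected ratios.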