Study took wrong approach
Article Type
Changed
Thu, 03/28/2019 - 15:56
Display Headline
Trauma center rankings differ by mortality, morbidity

SAN FRANCISCO – Trauma centers are ranked on the basis of in-hospital mortality rates, but pay-for-performance programs will benchmark them based on in-hospital complications – and there is poor concordance between the two measures, a study of data from 248 trauma centers suggests.

Investigators used data on 449,743 patients aged 16 years or older who had blunt or penetrating injuries and an Injury Severity Score of 9 or higher to generate risk-adjusted, observed-to-expected mortality ratios for each trauma center. They ranked each facility as a high-performing, average, or low-performing center based on mortality, then ranked the centers again based on observed-to-expected morbidity ratios derived from complication rates.

Only 40% of centers received the same performance ranking under the two measures, Dr. Zain G. Hashmi and his associates reported at the annual meeting of the American Association for the Surgery of Trauma.

When each performance ranking was divided into quintiles, the two rankings diverged by at least one quintile for 79% of trauma centers; only 21% were assigned the same quintile rank in the mortality benchmarking as in the morbidity benchmarking. Rankings diverged by two quintiles for 21% of centers and by three quintiles for 23%, said Dr. Hashmi, a research fellow at Johns Hopkins University, Baltimore.

Overall, the unadjusted mortality rate was 7% and the morbidity rate was 10%. The most frequent complications were pneumonia in 4%, acute respiratory distress syndrome in 2%, and deep venous thrombosis in 2%.

The complications used for the morbidity benchmarking included pneumonia, deep venous thrombosis, acute respiratory distress syndrome, acute renal failure, sepsis, pulmonary embolism, decubitus ulcer, surgical site infection, myocardial infarction, cardiac arrest, unplanned intubation, and stroke.

The Centers for Medicare and Medicaid Services is implementing pay-for-performance programs in the public health sector nationwide under the Affordable Care Act to reward high-quality care and penalize low-quality care. The programs may soon be extended to trauma care, which could incorrectly penalize centers that rank among the best performers on mortality benchmarks, he said.

"We need to develop more appropriate measures of trauma quality before pay-for-performance" programs come to trauma centers, perhaps using multiple quality indicators such as mortality, length of stay, complications, and failure to rescue, he said.

Data for the study came from the National Trauma Data Bank for 2007-2010.

Dr. Hashmi reported having no financial disclosures.

[email protected]

On Twitter @sherryboschert

Body

The authors reached the very predictable conclusion that the two benchmarking approaches have no correlation whatsoever. They did not quite go so far as to embrace the other obvious conclusion: that, in fact, neither benchmark encompasses, or perhaps even approximates, the quality of care given at an individual center. And they don’t really offer us an alternative.

We’ve been seeking the best way to measure the quality of care for the injured patient for decades, long before the concepts of "pay for performance" or "value-based purchasing" became part of our daily lives. One thing we have certainly learned is that quality is a complex, nuanced, and maybe even elusive concept, sort of like one of Plato’s forms – we can’t see it directly, and we have to figure out what it is by the shadows it casts.


Dr. Robert Winchell

Unfortunately, before you can really measure something, you do have to know a little bit about what it is you’re trying to measure. Otherwise, you’re likely to pick the wrong tool. For reasons of obvious practicality, the approach that is most commonly taken, just like the approach in this paper, is to measure the things we can, perhaps in very, very sophisticated ways, and then try somehow to take that result and connect it in some way to that elusive concept, quality.

If nothing else, this paper illustrates the weakness inherent in that approach. Without going into the potential methodological flaws, I would submit that the hypothesis is poorly focused. There was no observed concordance between mortality and morbidity because there is no reason to expect that there should be. They measure entirely different things, and neither one of those things is necessarily very much connected to quality, which is really what we’d like to get a handle on.

The better approach, I’d suggest, is to postulate, a priori, a definition of what quality might be or at least a set of characteristics that might represent quality, and then set about to measure against that model.

Dr. Robert Winchell is a surgeon at Maine Medical Center in Portland. These are excerpts of his remarks as discussant of the study at the meeting. He reported having no financial disclosures.

Meeting/Event
Author and Disclosure Information

Publications
Topics
Legacy Keywords
Trauma centers, in-hospital mortality rates, pay-for-performance
Article Source

AT THE AAST ANNUAL MEETING

PURLs Copyright

Inside the Article

Vitals

Major finding: Only 40% of trauma centers received the same ranking when judged by mortality or morbidity rates.

Data source: Retrospective analysis that ranked 238 centers as high, average, or low performing, based on data on 449,743 patients with blunt/penetrating injuries and an Injury Severity Score of 9 or higher.

Disclosures: Dr. Hashmi reported having no financial disclosures.