Beware Hospital Compare? New Measures Highlight Questions Surrounding Healthcare Quality Report Cards

To get where we want to go in American healthcare, we need a more thoroughly supported measure development infrastructure.


—Anne-Marie J. Audet, MD, MSc, SM, vice president, Health System Quality and Efficiency, Commonwealth Fund

The Centers for Medicare & Medicaid Services (CMS) has been publicly reporting performance measures on its Hospital Compare website (www.hospitalcompare.hhs.gov) since 2005, focusing on processes of care, patient outcomes, patient satisfaction, patient safety, and other measures. A recent addition of patient-safety metrics has rekindled skeptical questions about the validity, purpose, and effectiveness of public healthcare quality report cards, while highlighting the need for hospitalists and their institutions to remain vigilant in the struggle to ensure that they are compared and rewarded fairly and appropriately.

Provocative Measures

Last fall, CMS began posting “Serious Complications and Deaths” measures, developed by the Agency for Healthcare Research and Quality (AHRQ). The measures score individual hospitals according to the rates at which their patients suffer from:

  • Pneumothorax due to medical treatment;
  • Post-operative VTE;
  • Post-operative abdominal or pelvic dehiscence; and
  • Accidental lacerations from medical treatment.

Four other serious complication measures (pressure ulcers, catheter and bloodstream infections, and hip fractures from falling after surgery) are folded into a separate composite score for each hospital, while another composite score for “Deaths for Certain Conditions” is based on a hospital’s post-admission mortality rate for hip fractures, acute MI, heart failure, stroke, GI bleed, and pneumonia.

National and local media reports have thrust these dramatic metrics into the public eye, putting many hospitals on the spot to explain their putative breaches of patient safety. A closer inspection of the metrics, however, reveals plausible criticisms of their shortcomings.

Methodological Weakness

The new metrics are derived from Medicare claims data instead of medical chart abstractions, which experts say weakens their validity significantly and makes their use for provider profiling questionable. Moreover, claims data are based on records that were never designed to capture the sort of clinical nuances needed for valid and equitable risk adjustment (see “Methodological Challenges to Quality Metrics,” below). “Serious Complications and Deaths” rates based on these data, critics maintain, lack validity for meaningful hospital comparisons because they can exaggerate problems at hospitals that treat a high volume of complicated patients and use more invasive procedures to do so, such as teaching hospitals in academic medical centers.1
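The risk-adjustment critique above can be made concrete with a small sketch. This is not AHRQ's actual methodology, only an illustration of the common observed-to-expected (O/E) ratio approach, in which each patient's expected risk comes from a model fit on clinical covariates; the hospital case mixes and risk values below are entirely hypothetical.

```python
def oe_ratio(observed_events, expected_risks):
    """Observed-to-expected ratio for a complication measure.

    A ratio above 1.0 suggests more events than the hospital's case
    mix predicts; below 1.0, fewer. expected_risks is a list of
    per-patient predicted probabilities of the complication.
    """
    expected = sum(expected_risks)
    if expected <= 0:
        raise ValueError("expected events must be positive")
    return observed_events / expected

# Two hypothetical hospitals with the SAME raw count of 8 complications
# among 200 patients, but very different case mixes:
community = oe_ratio(8, [0.02] * 200)  # low-risk patients: expected = 4
teaching = oe_ratio(8, [0.08] * 200)   # high-risk patients: expected = 16

print(round(community, 2))  # 2.0 -> twice as many events as expected
print(round(teaching, 2))   # 0.5 -> half as many events as expected
```

The point of the sketch is the critics' argument in miniature: an unadjusted rate (8 per 200 at both hospitals) would rank the two hospitals identically, while a case-mix-aware comparison can reverse the ranking. Claims data often lack the clinical variables needed to estimate the expected risks well.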

The ante gets upped when CMS eventually begins adding patient-safety measures to the Hospital Value-Based Purchasing (HVBP) program, which rewards or penalizes hospitals financially depending on their performance on the metrics. CMS is considering adding the Serious Complications and Deaths measures to the HVBP program in the near future.

As the science of documenting and reporting patient harm struggles to find its footing, physicians and hospitals must be more vigilant than ever, adopting a unified, organized approach to advocate for the most appropriate processes and outcomes for which they will be held accountable, rather than being cast in a reactive mode when metrics are imposed on them, says Patrick J. Torcson, MD, MMM, FACP, SFHM, chair of SHM’s Performance Measurement and Reporting Committee and director of hospital medicine at St. Tammany Parish Hospital in Covington, La.

Last year, SHM sent comments to then-CMS administrator Don Berwick expressing concern that the patient-safety measures CMS proposes to include in the HVBP program in fiscal-year 2014 are not endorsed by the National Quality Forum (NQF); that they are derived from billing and payment data not intended primarily for clinical purposes; that the outcomes they track are not entirely preventable, even with the best of care; and that they are not adequately risk-adjusted.

“While it’s easy to agree with the experts that Hospital Compare’s patient-safety measures are not ready for prime time, it’s no longer acceptable simply to say, ‘These metrics are irrelevant,’” Dr. Torcson cautions. “We also must be aware of the evolution and inexorable movement of the nation’s healthcare quality and safety agenda. SHM embraces the triple aim of providing better care to our patients, promoting better health of patient populations, and doing so at a lower cost.”

The Power of “Why?”

Despite its imperfections, Hospital Compare’s greatest value is the power of its transparency, which fosters healthy discussion among providers, patients, and payors, according to Anne-Marie J. Audet, MD, MSc, SM, vice president for Health System Quality and Efficiency for the Commonwealth Fund. “That transparency gets providers’ attention and leads them to make changes that can translate into better performance,” she says, noting how hospital care for patients with heart attack, heart failure, and pneumonia has steadily improved in recent years, with the worst performers in 2009 doing as well or better than the best performers in 2004.

“There are also examples of hospitals that have gone from a median of four central line-associated bloodstream infections (CLABSI) per 1,000 line-days to zero because they decided not to take the status quo as acceptable,” says Stephen C. Schoenbaum, MD, MPH, special advisor to the president of the Josiah Macy Jr. Foundation. Dr. Schoenbaum played a significant role in the development of the Healthcare Effectiveness Data and Information Set, or HEDIS.
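Dr. Schoenbaum's "four per 1,000 line-days" figure uses the standard infection-rate formula: infections divided by central-line-days, scaled to 1,000. A minimal sketch, with hypothetical counts:

```python
def clabsi_rate(infections, line_days):
    """CLABSI rate per 1,000 central-line-days.

    line_days counts each day a patient has a central line in place,
    summed across all patients in the reporting period.
    """
    if line_days <= 0:
        raise ValueError("line_days must be positive")
    return infections / line_days * 1000

print(clabsi_rate(4, 1000))  # 4.0 -> the median baseline cited above
print(clabsi_rate(0, 1500))  # 0.0 -> the goal the hospitals reached
```

Normalizing by line-days rather than by admissions matters: it compares hospitals on exposure to the risk (days with a line in place), not on how many patients they happen to admit.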

“There is no such thing as a perfect measure in which some adjustment or better collection method would not affect the numbers,” he notes. “Ideally, you want any publicly reported measure to get the poorer performers to come up with a way to explain their result. Or, even better, to improve their result.”

Methodological criticisms of CMS’ new “Serious Complications and Deaths” measures may be justified, Dr. Audet concedes, but she also notes that rigorous validation and reliability testing of quality measures is an expensive process. “To get where we want to go in American healthcare, we need a more thoroughly supported measure development infrastructure,” she says.

“In the meantime, providers will be probing the implications of their numbers, asking why they got the numbers they did, and what can be done about it. This attention can only lead to improvement, both in the measures themselves and in the care delivered.”

Indeed, one of the hospitals that was listed as having a high rate of accidental cuts and lacerations in the new measures found most of those cuts had been intended by the surgeon but erroneously billed to Medicare under the code for an accidental cut. Even with its methodological flaws, the Hospital Compare data led to root-cause analysis and improvement in coding.

Hospitalists’ Role

Hospitalists, according to Dr. Torcson, will be critical to hospitals’ success under the HVBP program, serving as experts in quality improvement and quality-measure adherence. Hospitalists care for more hospitalized patients than any other physician group, and many believe they are uniquely positioned to lead the system-level changes and quality-improvement (QI) efforts that will be required.

“Hospitalists and their hospitals, practicing in alignment, become champions for their patients,” Dr. Torcson says. “SHM supported the HVBP program, and we foresee that the alignment of performance and payment within the program will inevitably result in better clinical outcomes for our patients.”

Chris Guadagnino is a freelance writer in Philadelphia.

Methodological Challenges to Quality Metrics

Using Medicare claims for profiling provider quality and patient safety is expedient and far less labor- and cost-intensive than medical chart abstraction (the “gold standard,” as suggested by national guidelines for performance measurement). But experts say reliance on claims data introduces fundamental deficiencies that could misclassify providers and mislead consumers.2 When claims data are used for comparing provider mortality rates, for example, deficiencies can include:

  • Diagnosis misclassification;
  • Difficulty distinguishing complications from comorbidities;
  • Absence of critical clinical variables;
  • Coding inaccuracy, including failure to code chronic conditions or secondary diagnoses; and
  • Restriction to selected subpopulations (e.g., Medicare beneficiaries).

—Chris Guadagnino

References

  1. Experts question Medicare’s effort to rate hospitals’ patient safety records. Kaiser Health News website. Available at: http://www.kaiserhealthnews.org/Stories/2012/February/13/medicare-hospital-patient-safety-records.aspx. Accessed March 12, 2012.
  2. Shahian DM, Iezzoni LI, Meyer GS, Kirle L, Normand ST. Hospital-wide mortality as a quality metric: conceptual and methodological challenges. Am J Med Qual. 2012;27:112.
Issue
The Hospitalist - 2012(04)