The measurement of quality of care has been the mantra of health care policy for the past decade, and has become as American as apple pie and Chevrolet. Yet there have been few data showing that the institution of quality of care guidelines has had any impact on mortality or morbidity.
Despite this lack of data, hospitals are being financially rewarded or penalized based on their ability to meet guidelines established by the Centers for Medicare & Medicaid Services in conjunction with the American College of Cardiology and the American Heart Association. Two recent reports provide insight into the progress we have achieved with guidelines in heart failure and in shortening the door-to-balloon (D2B) time for percutaneous coronary intervention (PCI) in ST-segment elevation MI.
Decreasing heart failure readmission within 30 days, which occurs in approximately one-third of hospitalized patients, has become a target for the quality improvement process. Using the "Get With the Guidelines Heart Failure" registry, a recent analysis indicates that there is a very poor correlation between the achievement of those standards and the 30-day mortality and readmission rates (Circulation 2011;124:712-9).
The guidelines include measurement of cardiac function, application of the usual heart failure medications, and discharge instructions. Data were collected on almost 20,000 patients in 153 hospitals during 2005. Adherence to these guidelines was quite good, achieved in more than 75% of the hospitals, yet it was unrelated to 30-day mortality or hospital readmission.
The authors emphasized that the factors affecting survival and readmission are very heterogeneous. Basing pay-for-performance standards on a single measure (such as readmission rates) may penalize institutions that face impediments unrelated to performance measurements. Penalizing hospitals whose high readmission rates stem from large populations of vulnerable patients may harm institutions that could actually benefit from more resources in order to achieve better outcomes.
The effectiveness of PCI, when it is performed in less than 90 minutes in STEMI patients, has been supported by clinical data from selected cardiac centers. The application to the larger patient population of the guideline to shorten D2B time to less than 90 minutes has been championed by the ACC, which launched the D2B Alliance in 2006 and by the AHA in 2007 with its Mission: Lifeline program.
The success of these efforts was reported in August (Circulation 2011;124:1038-45) and indicates that in a selected group of CMS-reporting hospitals, D2B time decreased from 96 minutes in 2005 to 64 minutes in 2010. In addition, the percentage of patients with a D2B time of less than 90 minutes increased from 44% to 91%, and that of patients with a D2B time of less than 75 minutes rose from 27% to 70%. The success of this effort is to be applauded, but the report is striking for its absence of any information regarding the outcomes of the shortened D2B time. Unfortunately, there is little outcome information available, with the exception of data from Michigan on all Medicare providers in that state, which indicate that although D2B time decreased, there was no significant benefit.
Measurement of quality remains elusive, in spite of the good intentions of physicians and health planners to use a variety of seemingly beneficial criteria for its definition.
As consumers, we know that quality is not easy to measure. Most of us can compare the quality of American automobiles vs. their foreign competitors by "kicking the tires," that is, by doing a little research. But even with this knowledge, we are not always sure that the particular car we buy will be better or last longer. Health care faces the same problem. Establishing quality care measurements will require a great deal of further research before we can reward or penalize hospitals and physicians for their performance.
It is possible that in our zeal to measure what we can, we are confusing process with content. How to put a number on the performance that leads to quality remains uncertain with our current methodology.
Dr. Sidney Goldstein is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.