Effect of Hospital Readmission Reduction on Patients at Low, Medium, and High Risk of Readmission in the Medicare Population

Jacqueline N. Grady, MS
Yale New Haven Health Services Corporation/The Center for Outcomes Research and Evaluation, New Haven, Connecticut

Journal of Hospital Medicine. 2018;13(8):537-543. Published online first February 12, 2018.

Given the high cost of readmissions to the healthcare system, policymakers have made a substantial push to reduce readmissions.1 Among these efforts is the Hospital Readmissions Reduction Program (HRRP), in which hospitals with higher-than-expected readmission rates receive reduced payments from Medicare.2 Recent evidence suggests that such policy changes have been successful, with multiple reports demonstrating a decrease in 30-day readmission rates in the Medicare population starting in 2010.3-8

Initiatives to reduce readmissions can also affect the total number of admissions.9,10 Indeed, along with the recent reduction in readmissions, there has been a reduction in all admissions among Medicare beneficiaries.11,12 Some studies have found that as admissions have decreased, the burden of comorbidity among hospitalized patients has increased,3,11 suggesting that hospitals may be increasingly filled with patients at high risk of readmission. However, whether readmission risk among hospitalized patients has changed remains unknown, and understanding changes in the risk profile could help inform which patients to target with future interventions to reduce readmissions.

Hospital efforts to reduce readmissions may have differential effects on patients at different levels of risk. For instance, low-intensity, system-wide interventions such as standardized discharge instructions or medication reconciliation may have a stronger effect on patients at relatively low risk of readmission, who may have only a few important drivers of readmission that are easily addressed. Alternatively, the impact of intensive management of care transitions might be greatest for high-risk patients, who have the greatest need for postdischarge medications, follow-up, and self-care.

The purpose of this study was therefore twofold: (1) to observe changes in the average monthly risk of readmission among hospitalized Medicare patients and (2) to examine changes in readmission rates for Medicare patients at varying risk of readmission. We hypothesized that readmission risk in the Medicare population would increase in recent years as the overall numbers of admissions and readmissions have fallen.7,11 Additionally, we hypothesized that standardized readmission rates would decline less in the highest-risk patients than in the lowest-risk patients because transitional care interventions may not be able to mitigate the large burden of comorbidity and social issues present in many high-risk patients.13,14

METHODS

We performed a retrospective cohort study of hospitalizations to US nonfederal short-term acute care facilities by Medicare beneficiaries between January 2009 and June 2015. The design involved 4 steps. First, we estimated a predictive model for unplanned readmissions within 30 days of discharge. Second, we assigned each hospitalization a predicted risk of readmission based on the model. Third, we studied trends in mean predicted risk of readmission during the study period. Fourth, we examined trends in observed to expected (O/E) readmission for hospitalizations in the lowest, middle, and highest categories of predicted risk of readmission to determine whether reductions in readmissions were more substantial in certain risk groups than in others.

Data were obtained from the Centers for Medicare and Medicaid Services (CMS) Inpatient Standard Analytic File and the Medicare Enrollment Data Base. We included hospitalizations of fee-for-service Medicare beneficiaries age ≥65 with continuous enrollment in Part A Medicare fee-for-service for at least 1 year prior and 30 days after the hospitalization.15 Hospitalizations with a discharge disposition of death, transfer to another acute hospital, and left against medical advice (AMA) were excluded. We also excluded patients with enrollment in hospice care prior to hospitalization. We excluded hospitalizations in June 2012 because of an irregularity in data availability for that month.
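
As a concrete illustration of this cohort selection, a minimal sketch in Python/pandas follows. The authors' analyses were performed in SAS and Stata, so this is only a stand-in; the file name and every column name (for example, ffs_part_a_12mo_prior) are hypothetical.

    import pandas as pd

    # Hypothetical claims extract; column names are illustrative only.
    hosp = pd.read_csv("hospitalizations.csv",
                       parse_dates=["admission_date", "discharge_date"])

    cohort = hosp[
        (hosp["age"] >= 65)
        & (hosp["ffs_part_a_12mo_prior"] == 1)        # continuous FFS Part A for 1 year before
        & (hosp["ffs_part_a_30d_post"] == 1)          # and 30 days after the hospitalization
        & ~hosp["discharge_disposition"].isin(["death", "acute_transfer", "ama"])
        & (hosp["hospice_before_admission"] == 0)     # no hospice enrollment before admission
        & ~((hosp["discharge_date"].dt.year == 2012)  # drop June 2012 (data irregularity)
            & (hosp["discharge_date"].dt.month == 6))
    ]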

Hospitalizations were categorized into 5 specialty cohorts according to service line. The 5 cohorts were those used for the CMS hospital-wide readmission measure and included surgery/gynecology, medicine, cardiovascular, cardiorespiratory, and neurology.15 Among the 3 clinical conditions tracked as part of HRRP, heart failure and pneumonia were a subset of the cardiorespiratory cohort, while acute myocardial infarction was a subset of the cardiovascular cohort. Our rationale for using cohorts was threefold: first, the average risk of readmission differs substantially across these cohorts, so pooling them produces heterogeneous risk strata; second, risk variables perform differently in different cohorts, so a single model may not be as accurate for calculating risk; and, third, the use of disease cohorts makes our results comparable to the CMS model and similar to other readmission studies in Medicare.7,8,15

For development of the risk model, the outcome was 30-day unplanned hospital readmission. Planned readmissions were excluded; these were defined by the CMS algorithm as readmissions in which a typically planned procedure occurred in a hospitalization with a nonacute principal diagnosis.16 Independent variables included age and the comorbidities used in the final hospital-wide readmission models for each of the 5 specialty cohorts.15 In order to produce the best possible individual risk prediction for each patient, we added independent variables that CMS avoids for hospital quality measurement purposes but that contribute to risk of readmission: sex, race, dual eligibility status, number of prior AMA discharges, intensive care unit stay during the current hospitalization, coronary care unit stay during the current hospitalization, and hospitalization in the prior 30, 90, and 180 days. We also included an indicator variable for hospitalizations with more than 9 discharge diagnosis codes on or after January 2011, when Medicare increased the number of allowable International Classification of Diseases, 9th Revision-Clinical Modification diagnosis billing codes from 9 to 25.17 This indicator adjusts for the increased availability of comorbidity codes, which might otherwise inflate the predicted risk relative to hospitalizations prior to that date.
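
The coding-change indicator and the prior-utilization variables described above could be constructed as in the following sketch, which continues with the hypothetical cohort DataFrame and illustrative column names from the previous example.

    import pandas as pd

    # Flag stays discharged in or after January 2011 that carry more than 9 diagnosis codes.
    cohort["many_dx_post_jan2011"] = (
        (cohort["n_diagnosis_codes"] > 9)
        & (cohort["discharge_date"] >= pd.Timestamp("2011-01-01"))
    ).astype(int)

    # Prior-hospitalization indicators; missing values (no prior stay) evaluate to 0.
    for window in (30, 90, 180):
        cohort[f"prior_admit_{window}d"] = (
            cohort["days_since_last_discharge"] <= window
        ).astype(int)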

Based on the risk models, each hospitalization was assigned a predicted risk of readmission. For each specialty cohort, we pooled all hospitalizations across all study years and divided them into risk quintiles. We categorized hospitalizations as high risk if in the highest quintile, medium risk if in the middle 3 quintiles, and low risk if in the lowest quintile of predicted risk for all study hospitalizations in a given specialty cohort.
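
A minimal sketch of this stratification, assuming each hospitalization already carries a model-predicted probability in a hypothetical pred_risk column:

    import pandas as pd

    def assign_stratum(df):
        # Quintiles of predicted risk, pooled over all study years within one specialty cohort.
        quintile = pd.qcut(df["pred_risk"], q=5, labels=False)  # 0 = lowest, 4 = highest
        return quintile.map(lambda q: "low" if q == 0 else ("high" if q == 4 else "medium"))

    cohort["risk_stratum"] = (
        cohort.groupby("specialty_cohort", group_keys=False).apply(assign_stratum)
    )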

For our time trend analyses, we studied 2 outcomes: monthly mean predicted risk and the monthly ratio of observed to expected readmissions for patients in the lowest, middle, and highest categories of predicted risk of readmission. We studied monthly predicted risk to determine whether the average readmission risk of patients was changing over time as admission and readmission rates were declining. We studied the ratio of O/E readmissions to determine whether the decline in overall readmissions was more substantial in particular risk strata; we used the O/E ratio, the number of observed readmissions divided by the number of readmissions predicted by the model, rather than crude observed readmissions because the O/E ratio accounts for any changes in risk profiles over time within each risk stratum. Independent variables in our trend analyses were year, entered as a continuous variable, and indicators for the period after introduction of the Affordable Care Act (ACA, March 2010) and the period after introduction of HRRP (October 2012); these time indicators were included because prior studies have demonstrated that the introduction of ACA was associated with a decrease from baseline in readmission rates, which leveled off after introduction of HRRP.7 We also included an indicator for calendar quarter to account for seasonal effects.
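
The monthly analytic dataset and its time covariates might be assembled as follows; this is an illustrative sketch using the hypothetical column names above, not the authors' code.

    import pandas as pd

    # One row per discharge month: mean predicted risk plus the trend covariates.
    monthly = (
        cohort.groupby(pd.Grouper(key="discharge_date", freq="M"))["pred_risk"]
              .mean()
              .rename("mean_pred_risk")
              .reset_index()
    )
    monthly["year"] = (monthly["discharge_date"].dt.year
                       + (monthly["discharge_date"].dt.month - 1) / 12)   # continuous time
    monthly["post_aca"] = (monthly["discharge_date"] >= "2010-03-01").astype(int)
    monthly["post_hrrp"] = (monthly["discharge_date"] >= "2012-10-01").astype(int)
    monthly["quarter"] = monthly["discharge_date"].dt.quarter             # seasonal indicator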

 

 

Statistical Analysis

We developed generalized estimating equation models to predict 30-day unplanned readmission for each of the 5 specialty cohorts. The 5 models were fit using all patients in each cohort for the included time period and were adjusted for clustering by hospital. We assessed discrimination by calculating the area under the receiver operating characteristic curve (AUC) for the 5 models; the AUC measured each model's ability to distinguish patients who were readmitted from those who were not.18 We also calculated AUCs for each year to examine model performance over time.
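
A sketch of one such model in Python with statsmodels follows (the study itself used SAS and Stata); the outcome readmit_30d, the hospital_id clustering variable, the medicine_cohort DataFrame, and the handful of predictors shown are hypothetical stand-ins for the full variable set described above.

    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from sklearn.metrics import roc_auc_score

    # GEE logistic model for one specialty cohort, with clustering by hospital.
    gee = smf.gee(
        "readmit_30d ~ age + female + dual_eligible + prior_admit_30d + many_dx_post_jan2011",
        groups="hospital_id",
        data=medicine_cohort,
        family=sm.families.Binomial(),
    )
    fit = gee.fit()

    # Discrimination: AUC of predicted probabilities against observed readmissions.
    medicine_cohort["pred_risk"] = fit.predict(medicine_cohort)
    print(roc_auc_score(medicine_cohort["readmit_30d"], medicine_cohort["pred_risk"]))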

Using these models, we calculated predicted risk for each hospitalization and averaged these to obtain mean predicted risk for each specialty cohort for each month. To test for trends in mean risk, we estimated 5 time series models, one for each cohort, with the dependent variable of monthly mean predicted risk. For each cohort, we first estimated a series of 12 empty autoregressive models, each with a different autoregressive term (1, 2...12). For each model, we calculated χ2 for the test that the autocorrelation was 0; based on a comparison of chi-squared values, we specified an autocorrelation of 1 month for all models. Accordingly, a 1-month lag was used to estimate one final model for each cohort. Independent variables included year and indicators for post-ACA and post-HRRP; these variables captured the effect of trends over time and the introduction of these policy changes, respectively.19
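
As an illustrative stand-in for the autoregressive-error models described here (which were fit in SAS/Stata), the final trend model for one cohort could be approximated by a regression with first-order autoregressive errors, using the hypothetical monthly dataset constructed earlier.

    import statsmodels.api as sm

    # Monthly mean predicted risk regressed on year and the policy-period indicators,
    # with AR(1) errors (order=(1, 0, 0)); quarter dummies could be added to exog similarly.
    exog = monthly[["year", "post_aca", "post_hrrp"]]
    ar1 = sm.tsa.SARIMAX(monthly["mean_pred_risk"], exog=exog, order=(1, 0, 0), trend="c")
    print(ar1.fit(disp=False).summary())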

To determine whether changes in risk over time were a result of changes in particular risk groups, we categorized hospitalizations into risk strata based on quintiles of predicted risk for each specialty cohort for the entire study period. For each individual year, we calculated the proportion of hospitalizations in the highest, middle, and lowest readmission risk strata for each cohort.
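
Continuing the sketch, the yearly distribution of hospitalizations across risk strata can be tabulated directly from the stratified data (same hypothetical columns as above).

    # Share of each cohort's hospitalizations in the low/medium/high strata, by year.
    stratum_share = (
        cohort.assign(year=cohort["discharge_date"].dt.year)
              .groupby(["specialty_cohort", "year"])["risk_stratum"]
              .value_counts(normalize=True)
              .unstack()
    )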

We calculated the monthly ratio of O/E readmission for hospitalizations in the lowest 20%, middle 60%, and highest 20% of readmission risk; the O/E ratio reflects the excess or deficit of observed events relative to the number predicted by the model. Using this monthly O/E ratio as the dependent variable, we developed autoregressive time series models as above, again with a 1-month lag, for each of these 3 risk strata in each cohort. As before, independent variables were year as a continuous variable, indicator variables for post-ACA and post-HRRP, and a categorical variable for calendar quarter.
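
A sketch of the monthly O/E calculation under the same hypothetical column names; the resulting series would then be modeled with the AR(1) specification shown above.

    import pandas as pd

    # Observed readmissions divided by the sum of model-predicted risks,
    # by discharge month and risk stratum.
    oe = (
        cohort.groupby([pd.Grouper(key="discharge_date", freq="M"), "risk_stratum"])
              .apply(lambda g: g["readmit_30d"].sum() / g["pred_risk"].sum())
              .rename("oe_ratio")
              .reset_index()
    )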

All analyses were done in SAS version 9.3 (SAS Institute Inc., Cary, NC) and Stata version 14.2 (StataCorp LLC, College Station, TX).

RESULTS

We included 47,288,961 hospitalizations in the study, of which 11,231,242 (23.8%) were in the surgery/gynecology cohort, 19,548,711 (41.3%) were in the medicine cohort, 5,433,125 (11.5%) were in the cardiovascular cohort, 8,179,691 (17.3%) were in the cardiorespiratory cohort, and 2,896,192 (6.1%) were in the neurology cohort. The readmission rate was 16.2% (n = 7,642,161) overall, with the highest rates observed in the cardiorespiratory (20.5%) and medicine (17.6%) cohorts and the lowest rates observed in the surgery/gynecology (11.8%) and neurology (13.8%) cohorts.

The final predictive models for each cohort ranged in number of parameters from 56 for the cardiorespiratory cohort to 264 for the surgery/gynecology cohort. The models had AUCs of 0.70, 0.65, 0.67, 0.65, and 0.63 for the surgery/gynecology, medicine, cardiovascular, cardiorespiratory, and neurology cohorts, respectively; AUCs remained fairly stable over time for all disease cohorts (Appendix Table 1).

We observed an increase in the mean predicted readmission risk for the surgery/gynecology and cardiovascular cohorts in early 2011 (Figure 1), a period between the introduction of ACA in March 2010 and the introduction of HRRP in October 2012. In time series models, the surgery/gynecology, cardiovascular, and neurology cohorts had increases in predicted risk of readmission of 0.24%, 0.32%, and 0.13% per year, respectively, although the increase did not reach statistical significance for the cardiovascular cohort (Table 1). We found no association between introduction of ACA or HRRP and predicted risk for these cohorts (Table 1). There were no trends or differences in predicted readmission risk for hospitalizations in the medicine cohort. We observed seasonal variation in predicted readmission risk for the cardiorespiratory cohort but no notable change in predicted risk over time (Figure 1); in the time series model, there was a slight decrease in risk following introduction of HRRP (Table 1).

After categorizing hospitalizations by predicted readmission risk, trends in the percent of hospitalizations in low, middle, and high risk strata differed by cohort. In the surgery/gynecology cohort, the proportion of hospitalizations in the lowest risk stratum increased only slightly, from 20.1% in 2009 to 21.1% of all surgery/gynecology hospitalizations in 2015 (Appendix Table 2). The proportion of surgery/gynecology hospitalizations in the high risk stratum (top quintile of risk) increased from 16.1% to 21.6% between 2009 and 2011 and remained at 21.8% in 2015, and the proportion of surgery/gynecology hospitalizations in the middle risk stratum (middle three quintiles of risk) decreased from 63.7% in 2009 to 59.4% in 2011 to 57.1% in 2015. Low-risk hospitalizations in the medicine cohort decreased from 21.7% in 2009 to 19.0% in 2015, while high-risk hospitalizations increased from 18.2% to 20.7% during the period. Hospitalizations in the lowest stratum of risk steadily declined in both the cardiovascular and neurology cohorts, from 24.9% to 14.8% and 22.6% to 17.3% of hospitalizations during the period, respectively; this was accompanied by an increase in the proportion of high-risk hospitalizations in each of these cohorts from 16.0% to 23.4% and 17.8% to 21.6%, respectively. The proportion of hospitalizations in each of the 3 risk strata remained relatively stable in the cardiorespiratory cohort (Appendix Table 2).

In each of the 5 cohorts, O/E readmissions steadily declined from 2009 to 2015 for hospitalizations with the lowest, middle, and highest predicted readmission risk (Figure 2). Each risk stratum had similar rates of decline during the study period for all cohorts (Table 2). Among surgery/gynecology hospitalizations, the monthly O/E readmission declined by 0.030 per year from an initial ratio of 0.936 for the lowest risk hospitalizations, by 0.037 per year for the middle risk hospitalizations, and by 0.036 per year for the highest risk hospitalizations (Table 2). Similarly, for hospitalizations in the lowest versus highest risk of readmission, annual decreases in O/E readmission rates were 0.018 versus 0.015, 0.034 versus 0.033, 0.020 versus 0.015, and 0.038 versus 0.029 for the medicine, cardiovascular, cardiorespiratory, and neurology cohorts, respectively. For all cohorts and in all risk strata, we found no significant change in O/E readmission risk with introduction of ACA or HRRP (Table 2).

 

 

DISCUSSION

In this 6-year, national study of Medicare hospitalizations, we found that readmission risk increased over time for surgical and neurological patients but did not increase in medicine or cardiorespiratory hospitalizations, even though those cohorts are known to have had substantial decreases in admissions and readmissions over the same time period.7,8 Moreover, we found that O/E readmissions decreased similarly for all hospitalized Medicare patients, whether of low, moderate, or high risk of readmission. These findings suggest that hospital efforts have resulted in improved outcomes across the risk spectrum.

A number of mechanisms may account for the across-the-board improvements in readmission reduction. Many hospitals have instituted system-wide interventions, including patient education, medication reconciliation, and early postdischarge follow-up,20 which may have reduced readmissions across all patient risk strata. Alternatively, hospitals may have implemented interventions that disproportionately benefited low-risk patients while simultaneously using interventions that benefited only high-risk patients. For instance, raising the threshold for admission7 may have the greatest effect on low-risk patients, who could be most easily managed at home, while many intensive transitional care interventions have been developed to target only high-risk patients.21,22

With the introduction of HRRP, there have been a number of concerns about the readmission measure used to penalize hospitals for high readmission rates. One major concern has been that the readmission metric may be flawed in its ability to capture continued improvement related to readmission.23 Some have suggested that with better population health management, admissions will decrease, the readmission risk of the remaining hospitalized patients will increase, and hospitals will be increasingly filled with patients who have a high likelihood of readmission. This potential for increased risk with HRRP was suggested by a recent study that found that comorbidities increased among hospitalized Medicare beneficiaries between 2010 and 2013.11 Our results were mixed in supporting this potential phenomenon: we examined global risk of readmission and found that some of the cohorts had increased risk over time while others did not. Others have expressed concern that the readmission measure does not account for socioeconomic status, which has been associated with readmission rates.24-27 Although we did not directly examine socioeconomic status in our study, we found that hospitals have been able to reduce readmissions across all levels of risk, and our risk models included markers of socioeconomic status such as race and Medicaid eligibility status.

Although we hypothesized that readmission risk would increase as the number of hospitalizations decreased over time, we found no increase in readmission risk among the cohorts with HRRP diagnoses that had the largest decrease in readmission rates.7,8 Conversely, readmission risk did increase, with a concurrent increase in the proportion of high-risk hospitalizations, in the surgery/gynecology and neurology cohorts, which were not subject to HRRP penalties. Nonetheless, rehospitalizations were reduced for all risk categories in these 2 cohorts. Notably, surgery/gynecology and neurology had the lowest readmission rates overall. These findings suggest that initiatives to prevent initial hospitalizations, such as increasing the threshold for postoperative admission, may have had a greater effect on low-risk than on high-risk patients in these cohorts. However, once a patient is hospitalized, multidisciplinary strategies appear to be effective at reducing readmissions for all risk classes in these cohorts.

For the 3 cohorts in which we observed an increase in readmission risk among hospitalized patients, the risk appeared to increase in early 2011. This was about 10 months after passage of the ACA, the timing of which was previously associated with a drop in readmission rates,7,8 but well before HRRP went into effect in October 2012. The increase in readmission risk coincided with an increase in the number of diagnostic codes that could be included on a hospital claim to Medicare.17 The additional allowable codes captured more diagnoses for some patients, potentially increasing the apparent predicted risk of readmission. While we adjusted for this in our predictive models, we may not have fully accounted for differences in risk related to the coding change. As a result, some of the observed differences in risk in our study may be attributable to coding differences. More broadly, studies demonstrating the success of HRRP have typically examined risk-adjusted rates of readmission.3,7 It is possible that a small portion of the observed reduction in risk-adjusted readmission rates is related to the increase in predicted risk of readmission observed in our study. Future assessments of readmission trends during this period should consider accounting for the change in the number of allowed billing codes.

Other limitations should be considered in the interpretation of this study. First, like many predictive models for readmission,14 ours had imperfect discrimination, which could affect our results. Second, our study was based on older Medicare patients, so findings may not be applicable to younger patients. Third, while we accounted for surrogates of socioeconomic status, including dual eligibility and race, our models lacked other socioeconomic and community factors that can influence readmission.24-26 Nonetheless, 1 study suggested that easily measured socioeconomic factors may not have a strong influence on the readmission metric used by Medicare.28 Fourth, while our study included over 47 million hospitalizations, our time trend analyses were conducted at the level of the calendar month. As our study included 77 months, we may not have had sufficient power to detect small changes in risk over time.

Medicare readmissions have declined steadily in recent years, presumably at least partly in response to policy changes including HRRP. We found that hospitals have been effective at reducing readmissions across a range of patient risk strata and clinical conditions. As a result, the overall risk of readmission for hospitalized patients has remained constant for some but not all conditions. Whether institutions can continue to reduce readmission rates for most types of patients remains to be seen.

 

 

Acknowledgments

This study was supported by the Agency for Healthcare Research and Quality (AHRQ) grant R01HS022882. Dr. Blecker was supported by the AHRQ grant K08HS23683. The authors would like to thank Shawn Hoke and Jane Padikkala for administrative support.

Disclosure

This study was supported by the Agency for Healthcare Research and Quality (AHRQ) grants R01HS022882 and K08HS23683. The authors have no conflicts to report.

References

1. Jha AK. Seeking Rational Approaches to Fixing Hospital Readmissions. JAMA. 2015;314(16):1681-1682.
2. Centers for Medicare & Medicaid Services. Readmissions Reduction Program. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html. Accessed on January 17, 2017.
3. Suter LG, Li SX, Grady JN, et al. National patterns of risk-standardized mortality and readmission after hospitalization for acute myocardial infarction, heart failure, and pneumonia: update on publicly reported outcomes measures based on the 2013 release. J Gen Intern Med. 2014;29(10):1333-1340.
4. Gerhardt G, Yemane A, Hickman P, Oelschlaeger A, Rollins E, Brennan N. Medicare readmission rates showed meaningful decline in 2012. Medicare Medicaid Res Rev. 2013;3(2):pii:mmrr.003.02.b01.
5. Centers for Medicare and Medicaid Services. New Data Shows Affordable Care Act Reforms Are Leading to Lower Hospital Readmission Rates for Medicare Beneficiaries. http://blog.cms.gov/2013/12/06/new-data-shows-affordable-care-act-reforms-are-leading-to-lower-hospital-readmission-rates-for-medicare-beneficiaries/. Accessed on January 17, 2017.
6. Krumholz HM, Normand SL, Wang Y. Trends in hospitalizations and outcomes for acute cardiovascular disease and stroke, 1999-2011. Circulation. 2014;130(12):966-975.
7. Zuckerman RB, Sheingold SH, Orav EJ, Ruhter J, Epstein AM. Readmissions, Observation, and the Hospital Readmissions Reduction Program. N Engl J Med. 2016;374(16):1543-1551.
8. Desai NR, Ross JS, Kwon JY, et al. Association Between Hospital Penalty Status Under the Hospital Readmission Reduction Program and Readmission Rates for Target and Nontarget Conditions. JAMA. 2016;316(24):2647-2656.
9. Brock J, Mitchell J, Irby K, et al. Association between quality improvement for care transitions in communities and rehospitalizations among Medicare beneficiaries. JAMA. 2013;309(4):381-391.
10. Jencks S. Protecting Hospitals That Improve Population Health. http://medicaring.org/2014/12/16/protecting-hospitals/. Accessed on January 5, 2017.
11. Dharmarajan K, Qin L, Lin Z, et al. Declining Admission Rates And Thirty-Day Readmission Rates Positively Associated Even Though Patients Grew Sicker Over Time. Health Aff (Millwood). 2016;35(7):1294-1302.
12. Krumholz HM, Nuti SV, Downing NS, Normand SL, Wang Y. Mortality, Hospitalizations, and Expenditures for the Medicare Population Aged 65 Years or Older, 1999-2013. JAMA. 2015;314(4):355-365.
13. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48(11):981-988.
14. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698.
15. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital-wide performance on 30-day unplanned readmission. Ann Intern Med. 2014;161(10 Suppl):S66-S75.
16. 2016 Condition-Specific Measures Updates and Specifications Report Hospital-Level 30-Day Risk-Standardized Readmission Measures. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Downloads/AMI-HF-PN-COPD-and-Stroke-Readmission-Updates.zip. Accessed on January 19, 2017.
17. Centers for Medicare & Medicaid Services. Pub 100-04 Medicare Claims Processing, Transmittal 2028. https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R2028CP.pdf. Accessed on November 28, 2016.
18. Martens FK, Tonk EC, Kers JG, Janssens AC. Small improvement in the area under the receiver operating characteristic curve indicated small changes in predicted risks. J Clin Epidemiol. 2016;79:159-164.
19. Blecker S, Goldfeld K, Park H, et al. Impact of an Intervention to Improve Weekend Hospital Care at an Academic Medical Center: An Observational Study. J Gen Intern Med. 2015;30(11):1657-1664.
20. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528.
21. Cavanaugh JJ, Jones CD, Embree G, et al. Implementation Science Workshop: primary care-based multidisciplinary readmission prevention program. J Gen Intern Med. 2014;29(5):798-804.
22. Jenq GY, Doyle MM, Belton BM, Herrin J, Horwitz LI. Quasi-Experimental Evaluation of the Effectiveness of a Large-Scale Readmission Reduction Program. JAMA Intern Med. 2016;176(5):681-690.
23. Lynn J, Jencks S. A Dangerous Malfunction in the Measure of Readmission Reduction. http://medicaring.org/2014/08/26/malfunctioning-metrics/. Accessed on January 17, 2017.
24. Calvillo-King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269-282.
25. Barnett ML, Hsu J, McWilliams JM. Patient Characteristics and Differences in Hospital Readmission Rates. JAMA Intern Med. 2015;175(11):1803-1812.
26. Singh S, Lin YL, Kuo YF, Nattinger AB, Goodwin JS. Variation in the risk of readmission among hospitals: the relative contribution of patient, hospital and inpatient provider characteristics. J Gen Intern Med. 2014;29(4):572-578.
27. American Hospital Association. American Hospital Association (AHA) Detailed Comments on the Inpatient Prospective Payment System (PPS) Proposed Rule for Fiscal Year (FY) 2016. http://www.aha.org/advocacy-issues/letter/2015/150616-cl-cms1632-p-ipps.pdf. Accessed on January 10, 2017.
28. Bernheim SM, Parzynski CS, Horwitz L, et al. Accounting For Patients’ Socioeconomic Status Does Not Change Hospital Readmission Rates. Health Aff (Millwood). 2016;35(8):1461-1470.


Given the high cost of readmissions to the healthcare system, there has been a substantial push to reduce readmissions by policymakers.1 Among these is the Hospital Readmissions Reduction Program (HRRP), in which hospitals with higher than expected readmission rates receive reduced payments from Medicare.2 Recent evidence has suggested the success of such policy changes, with multiple reports demonstrating a decrease in 30-day readmission rates in the Medicare population starting in 2010.3-8

Initiatives to reduce readmissions can also have an effect on total number of admissions.9,10 Indeed, along with the recent reduction in readmission, there has been a reduction in all admissions among Medicare beneficiaries.11,12 Some studies have found that as admissions have decreased, the burden of comorbidity has increased among hospitalized patients,3,11 suggesting that hospitals may be increasingly filled with patients at high risk of readmission. However, whether readmission risk among hospitalized patients has changed remains unknown, and understanding changes in risk profile could help inform which patients to target with future interventions to reduce readmissions.

Hospital efforts to reduce readmissions may have differential effects on types of patients by risk. For instance, low-intensity, system-wide interventions such as standardized discharge instructions or medicine reconciliation may have a stronger effect on patients at relatively low risk of readmission who may have a few important drivers of readmission that are easily overcome. Alternatively, the impact of intensive care transitions management might be greatest for high-risk patients, who have the most need for postdischarge medications, follow-up, and self-care.

The purpose of this study was therefore twofold: (1) to observe changes in average monthly risk of readmission among hospitalized Medicare patients and (2) to examine changes in readmission rates for Medicare patients at various risk of readmission. We hypothesized that readmission risk in the Medicare population would increase in recent years, as overall number of admissions and readmissions have fallen.7,11 Additionally, we hypothesized that standardized readmission rates would decline less in highest risk patients as compared with the lowest risk patients because transitional care interventions may not be able to mitigate the large burden of comorbidity and social issues present in many high-risk patients.13,14

METHODS

We performed a retrospective cohort study of hospitalizations to US nonfederal short-term acute care facilities by Medicare beneficiaries between January 2009 and June 2015. The design involved 4 steps. First, we estimated a predictive model for unplanned readmissions within 30 days of discharge. Second, we assigned each hospitalization a predicted risk of readmission based on the model. Third, we studied trends in mean predicted risk of readmission during the study period. Fourth, we examined trends in observed to expected (O/E) readmission for hospitalizations in the lowest, middle, and highest categories of predicted risk of readmission to determine whether reductions in readmissions were more substantial in certain risk groups than in others.

Data were obtained from the Centers for Medicare and Medicaid Services (CMS) Inpatient Standard Analytic File and the Medicare Enrollment Data Base. We included hospitalizations of fee-for-service Medicare beneficiaries age ≥65 with continuous enrollment in Part A Medicare fee-for-service for at least 1 year prior and 30 days after the hospitalization.15 Hospitalizations with a discharge disposition of death, transfer to another acute hospital, and left against medical advice (AMA) were excluded. We also excluded patients with enrollment in hospice care prior to hospitalization. We excluded hospitalizations in June 2012 because of an irregularity in data availability for that month.

Hospitalizations were categorized into 5 specialty cohorts according to service line. The 5 cohorts were those used for the CMS hospital-wide readmission measure and included surgery/gynecology, medicine, cardiovascular, cardiorespiratory, and neurology.15 Among the 3 clinical conditions tracked as part of HRRP, heart failure and pneumonia were a subset of the cardiorespiratory cohort, while acute myocardial infarction was a subset of the cardiovascular cohort. Our use of cohorts was threefold: first, the average risk of readmission differs substantially across these cohorts, so pooling them produces heterogeneous risk strata; second, risk variables perform differently in different cohorts, so one single model may not be as accurate for calculating risk; and, third, the use of disease cohorts makes our results comparable to the CMS model and similar to other readmission studies in Medicare.7,8,15

For development of the risk model, the outcome was 30-day unplanned hospital readmission. Planned readmissions were excluded; these were defined by the CMS algorithm as readmissions in which a typically planned procedure occurred in a hospitalization with a nonacute principal diagnosis.16 Independent variables included age and comorbidities in the final hospital-wide readmission models for each of the 5 specialty cohorts.15 In order to produce the best possible individual risk prediction for each patient, we added additional independent variables that CMS avoids for hospital quality measurement purposes but that contribute to risk of readmission: sex, race, dual eligibility status, number of prior AMA discharges, intensive care unit stay during current hospitalization, coronary care unit stay during current hospitalization, and hospitalization in the prior 30, 90, and 180 days. We also included an indicator variable for hospitalizations with more than 9 discharge diagnosis codes on or after January 2011, the time at which Medicare allowed an increase of the number of International Classification of Diseases, 9th Revision-Clinical Modification diagnosis billing codes from 9 to 25.17 This indicator adjusts for the increased availability of comorbidity codes, which might otherwise inflate the predicted risk relative to hospitalizations prior to that date.

Based on the risk models, each hospitalization was assigned a predicted risk of readmission. For each specialty cohort, we pooled all hospitalizations across all study years and divided them into risk quintiles. We categorized hospitalizations as high risk if in the highest quintile, medium risk if in the middle 3 quintiles, and low risk if in the lowest quintile of predicted risk for all study hospitalizations in a given specialty cohort.

For our time trend analyses, we studied 2 outcomes: monthly mean predicted risk and monthly ratio of observed readmissions to expected readmissions for patients in the lowest, middle, and highest categories of predicted risk of readmission. We studied monthly predicted risk to determine whether the average readmission risk of patients was changing over time as admission and readmission rates were declining. We studied the ratio of O/E readmissions to determine whether the decline in overall readmissions was more substantial in particular risk strata; we used the ratio of O/E readmissions, which measures number of readmissions divided by number of readmissions predicted by the model, rather than crude observed readmissions, as O/E readmissions account for any changes in risk profiles over time within each risk stratum. Independent variables in our trend analyses were year—entered as a continuous variable—and indicators for postintroduction of the Affordable Care Act (ACA, March 2010) and for postintroduction of HRRP (October 2012); these time indicators were included because of prior studies demonstrating that the introduction of ACA was associated with a decrease from baseline in readmission rates, which leveled off after introduction of HRRP.7 We also included an indicator for calendar quarter to account for seasonal effects.

 

 

Statistical Analysis

We developed generalized estimating equation models to predict 30-day unplanned readmission for each of the 5 specialty cohorts. The 5 models were fit using all patients in each cohort for the included time period and were adjusted for clustering by hospital. We assessed discrimination by calculating area under the receiver operating characteristic curve (AUC) for the 5 models; the AUCs measured the models’ ability to distinguish patients who were readmitted versus those who were not.18 We also calculated AUCs for each year to examine model performance over time.

Using these models, we calculated predicted risk for each hospitalization and averaged these to obtain mean predicted risk for each specialty cohort for each month. To test for trends in mean risk, we estimated 5 time series models, one for each cohort, with the dependent variable of monthly mean predicted risk. For each cohort, we first estimated a series of 12 empty autoregressive models, each with a different autoregressive term (1, 2...12). For each model, we calculated χ2 for the test that the autocorrelation was 0; based on a comparison of chi-squared values, we specified an autocorrelation of 1 month for all models. Accordingly, a 1-month lag was used to estimate one final model for each cohort. Independent variables included year and indicators for post-ACA and post-HRRP; these variables captured the effect of trends over time and the introduction of these policy changes, respectively.19

To determine whether changes in risk over time were a result of changes in particular risk groups, we categorized hospitalizations into risk strata based on quintiles of predicted risk for each specialty cohort for the entire study period. For each individual year, we calculated the proportion of hospitalizations in the highest, middle, and lowest readmission risk strata for each cohort.

We calculated the monthly ratio of O/E readmission for hospitalizations in the lowest 20%, middle 60%, and highest 20% of readmission risk by month; O/E reflects the excess or deficit observed events relative to the number predicted by the model. Using this monthly O/E as the dependent variable, we developed autoregressive time series models as above, again with a 1-month lag, for each of these 3 risk strata in each cohort. As before, independent variables were year as a continuous variable, indicator variables for post-ACA and post-HRRP, and a categorical variable for calendar quarter.

All analyses were done in SAS version 9.3 (SAS Institute Inc., Cary, NC) and Stata version 14.2 (StataCorp LLC, College Station, TX).

RESULTS

We included 47,288,961 hospitalizations in the study, of which 11,231,242 (23.8%) were in the surgery/gynecology cohort, 19,548,711 (41.3%) were in the medicine cohort, 5,433,125 (11.5%) were in the cardiovascular cohort, 8,179,691 (17.3%) were in the cardiorespiratory cohort, and 2,896,192 (6.1%) were in the neurology cohort. The readmission rate was 16.2% (n = 7,642,161) overall, with the highest rates observed in the cardiorespiratory (20.5%) and medicine (17.6%) cohorts and the lowest rates observed in the surgery/gynecology (11.8%) and neurology (13.8%) cohorts.

The final predictive models for each cohort ranged in number of parameters from 56 for the cardiorespiratory cohort to 264 for the surgery/gynecology cohort. The models had AUCs of 0.70, 0.65, 0.67, 0.65, and 0.63 for the surgery/gynecology, medicine, cardiovascular, cardiorespiratory, and neurology cohorts, respectively; AUCs remained fairly stable over time for all disease cohorts (Appendix Table 1).

We observed an increase in the mean predicted readmission risk for hospitalizations in the surgery/gynecology and cardiovascular hospitalizations in early 2011 (Figure 1), a period between the introduction of ACA in March 2010 and the introduction of HRRP in October 2012. In time series models, the surgery/gynecology, cardiovascular, and neurology cohorts had increased predictive risks of readmission of 0.24%, 0.32%, and 0.13% per year, respectively, although this difference did not reach statistical significance for the cardiovascular cohort (Table 1). We found no association between introduction of ACA or HRRP and predicted risk for these cohorts (Table 1). There were no trends or differences in predicted readmission risk for hospitalizations in the medicine cohort. We observed a seasonal variation in predicted readmission risk for the cardiorespiratory cohort but no notable change in predicted risk over time (Figure 1); in the time series model, there was a slight decrease in risk following introduction of HRRP (Table 1).

After categorizing hospitalizations by predicted readmission risk, trends in the percent of hospitalizations in low, middle, and high risk strata differed by cohort. In the surgery/gynecology cohort, the proportion of hospitalizations in the lowest risk stratum increased only slightly, from 20.1% in 2009 to 21.1% of all surgery/gynecology hospitalizations in 2015 (Appendix Table 2). The proportion of surgery/gynecology hospitalizations in the high risk stratum (top quintile of risk) increased from 16.1% to 21.6% between 2009 and 2011 and remained at 21.8% in 2015, and the proportion of surgery/gynecology hospitalizations in the middle risk stratum (middle three quintiles of risk) decreased from 63.7% in 2009 to 59.4% in 2011 to 57.1% in 2015. Low-risk hospitalizations in the medicine cohort decreased from 21.7% in 2009 to 19.0% in 2015, while high-risk hospitalizations increased from 18.2% to 20.7% during the period. Hospitalizations in the lowest stratum of risk steadily declined in both the cardiovascular and neurology cohorts, from 24.9% to 14.8% and 22.6% to 17.3% of hospitalizations during the period, respectively; this was accompanied by an increase in the proportion of high-risk hospitalizations in each of these cohorts from 16.0% to 23.4% and 17.8% to 21.6%, respectively. The proportion of hospitalizations in each of the 3 risk strata remained relatively stable in the cardiorespiratory cohort (Appendix Table 2).

In each of the 5 cohorts, O/E readmissions steadily declined from 2009 to 2015 for hospitalizations with the lowest, middle, and highest predicted readmission risk (Figure 2). Each risk stratum had similar rates of decline during the study period for all cohorts (Table 2). Among surgery/gynecology hospitalizations, the monthly O/E readmission declined by 0.030 per year from an initial ratio of 0.936 for the lowest risk hospitalizations, by 0.037 per year for the middle risk hospitalizations, and by 0.036 per year for the highest risk hospitalizations (Table 2). Similarly, for hospitalizations in the lowest versus highest risk of readmission, annual decreases in O/E readmission rates were 0.018 versus 0.015, 0.034 versus 0.033, 0.020 versus 0.015, and 0.038 versus 0.029 for the medicine, cardiovascular, cardiorespiratory, and neurology cohorts, respectively. For all cohorts and in all risk strata, we found no significant change in O/E readmission risk with introduction of ACA or HRRP (Table 2).

 

 

DISCUSSION

In this 6-year, national study of Medicare hospitalizations, we found that readmission risk increased over time for surgical and neurological patients but did not increase in medicine or cardiorespiratory hospitalizations, even though those cohorts are known to have had substantial decreases in admissions and readmissions over the same time period.7,8 Moreover, we found that O/E readmissions decreased similarly for all hospitalized Medicare patients, whether of low, moderate, or high risk of readmission. These findings suggest that hospital efforts have resulted in improved outcomes across the risk spectrum.

A number of mechanisms may account for the across-the-board improvements in readmission reduction. Many hospitals have instituted system-wide interventions, including patient education, medicine reconciliation, and early postdischarge follow-up,20 which may have reduced readmissions across all patient risk strata. Alternatively, hospitals may have implemented interventions that disproportionally benefited low-risk patients while simultaneously utilizing interventions that only benefited high-risk patients. For instance, increasing threshold for admission7 may have the greatest effect on low-risk patients who could be most easily managed at home, while many intensive transitional care interventions have been developed to target only high-risk patients.21,22

With the introduction of HRRP, there have been a number of concerns about the readmission measure used to penalize hospitals for high readmission rates. One major concern has been that the readmission metric may be flawed in its ability to capture continued improvement related to readmission.23 Some have suggested that with better population health management, admissions will decrease, patient risk of the remaining patients will increase, and hospitals will be increasingly filled with patients who have high likelihood of readmission. This potential for increased risk with HRRP was suggested by a recent study that found that comorbidities increased in hospitalized Medicare beneficiaries between 2010 and 2013.11 Our results were mixed in supporting this potential phenomenon because we examined global risk of readmission and found that some of the cohorts had increased risk over time while others did not. Others have expressed concern that readmission measure does not account for socioeconomic status, which has been associated with readmission rates.24-27 Although we did not directly examine socioeconomic status in our study, we found that hospitals have been able to reduce readmission across all levels of risk, which includes markers of socioeconomic status, including race and Medicaid eligibility status.

Although we hypothesized that readmission risk would increase as number of hospitalizations decreased over time, we found no increase in readmission risk among the cohorts with HRRP diagnoses that had the largest decrease in readmission rates.7,8 Conversely, readmission risk did increase—with a concurrent increase in the proportion of high-risk hospitalizations—in the surgery/gynecology and neurology cohorts that were not subject to HRRP penalties. Nonetheless, rehospitalizations were reduced for all risk categories in these 2 cohorts. Notably, surgery/gynecology and neurology had the lowest readmission rates overall. These findings suggest that initiatives to prevent initial hospitalizations, such as increasing the threshold for postoperative admission, may have had a greater effect on low- versus high-risk patients in low-risk hospitalizations. However, once a patient is hospitalized, multidisciplinary strategies appear to be effective at reducing readmissions for all risk classes in these cohorts.

For the 3 cohorts in which we observed an increase in readmission risk among hospitalized patients, the risk appeared to increase in early 2011. This time was about 10 months after passage of ACA, the timing of which was previously associated with a drop in readmission rates,7,8 but well before HRRP went into effect in October 2012. The increase in readmission risk coincided with an increase in the number of diagnostic codes that could be included on a hospital claim to Medicare.17 This increase in allowable codes allowed us to capture more diagnoses for some patients, potentially resulting in an increase in apparent predicted risk of readmissions. While we adjusted for this in our predictive models, we may not have fully accounted for differences in risk related to coding change. As a result, some of the observed differences in risk in our study may be attributable to coding differences. More broadly, studies demonstrating the success of HRRP have typically examined risk-adjusted rates of readmission.3,7 It is possible that a small portion of the observed reduction in risk-adjusted readmission rates may be related to the increase in predicted risk of readmission observed in our study. Future assessment of trends in readmission during this period should consider accounting for change in the number of allowed billing codes.

Other limitations should be considered in the interpretation of this study. First, like many predictive models for readmission,14 ours had imperfect discrimination, which could affect our results. Second, our study was based on older Medicare patients, so findings may not be applicable to younger patients. Third, while we accounted for surrogates for socioeconomic status, including dual eligibility and race, our models lacked other socioeconomic and community factors that can influence readmission.24-26 Nonetheless, 1 study suggested that easily measured socioeconomic factors may not have a strong influence on the readmission metric used by Medicare.28 Fourth, while our study included over 47 million hospitalizations, our time trend analyses used calendar month as the primary independent variable. As our study included 77 months, we may not have had sufficient power to detect small changes in risk over time.

Medicare readmissions have declined steadily in recent years, presumably at least partly in response to policy changes including HRRP. We found that hospitals have been effective at reducing readmissions across a range of patient risk strata and clinical conditions. As a result, the overall risk of readmission for hospitalized patients has remained constant for some but not all conditions. Whether institutions can continue to reduce readmission rates for most types of patients remains to be seen.

 

 

Acknowledgments

This study was supported by the Agency for Healthcare Research and Quality (AHRQ) grant R01HS022882. Dr. Blecker was supported by the AHRQ grant K08HS23683. The authors would like to thank Shawn Hoke and Jane Padikkala for administrative support.

Disclosure

This study was supported by the Agency for Healthcare Research and Quality (AHRQ) grants R01HS022882 and K08HS23683. The authors have no conflicts to report.

Given the high cost of readmissions to the healthcare system, there has been a substantial push to reduce readmissions by policymakers.1 Among these is the Hospital Readmissions Reduction Program (HRRP), in which hospitals with higher than expected readmission rates receive reduced payments from Medicare.2 Recent evidence has suggested the success of such policy changes, with multiple reports demonstrating a decrease in 30-day readmission rates in the Medicare population starting in 2010.3-8

Initiatives to reduce readmissions can also have an effect on total number of admissions.9,10 Indeed, along with the recent reduction in readmission, there has been a reduction in all admissions among Medicare beneficiaries.11,12 Some studies have found that as admissions have decreased, the burden of comorbidity has increased among hospitalized patients,3,11 suggesting that hospitals may be increasingly filled with patients at high risk of readmission. However, whether readmission risk among hospitalized patients has changed remains unknown, and understanding changes in risk profile could help inform which patients to target with future interventions to reduce readmissions.

Hospital efforts to reduce readmissions may have differential effects on types of patients by risk. For instance, low-intensity, system-wide interventions such as standardized discharge instructions or medicine reconciliation may have a stronger effect on patients at relatively low risk of readmission who may have a few important drivers of readmission that are easily overcome. Alternatively, the impact of intensive care transitions management might be greatest for high-risk patients, who have the most need for postdischarge medications, follow-up, and self-care.

The purpose of this study was therefore twofold: (1) to observe changes in average monthly risk of readmission among hospitalized Medicare patients and (2) to examine changes in readmission rates for Medicare patients at various risk of readmission. We hypothesized that readmission risk in the Medicare population would increase in recent years, as overall number of admissions and readmissions have fallen.7,11 Additionally, we hypothesized that standardized readmission rates would decline less in highest risk patients as compared with the lowest risk patients because transitional care interventions may not be able to mitigate the large burden of comorbidity and social issues present in many high-risk patients.13,14

METHODS

We performed a retrospective cohort study of hospitalizations to US nonfederal short-term acute care facilities by Medicare beneficiaries between January 2009 and June 2015. The design involved 4 steps. First, we estimated a predictive model for unplanned readmissions within 30 days of discharge. Second, we assigned each hospitalization a predicted risk of readmission based on the model. Third, we studied trends in mean predicted risk of readmission during the study period. Fourth, we examined trends in observed to expected (O/E) readmission for hospitalizations in the lowest, middle, and highest categories of predicted risk of readmission to determine whether reductions in readmissions were more substantial in certain risk groups than in others.

Data were obtained from the Centers for Medicare and Medicaid Services (CMS) Inpatient Standard Analytic File and the Medicare Enrollment Data Base. We included hospitalizations of fee-for-service Medicare beneficiaries age ≥65 with continuous enrollment in Part A Medicare fee-for-service for at least 1 year prior and 30 days after the hospitalization.15 Hospitalizations with a discharge disposition of death, transfer to another acute hospital, and left against medical advice (AMA) were excluded. We also excluded patients with enrollment in hospice care prior to hospitalization. We excluded hospitalizations in June 2012 because of an irregularity in data availability for that month.

Hospitalizations were categorized into 5 specialty cohorts according to service line. The 5 cohorts were those used for the CMS hospital-wide readmission measure and included surgery/gynecology, medicine, cardiovascular, cardiorespiratory, and neurology.15 Among the 3 clinical conditions tracked as part of HRRP, heart failure and pneumonia were a subset of the cardiorespiratory cohort, while acute myocardial infarction was a subset of the cardiovascular cohort. Our use of cohorts was threefold: first, the average risk of readmission differs substantially across these cohorts, so pooling them produces heterogeneous risk strata; second, risk variables perform differently in different cohorts, so one single model may not be as accurate for calculating risk; and, third, the use of disease cohorts makes our results comparable to the CMS model and similar to other readmission studies in Medicare.7,8,15

For development of the risk model, the outcome was 30-day unplanned hospital readmission. Planned readmissions were excluded; these were defined by the CMS algorithm as readmissions in which a typically planned procedure occurred in a hospitalization with a nonacute principal diagnosis.16 Independent variables included age and comorbidities in the final hospital-wide readmission models for each of the 5 specialty cohorts.15 In order to produce the best possible individual risk prediction for each patient, we added additional independent variables that CMS avoids for hospital quality measurement purposes but that contribute to risk of readmission: sex, race, dual eligibility status, number of prior AMA discharges, intensive care unit stay during current hospitalization, coronary care unit stay during current hospitalization, and hospitalization in the prior 30, 90, and 180 days. We also included an indicator variable for hospitalizations with more than 9 discharge diagnosis codes on or after January 2011, the time at which Medicare allowed an increase of the number of International Classification of Diseases, 9th Revision-Clinical Modification diagnosis billing codes from 9 to 25.17 This indicator adjusts for the increased availability of comorbidity codes, which might otherwise inflate the predicted risk relative to hospitalizations prior to that date.

Based on the risk models, each hospitalization was assigned a predicted risk of readmission. For each specialty cohort, we pooled all hospitalizations across all study years and divided them into risk quintiles. We categorized hospitalizations as high risk if in the highest quintile, medium risk if in the middle 3 quintiles, and low risk if in the lowest quintile of predicted risk for all study hospitalizations in a given specialty cohort.
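
To make the stratification concrete, the sketch below shows one way the low, medium, and high strata described above could be assigned in Python/pandas; the column names (cohort, predicted_risk) are illustrative assumptions, not the study's actual variable names.

```python
import pandas as pd

def assign_risk_strata(df: pd.DataFrame) -> pd.DataFrame:
    """Assign low/medium/high readmission-risk strata within each specialty cohort.

    Quintiles are computed over all study years pooled together, as described
    in the text. Column names are illustrative assumptions.
    """
    df = df.copy()
    # Quintile index 0-4 of predicted risk within each specialty cohort.
    df["risk_quintile"] = df.groupby("cohort")["predicted_risk"].transform(
        lambda x: pd.qcut(x, 5, labels=False, duplicates="drop")
    )
    # Lowest quintile -> low risk, highest quintile -> high risk, middle three -> medium.
    df["risk_stratum"] = df["risk_quintile"].map(
        {0: "low", 1: "medium", 2: "medium", 3: "medium", 4: "high"}
    )
    return df
```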

For our time trend analyses, we studied 2 outcomes: monthly mean predicted risk and monthly ratio of observed readmissions to expected readmissions for patients in the lowest, middle, and highest categories of predicted risk of readmission. We studied monthly predicted risk to determine whether the average readmission risk of patients was changing over time as admission and readmission rates were declining. We studied the ratio of O/E readmissions to determine whether the decline in overall readmissions was more substantial in particular risk strata; we used the ratio of O/E readmissions, which measures number of readmissions divided by number of readmissions predicted by the model, rather than crude observed readmissions, as O/E readmissions account for any changes in risk profiles over time within each risk stratum. Independent variables in our trend analyses were year—entered as a continuous variable—and indicators for postintroduction of the Affordable Care Act (ACA, March 2010) and for postintroduction of HRRP (October 2012); these time indicators were included because of prior studies demonstrating that the introduction of ACA was associated with a decrease from baseline in readmission rates, which leveled off after introduction of HRRP.7 We also included an indicator for calendar quarter to account for seasonal effects.

Statistical Analysis

We developed generalized estimating equation models to predict 30-day unplanned readmission for each of the 5 specialty cohorts. The 5 models were fit using all patients in each cohort for the included time period and were adjusted for clustering by hospital. We assessed discrimination by calculating area under the receiver operating characteristic curve (AUC) for the 5 models; the AUCs measured the models’ ability to distinguish patients who were readmitted versus those who were not.18 We also calculated AUCs for each year to examine model performance over time.
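
As a rough illustration of this modeling step, the sketch below fits a generalized estimating equation logistic model clustered by hospital and computes an AUC. The formula, covariates, and exchangeable working correlation are placeholder assumptions; the study's models used the hospital-wide readmission measure covariates plus the additional predictors listed above.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

def fit_readmission_model(df):
    """Fit one cohort's readmission model and report discrimination (AUC).

    Covariate names and the working correlation structure are illustrative
    assumptions, not the study's actual specification.
    """
    model = smf.gee(
        "readmit_30d ~ age + female + dual_eligible + icu_stay + prior_admit_30d",
        groups="hospital_id",               # cluster hospitalizations within hospitals
        data=df,
        family=sm.families.Binomial(),      # logistic link for the binary outcome
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()
    predicted_risk = result.predict(df)
    auc = roc_auc_score(df["readmit_30d"], predicted_risk)
    return result, predicted_risk, auc
```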

Using these models, we calculated predicted risk for each hospitalization and averaged these values to obtain mean predicted risk for each specialty cohort for each month. To test for trends in mean risk, we estimated 5 time series models, one for each cohort, with the dependent variable of monthly mean predicted risk. For each cohort, we first estimated a series of 12 empty autoregressive models, each with a different autoregressive term (1, 2, ..., 12). For each model, we calculated χ2 for the test that the autocorrelation was 0; based on a comparison of these χ2 values, we specified an autocorrelation of 1 month for all models. Accordingly, a 1-month lag was used to estimate one final model for each cohort. Independent variables included year and indicators for post-ACA and post-HRRP; these variables captured the effect of trends over time and the introduction of these policy changes, respectively.19
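
One way to approximate this kind of autoregressive trend model is a SARIMAX fit with an AR(1) term and the policy indicators as exogenous regressors. The sketch below assumes a monthly data frame with columns mean_risk, year, post_aca, and post_hrrp (names are assumptions), and is a simplified stand-in rather than the study's exact SAS/Stata specification.

```python
import statsmodels.api as sm
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_trend_model(ts):
    """AR(1) model of monthly mean predicted risk with policy-period indicators."""
    exog = sm.add_constant(ts[["year", "post_aca", "post_hrrp"]])
    model = SARIMAX(ts["mean_risk"], exog=exog, order=(1, 0, 0))  # 1-month lag
    return model.fit(disp=False)
```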

To determine whether changes in risk over time were a result of changes in particular risk groups, we categorized hospitalizations into risk strata based on quintiles of predicted risk for each specialty cohort for the entire study period. For each individual year, we calculated the proportion of hospitalizations in the highest, middle, and lowest readmission risk strata for each cohort.

We calculated the monthly ratio of O/E readmission for hospitalizations in the lowest 20%, middle 60%, and highest 20% of readmission risk by month; the O/E ratio reflects the excess or deficit of observed events relative to the number predicted by the model. Using this monthly O/E as the dependent variable, we developed autoregressive time series models as above, again with a 1-month lag, for each of these 3 risk strata in each cohort. As before, independent variables were year as a continuous variable, indicator variables for post-ACA and post-HRRP, and a categorical variable for calendar quarter.
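
The O/E calculation itself is a simple ratio of counts: observed readmissions divided by the sum of model-predicted probabilities. A minimal sketch, with assumed column names, follows.

```python
import pandas as pd

def monthly_oe(df: pd.DataFrame) -> pd.DataFrame:
    """Observed-to-expected readmission ratio by cohort, risk stratum, and month.

    'readmit_30d' is the observed unplanned readmission indicator (0/1) and
    'predicted_risk' the model-based probability; both names are assumptions.
    """
    grouped = df.groupby(["cohort", "risk_stratum", "discharge_month"])
    observed = grouped["readmit_30d"].sum()       # observed readmissions
    expected = grouped["predicted_risk"].sum()    # model-expected readmissions
    return (observed / expected).rename("oe_ratio").reset_index()
```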

All analyses were done in SAS version 9.3 (SAS Institute Inc., Cary, NC) and Stata version 14.2 (StataCorp LLC, College Station, TX).

RESULTS

We included 47,288,961 hospitalizations in the study, of which 11,231,242 (23.8%) were in the surgery/gynecology cohort, 19,548,711 (41.3%) were in the medicine cohort, 5,433,125 (11.5%) were in the cardiovascular cohort, 8,179,691 (17.3%) were in the cardiorespiratory cohort, and 2,896,192 (6.1%) were in the neurology cohort. The readmission rate was 16.2% (n = 7,642,161) overall, with the highest rates observed in the cardiorespiratory (20.5%) and medicine (17.6%) cohorts and the lowest rates observed in the surgery/gynecology (11.8%) and neurology (13.8%) cohorts.

The final predictive models for each cohort ranged in number of parameters from 56 for the cardiorespiratory cohort to 264 for the surgery/gynecology cohort. The models had AUCs of 0.70, 0.65, 0.67, 0.65, and 0.63 for the surgery/gynecology, medicine, cardiovascular, cardiorespiratory, and neurology cohorts, respectively; AUCs remained fairly stable over time for all disease cohorts (Appendix Table 1).

We observed an increase in the mean predicted readmission risk for hospitalizations in the surgery/gynecology and cardiovascular cohorts in early 2011 (Figure 1), a period between the introduction of ACA in March 2010 and the introduction of HRRP in October 2012. In time series models, the surgery/gynecology, cardiovascular, and neurology cohorts had increases in predicted readmission risk of 0.24%, 0.32%, and 0.13% per year, respectively, although this increase did not reach statistical significance for the cardiovascular cohort (Table 1). We found no association between introduction of ACA or HRRP and predicted risk for these cohorts (Table 1). There were no trends or differences in predicted readmission risk for hospitalizations in the medicine cohort. We observed seasonal variation in predicted readmission risk for the cardiorespiratory cohort but no notable change in predicted risk over time (Figure 1); in the time series model, there was a slight decrease in risk following introduction of HRRP (Table 1).

After categorizing hospitalizations by predicted readmission risk, trends in the percent of hospitalizations in low, middle, and high risk strata differed by cohort. In the surgery/gynecology cohort, the proportion of hospitalizations in the lowest risk stratum increased only slightly, from 20.1% in 2009 to 21.1% of all surgery/gynecology hospitalizations in 2015 (Appendix Table 2). The proportion of surgery/gynecology hospitalizations in the high risk stratum (top quintile of risk) increased from 16.1% to 21.6% between 2009 and 2011 and remained at 21.8% in 2015, and the proportion of surgery/gynecology hospitalizations in the middle risk stratum (middle three quintiles of risk) decreased from 63.7% in 2009 to 59.4% in 2011 to 57.1% in 2015. Low-risk hospitalizations in the medicine cohort decreased from 21.7% in 2009 to 19.0% in 2015, while high-risk hospitalizations increased from 18.2% to 20.7% during the period. Hospitalizations in the lowest stratum of risk steadily declined in both the cardiovascular and neurology cohorts, from 24.9% to 14.8% and 22.6% to 17.3% of hospitalizations during the period, respectively; this was accompanied by an increase in the proportion of high-risk hospitalizations in each of these cohorts from 16.0% to 23.4% and 17.8% to 21.6%, respectively. The proportion of hospitalizations in each of the 3 risk strata remained relatively stable in the cardiorespiratory cohort (Appendix Table 2).

In each of the 5 cohorts, O/E readmissions steadily declined from 2009 to 2015 for hospitalizations with the lowest, middle, and highest predicted readmission risk (Figure 2). Each risk stratum had similar rates of decline during the study period for all cohorts (Table 2). Among surgery/gynecology hospitalizations, the monthly O/E readmission declined by 0.030 per year from an initial ratio of 0.936 for the lowest risk hospitalizations, by 0.037 per year for the middle risk hospitalizations, and by 0.036 per year for the highest risk hospitalizations (Table 2). Similarly, for hospitalizations in the lowest versus highest risk of readmission, annual decreases in O/E readmission rates were 0.018 versus 0.015, 0.034 versus 0.033, 0.020 versus 0.015, and 0.038 versus 0.029 for the medicine, cardiovascular, cardiorespiratory, and neurology cohorts, respectively. For all cohorts and in all risk strata, we found no significant change in O/E readmission risk with introduction of ACA or HRRP (Table 2).

DISCUSSION

In this 6-year, national study of Medicare hospitalizations, we found that readmission risk increased over time for surgical and neurological patients but did not increase in medicine or cardiorespiratory hospitalizations, even though those cohorts are known to have had substantial decreases in admissions and readmissions over the same time period.7,8 Moreover, we found that O/E readmissions decreased similarly for all hospitalized Medicare patients, whether of low, moderate, or high risk of readmission. These findings suggest that hospital efforts have resulted in improved outcomes across the risk spectrum.

A number of mechanisms may account for the across-the-board improvements in readmission reduction. Many hospitals have instituted system-wide interventions, including patient education, medicine reconciliation, and early postdischarge follow-up,20 which may have reduced readmissions across all patient risk strata. Alternatively, hospitals may have implemented interventions that disproportionately benefited low-risk patients while simultaneously utilizing interventions that only benefited high-risk patients. For instance, an increased threshold for admission7 may have the greatest effect on low-risk patients who could be most easily managed at home, while many intensive transitional care interventions have been developed to target only high-risk patients.21,22

With the introduction of HRRP, there have been a number of concerns about the readmission measure used to penalize hospitals for high readmission rates. One major concern has been that the readmission metric may be flawed in its ability to capture continued improvement related to readmission.23 Some have suggested that with better population health management, admissions will decrease, the risk profile of the remaining hospitalized patients will worsen, and hospitals will be increasingly filled with patients who have a high likelihood of readmission. This potential for increased risk with HRRP was suggested by a recent study that found that comorbidities increased in hospitalized Medicare beneficiaries between 2010 and 2013.11 Our results were mixed in supporting this potential phenomenon because we examined global risk of readmission and found that some of the cohorts had increased risk over time while others did not. Others have expressed concern that the readmission measure does not account for socioeconomic status, which has been associated with readmission rates.24-27 Although we did not directly examine socioeconomic status in our study, we found that hospitals have been able to reduce readmissions across all levels of risk, and our risk models incorporated markers of socioeconomic status, including race and Medicaid dual eligibility.

Although we hypothesized that readmission risk would increase as the number of hospitalizations decreased over time, we found no increase in readmission risk among the cohorts with HRRP diagnoses that had the largest decrease in readmission rates.7,8 Conversely, readmission risk did increase, with a concurrent increase in the proportion of high-risk hospitalizations, in the surgery/gynecology and neurology cohorts that were not subject to HRRP penalties. Nonetheless, rehospitalizations were reduced for all risk categories in these 2 cohorts. Notably, surgery/gynecology and neurology had the lowest readmission rates overall. These findings suggest that initiatives to prevent initial hospitalizations, such as increasing the threshold for postoperative admission, may have had a greater effect on low-risk than on high-risk patients in these generally low-risk cohorts. However, once a patient is hospitalized, multidisciplinary strategies appear to be effective at reducing readmissions for all risk classes in these cohorts.

For the 3 cohorts in which we observed an increase in readmission risk among hospitalized patients, the risk appeared to increase in early 2011. This time was about 10 months after passage of ACA, the timing of which was previously associated with a drop in readmission rates,7,8 but well before HRRP went into effect in October 2012. The increase in readmission risk coincided with an increase in the number of diagnostic codes that could be included on a hospital claim to Medicare.17 This increase in allowable codes made it possible to capture more diagnoses for some patients, potentially resulting in an increase in apparent predicted risk of readmission. While we adjusted for this in our predictive models, we may not have fully accounted for differences in risk related to the coding change. As a result, some of the observed differences in risk in our study may be attributable to coding differences. More broadly, studies demonstrating the success of HRRP have typically examined risk-adjusted rates of readmission.3,7 It is possible that a small portion of the observed reduction in risk-adjusted readmission rates may be related to the increase in predicted risk of readmission observed in our study. Future assessments of trends in readmission during this period should consider accounting for the change in the number of allowed billing codes.

Other limitations should be considered in the interpretation of this study. First, like many predictive models for readmission,14 ours had imperfect discrimination, which could affect our results. Second, our study was based on older Medicare patients, so findings may not be applicable to younger patients. Third, while we accounted for surrogates for socioeconomic status, including dual eligibility and race, our models lacked other socioeconomic and community factors that can influence readmission.24-26 Nonetheless, 1 study suggested that easily measured socioeconomic factors may not have a strong influence on the readmission metric used by Medicare.28 Fourth, while our study included over 47 million hospitalizations, our time trend analyses used calendar month as the primary independent variable. As our study included 77 months, we may not have had sufficient power to detect small changes in risk over time.

Medicare readmissions have declined steadily in recent years, presumably at least partly in response to policy changes including HRRP. We found that hospitals have been effective at reducing readmissions across a range of patient risk strata and clinical conditions. As a result, the overall risk of readmission for hospitalized patients has remained constant for some but not all conditions. Whether institutions can continue to reduce readmission rates for most types of patients remains to be seen.

Acknowledgments

This study was supported by the Agency for Healthcare Research and Quality (AHRQ) grant R01HS022882. Dr. Blecker was supported by the AHRQ grant K08HS23683. The authors would like to thank Shawn Hoke and Jane Padikkala for administrative support.

Disclosure

This study was supported by the Agency for Healthcare Research and Quality (AHRQ) grants R01HS022882 and K08HS23683. The authors have no conflicts to report.

References

1. Jha AK. Seeking Rational Approaches to Fixing Hospital Readmissions. JAMA. 2015;314(16):1681-1682. PubMed
2. Centers for Medicare & Medicaid Services. Readmissions Reduction Program. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html. Accessed on January 17, 2017.
3. Suter LG, Li SX, Grady JN, et al. National patterns of risk-standardized mortality and readmission after hospitalization for acute myocardial infarction, heart failure, and pneumonia: update on publicly reported outcomes measures based on the 2013 release. J Gen Intern Med. 2014;29(10):1333-1340. PubMed
4. Gerhardt G, Yemane A, Hickman P, Oelschlaeger A, Rollins E, Brennan N. Medicare readmission rates showed meaningful decline in 2012. Medicare Medicaid Res Rev. 2013;3(2):pii:mmrr.003.02.b01. PubMed
5. Centers for Medicare and Medicaid Services. New Data Shows Affordable Care Act Reforms Are Leading to Lower Hospital Readmission Rates for Medicare Beneficiaries. http://blog.cms.gov/2013/12/06/new-data-shows-affordable-care-act-reforms-are-leading-to-lower-hospital-readmission-rates-for-medicare-beneficiaries/. Accessed on January 17, 2017.
6. Krumholz HM, Normand SL, Wang Y. Trends in hospitalizations and outcomes for acute cardiovascular disease and stroke, 1999-2011. Circulation. 2014;130(12):966-975. PubMed
7. Zuckerman RB, Sheingold SH, Orav EJ, Ruhter J, Epstein AM. Readmissions, Observation, and the Hospital Readmissions Reduction Program. N Engl J Med. 2016;374(16):1543-1551. PubMed
8. Desai NR, Ross JS, Kwon JY, et al. Association Between Hospital Penalty Status Under the Hospital Readmission Reduction Program and Readmission Rates for Target and Nontarget Conditions. JAMA. 2016;316(24):2647-2656. PubMed
9. Brock J, Mitchell J, Irby K, et al. Association between quality improvement for care transitions in communities and rehospitalizations among Medicare beneficiaries. JAMA. 2013;309(4):381-391. PubMed
10. Jencks S. Protecting Hospitals That Improve Population Health. http://medicaring.org/2014/12/16/protecting-hospitals/. Accessed on January 5, 2017.
11. Dharmarajan K, Qin L, Lin Z, et al. Declining Admission Rates And Thirty-Day Readmission Rates Positively Associated Even Though Patients Grew Sicker Over Time. Health Aff (Millwood). 2016;35(7):1294-1302. PubMed
12. Krumholz HM, Nuti SV, Downing NS, Normand SL, Wang Y. Mortality, Hospitalizations, and Expenditures for the Medicare Population Aged 65 Years or Older, 1999-2013. JAMA. 2015;314(4):355-365. PubMed
13. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48(11):981-988. PubMed
14. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698. PubMed
15. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital-wide performance on 30-day unplanned readmission. Ann Intern Med. 2014;161(10 Suppl):S66-S75. PubMed
16. 2016 Condition-Specific Measures Updates and Specifications Report Hospital-Level 30-Day Risk-Standardized Readmission Measures. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Downloads/AMI-HF-PN-COPD-and-Stroke-Readmission-Updates.zip. Accessed on January 19, 2017.
17. Centers for Medicare & Medicaid Services. Pub 100-04 Medicare Claims Processing, Transmittal 2028. https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R2028CP.pdf. Accessed on November 28, 2016.
18. Martens FK, Tonk EC, Kers JG, Janssens AC. Small improvement in the area under the receiver operating characteristic curve indicated small changes in predicted risks. J Clin Epidemiol. 2016;79:159-164. PubMed
19. Blecker S, Goldfeld K, Park H, et al. Impact of an Intervention to Improve Weekend Hospital Care at an Academic Medical Center: An Observational Study. J Gen Intern Med. 2015;30(11):1657-1664. PubMed
20. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528. PubMed
21. Cavanaugh JJ, Jones CD, Embree G, et al. Implementation Science Workshop: primary care-based multidisciplinary readmission prevention program. J Gen Intern Med. 2014;29(5):798-804. PubMed
22. Jenq GY, Doyle MM, Belton BM, Herrin J, Horwitz LI. Quasi-Experimental Evaluation of the Effectiveness of a Large-Scale Readmission Reduction Program. JAMA Intern Med. 2016;176(5):681-690. PubMed
23. Lynn J, Jencks S. A Dangerous Malfunction in the Measure of Readmission Reduction. http://medicaring.org/2014/08/26/malfunctioning-metrics/. Accessed on January 17, 2017.
24. Calvillo-King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269-282. PubMed
25. Barnett ML, Hsu J, McWilliams JM. Patient Characteristics and Differences in Hospital Readmission Rates. JAMA Intern Med. 2015;175(11):1803-1812. PubMed
26. Singh S, Lin YL, Kuo YF, Nattinger AB, Goodwin JS. Variation in the risk of readmission among hospitals: the relative contribution of patient, hospital and inpatient provider characteristics. J Gen Intern Med. 2014;29(4):572-578. PubMed
27. American Hospital Association. American Hospital Association (AHA) Detailed Comments on the Inpatient Prospective Payment System (PPS) Proposed Rule for Fiscal Year (FY) 2016. http://www.aha.org/advocacy-issues/letter/2015/150616-cl-cms1632-p-ipps.pdf. Accessed on January 10, 2017.
28. Bernheim SM, Parzynski CS, Horwitz L, et al. Accounting For Patients’ Socioeconomic Status Does Not Change Hospital Readmission Rates. Health Aff (Millwood). 2016;35(8):1461-1470. PubMed

Journal of Hospital Medicine. 2018;13(8):537-543. Published online first February 12, 2018.

© 2018 Society of Hospital Medicine

Correspondence
Saul Blecker, MD, MHS, NYU School of Medicine, 227 E. 30th St., Room 734, New York, NY 10016; Telephone: 646-501-2513; Fax: 646-501-2706; E-mail: [email protected]

Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data

The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined "planned." We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, they should not be removed from the measure outcome, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital-wide readmission measure. The measure (NQF #1789) was endorsed in April 2012.
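
A minimal sketch of that two-step flow is below, assuming each readmission's diagnoses and procedures have already been mapped to AHRQ CCS categories. The small category sets are placeholders standing in for the full lists in Supporting Information, Appendix A, and, for simplicity, the always-planned categories are checked against the principal diagnosis only; this is not the actual CMS algorithm code.

```python
from dataclasses import dataclass, field

# Placeholder category sets; the real algorithm uses the full CCS lists in Appendix A.
ALWAYS_PLANNED = {"maintenance chemotherapy", "rehabilitation",
                  "obstetrical delivery", "major organ transplant"}
POTENTIALLY_PLANNED_PROCEDURES = {"heart valve procedures", "spinal fusion"}  # abridged
ACUTE_DIAGNOSES = {"acute myocardial infarction", "septicemia"}               # abridged

@dataclass
class Readmission:
    principal_diagnosis: str                       # CCS category of the principal diagnosis
    procedures: set = field(default_factory=set)   # CCS categories of procedures performed

def is_planned(r: Readmission) -> bool:
    # Step 1: some readmission types are always considered planned.
    if r.principal_diagnosis in ALWAYS_PLANNED:
        return True
    # Step 2: otherwise, planned = a potentially planned procedure occurred AND
    # the principal diagnosis is not an acute illness or complication of care.
    has_planned_procedure = bool(r.procedures & POTENTIALLY_PLANNED_PROCEDURES)
    return has_planned_procedure and r.principal_diagnosis not in ACUTE_DIAGNOSES
```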

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety‐net status. Each included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital‐wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee‐for‐service (FFS); discharged from a nonfederal, short‐stay, acute‐care hospital or critical access hospital; without an in‐hospital death; not transferred to another acute‐care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System‐exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to be confident of these values within ±10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in its healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in its healthcare system.

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final tool into a secure, password-protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1-hour training session to become familiar with reviewing medical charts, defining planned/unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter-rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.
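
For the inter-rater reliability check, the kappa statistic can be computed directly from the paired determinations; a small illustrative sketch follows (the labels below are made up for illustration, not study data).

```python
from sklearn.metrics import cohen_kappa_score

# Paired planned/unplanned determinations on the ~10% re-abstracted sample
# (illustrative values only).
primary_abstractor = ["planned", "unplanned", "unplanned", "planned", "unplanned"]
second_abstractor  = ["planned", "unplanned", "planned",   "planned", "unplanned"]

kappa = cohen_kappa_score(primary_abstractor, second_abstractor)
print(f"Inter-rater kappa: {kappa:.2f}")
```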

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm-identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (≤600 beds).
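
The weighting can be expressed compactly: within-stratum rates (positive predictive value among algorithm-planned charts, negative predictive value among algorithm-unplanned charts) are recombined using the national share of algorithm-planned readmissions. The sketch below is one way to do this; for illustration it uses the full-cohort chart counts reported in the Results, and it approximately reproduces the full-cohort values in Table 2 (small differences may reflect rounding or more granular weighting in the study).

```python
def weighted_test_characteristics(tp, fp, fn, tn, p_algo_planned=0.078):
    """Weighted sensitivity/specificity/PPV/NPV for a stratified chart sample.

    tp, fp: chart-planned / chart-unplanned counts among algorithm-planned charts.
    fn, tn: chart-planned / chart-unplanned counts among algorithm-unplanned charts.
    p_algo_planned: national proportion of readmissions the algorithm calls planned.
    """
    ppv = tp / (tp + fp)   # P(truly planned | algorithm says planned)
    npv = tn / (fn + tn)   # P(truly unplanned | algorithm says unplanned)
    # Reweight the two sampling strata to the national mix of algorithm calls.
    p_true_planned = ppv * p_algo_planned + (1 - npv) * (1 - p_algo_planned)
    sensitivity = (ppv * p_algo_planned) / p_true_planned
    specificity = (npv * (1 - p_algo_planned)) / (1 - p_true_planned)
    return sensitivity, specificity, ppv, npv

# Full-cohort counts from the Results: 181/170 among algorithm-planned charts,
# 15/266 among algorithm-unplanned charts.
print(weighted_test_characteristics(tp=181, fp=170, fn=15, tn=266))
```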

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.
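
Per-category positive predictive value is a straightforward grouped proportion; a minimal sketch with assumed column names follows.

```python
import pandas as pd

def ppv_by_procedure(charts: pd.DataFrame) -> pd.DataFrame:
    """Positive predictive value of the algorithm for each qualifying procedure category.

    Expects one row per algorithm-planned readmission with columns
    'procedure_ccs' and 'chart_planned' (True if chart review confirmed the
    readmission was planned); both names are assumptions.
    """
    grouped = charts.groupby("procedure_ccs")["chart_planned"]
    out = grouped.agg(total="size", verified_planned="sum")
    out["ppv"] = out["verified_planned"] / out["total"]
    return out.sort_values("total", ascending=False)
```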

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall, we were able to select 80% of the hospitals' planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%-100% per hospital). The kappa statistic for inter-rater reliability was 0.83.

Table 1. Hospital Characteristics

| Description | Hospitals, N | Readmissions Selected for Review, N* | Readmissions Reviewed, N (% of Eligible) | Unplanned Readmissions Reviewed, N | Planned Readmissions Reviewed, N | % of Hospital's Planned Readmissions Reviewed* |
|---|---|---|---|---|---|---|
| All hospitals | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3 |
| No. of beds: >600 | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5 |
| No. of beds: >300-600 | 2 | 190 | 173 (91.1) | 85 | 88 | 87.1 |
| No. of beds: <300 | 3 | 127 | 122 (96.0) | 82 | 40 | 44.9 |
| Ownership: Government | 0 |  |  |  |  |  |
| Ownership: For profit | 0 |  |  |  |  |  |
| Ownership: Not for profit | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3 |
| Teaching status: Teaching | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5 |
| Teaching status: Nonteaching | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4 |
| Safety net status: Safety net | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5 |
| Safety net status: Nonsafety net | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4 |
| Region: New England | 3 | 409 | 392 (95.8) | 155 | 237 | 85.9 |
| Region: South Central | 4 | 254 | 242 (95.3) | 128 | 114 | 64.0 |

NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

Test Characteristics of Algorithm

The weighted test characteristics of the current algorithm (v2.1) are shown in Table 2. Overall, the algorithm correctly identified 266 readmissions as unplanned and 181 readmissions as planned, and misidentified 170 readmissions as planned and 15 as unplanned. Once weighted to account for the stratified sampling design, the overall prevalence of true planned readmissions was 8.9% of readmissions. The weighted sensitivity was 45.1% overall and was higher in large teaching centers than in smaller community hospitals. The weighted specificity was 95.9%. The positive predictive value was 51.6%, and the negative predictive value was 94.7%.

Table 2. Test Characteristics of the Algorithm

| Cohort | Sensitivity | Specificity | Positive Predictive Value | Negative Predictive Value |
|---|---|---|---|---|
| Algorithm v2.1: Full cohort | 45.1% | 95.9% | 51.6% | 94.7% |
| Algorithm v2.1: Large hospitals | 50.9% | 96.1% | 53.8% | 95.6% |
| Algorithm v2.1: Small hospitals | 40.2% | 95.5% | 47.7% | 94.0% |
| Revised algorithm v3.0: Full cohort | 49.8% | 96.5% | 58.7% | 94.5% |
| Revised algorithm v3.0: Large hospitals | 57.1% | 96.8% | 63.0% | 95.9% |
| Revised algorithm v3.0: Small hospitals | 42.6% | 95.9% | 52.6% | 93.9% |

Accuracy of Individual Diagnoses and Procedures

The positive predictive value of the algorithm for individual procedure categories varied widely, from 0% to 100% among procedures with at least 10 cases (Table 3). The procedure for which the algorithm was least accurate was CCS 211, therapeutic radiology for cancer treatment (0% positive predictive value). By contrast, maintenance chemotherapy (90%) and other therapeutic procedures, hemic and lymphatic system (100%) were most accurate. Common procedures with less than 50% positive predictive value (ie, that the algorithm commonly misclassified as planned) were diagnostic cardiac catheterization (25%); debridement of wound, infection, or burn (25%); amputation of lower extremity (29%); insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (33%); and other hernia repair (43%). Of these, diagnostic cardiac catheterization and cardiac devices are the first and second most common procedures nationally, respectively.

Table 3. Positive Predictive Value of Algorithm by Procedure Category (Among Procedures With at Least Ten Readmissions in Validation Cohort)

| Readmission Procedure CCS Code | Total Categorized as Planned by Algorithm, N | Verified as Planned by Chart Review, N | Positive Predictive Value |
|---|---|---|---|
| 47 Diagnostic cardiac catheterization; coronary arteriography | 44 | 11 | 25% |
| 224 Cancer chemotherapy | 40 | 22 | 55% |
| 157 Amputation of lower extremity | 31 | 9 | 29% |
| 49 Other operating room heart procedures | 27 | 16 | 59% |
| 48 Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator | 24 | 8 | 33% |
| 43 Heart valve procedures | 20 | 16 | 80% |
| Maintenance chemotherapy (diagnosis CCS 45) | 20 | 18 | 90% |
| 78 Colorectal resection | 18 | 9 | 50% |
| 169 Debridement of wound, infection or burn | 16 | 4 | 25% |
| 84 Cholecystectomy and common duct exploration | 16 | 5 | 31% |
| 99 Other OR gastrointestinal therapeutic procedures | 16 | 8 | 50% |
| 158 Spinal fusion | 15 | 11 | 73% |
| 142 Partial excision bone | 14 | 10 | 71% |
| 86 Other hernia repair | 14 | 6 | 42% |
| 44 Coronary artery bypass graft | 13 | 10 | 77% |
| 67 Other therapeutic procedures, hemic and lymphatic system | 13 | 13 | 100% |
| 211 Therapeutic radiology for cancer treatment | 12 | 0 | 0% |
| 45 Percutaneous transluminal coronary angioplasty | 11 | 7 | 64% |
| Total | 497 | 272 | 54.7% |

NOTE: Abbreviations: CCS, Clinical Classification Software; OR, operating room.

The readmissions with least abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether these readmissions were appropriate to call planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD-9-CM codes (pancreatic disorders [CCS 152] and biliary tract disease [CCS 149]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.

Table 4. Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale

| Action | Diagnosis or Procedure Category | Algorithm/Chart Classification, N | Rationale for Change |
|---|---|---|---|
| Remove from planned procedure list | Therapeutic radiation (CCS 211) | Accurate: planned/planned 0; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 12 | The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by the algorithm v2.1. |
| Remove from planned procedure list | Cancer chemotherapy (CCS 224) | Accurate: planned/planned 22; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 18 | Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would only miss a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by the algorithm v2.1. |
| Add to planned procedure list | None |  | The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added as procedures that potentially qualify an admission as planned. |
| Remove from acute diagnosis list | None |  | The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset. |
| Add to acute diagnosis list | Hypertension with complications (CCS 99) | Accurate: planned/planned 1; unplanned/unplanned 2. Inaccurate: unplanned/planned 0; planned/unplanned 10 | This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate. |
| Split diagnosis condition category into component ICD-9 codes | Pancreatic disorders (CCS 152) | Accurate: planned/planned 0; unplanned/unplanned 1. Inaccurate: unplanned/planned 0; planned/unplanned 2 | ICD-9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD-9 code 577.0 to the acute list and leaving the rest of the ICD-9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis. |
| Split diagnosis condition category into component ICD-9 codes | Biliary tract disease (CCS 149) | Accurate: planned/planned 2; unplanned/unplanned 3. Inaccurate: unplanned/planned 0; planned/unplanned 12 | This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD-9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1. |
| Consider for change after additional study | Diagnostic cardiac catheterization (CCS 47) | Accurate: planned/planned 3*; unplanned/unplanned 13*. Inaccurate: unplanned/planned 0*; planned/unplanned 25* | The algorithm misclassified as planned 25/38 (66%) unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest volume procedure in national data. |
| Consider for change after additional study | Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48) | Accurate: planned/planned 7†; unplanned/unplanned 1†. Inaccurate: unplanned/planned 1†; planned/unplanned 4† | The algorithm misclassified as planned 4/5 (80%) unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally. |

NOTE: Abbreviations: CCS, Clinical Classification Software; ICD-9, International Classification of Diseases, Ninth Revision. *Number of cases in which CCS 47 was the only qualifying procedure. †Number of cases in which CCS 48 was the only qualifying procedure.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also only identifies approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2%. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual‐case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have larger impact on reporting rates, and which therefore merit further evaluation.

There are other potential methods of identifying planned readmissions. For instance, as of October 1, 2013, new administrative billing codes were created to allow hospitals to indicate that a patient was discharged with a planned acute‐care hospital inpatient readmission, without limitation as to when it will take place.[11] This code must be used at the time of the index admission to indicate that a future planned admission is expected, and was specified only to be used for neonates and patients with acute myocardial infarction. This approach, however, would omit planned readmissions that are not known to the initial discharging team, potentially missing planned readmissions. Conversely, some patients discharged with a plan for readmission may be unexpectedly readmitted for an unplanned reason. Given that the new codes were not available at the time we conducted the validation study, we were not able to determine how often the billing codes accurately identified planned readmissions. This would be an important area to consider for future study.

An alternative approach would be to create indicator codes to be applied at the time of readmission that would indicate whether that admission was planned or unplanned. Such a code would have the advantage of allowing each planned readmission to be flagged by the admitting clinicians at the time of admission rather than by an algorithm that inherently cannot be perfect. However, identifying planned readmissions at the time of readmission would also create opportunity for gaming and inconsistent application of definitions between hospitals; additional checks would need to be put in place to guard against these possibilities.
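
One such check could be a periodic audit that compares the share of readmissions each hospital flags as planned against the share expected from claims-based methods (roughly 7.8% nationally under the current algorithm). The Python sketch below is purely illustrative; the two-fold threshold and the data layout are arbitrary assumptions, not a proposed policy.

# Illustrative audit: flag hospitals whose clinician-reported planned share is far
# above the nationally expected share (threshold chosen arbitrarily here).
EXPECTED_PLANNED_SHARE = 0.078  # approximate national share identified by the algorithm

def outlier_hospitals(readmissions, threshold=2.0):
    # readmissions: iterable of (hospital_id, clinician_flagged_planned) pairs.
    counts = {}
    for hospital_id, flagged in readmissions:
        total, planned = counts.get(hospital_id, (0, 0))
        counts[hospital_id] = (total + 1, planned + (1 if flagged else 0))
    return [hospital for hospital, (total, planned) in counts.items()
            if planned / total > threshold * EXPECTED_PLANNED_SHARE]

print(outlier_hospitals([("A", True), ("A", True), ("A", False), ("B", False)]))  # ['A']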

Our study has some limitations. We relied on the opinion of chart abstractors to determine whether a readmission was planned or unplanned; in a few cases, such as smoldering wounds that ultimately require surgical intervention, that determination is debatable. Abstractions were done at local institutions to minimize risks to patient privacy, and therefore we could not centrally verify determinations of planned status except by reviewing source of admission, dates of procedures, and narrative comments reported by the abstractors. Finally, we did not have sufficient volume of planned procedures to determine accuracy of the algorithm for less common procedure categories or individual procedures within categories.

In summary, we developed an algorithm to identify planned readmissions from administrative data that had high specificity and moderate sensitivity, and refined it based on chart validation. This algorithm is in use in public reporting of readmission measures to maximize the probability that the reported readmission rates represent truly unplanned readmissions.[12]

Disclosures: Financial support: This work was performed under contract HHSM‐500‐2008‐0025I/HHSM‐500‐T0001, Modification No. 000008, titled "Measure Instrument Development and Support," funded by the Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services. Drs. Horwitz and Ross are supported by the National Institute on Aging (K08 AG038336 and K08 AG032886, respectively) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Krumholz is supported by grant U01 HL105270‐05 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. No funding source had any role in the study design; in the collection, analysis, and interpretation of data; or in the writing of the article. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript. All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that all authors have support from the CMS for the submitted work. In addition, Dr. Ross is a member of a scientific advisory board for FAIR Health Inc. Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth and is the recipient of research agreements from Medtronic and Johnson & Johnson through Yale University, to develop methods of clinical trial data sharing. All other authors report no conflicts of interest.

References
  1. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  2. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  3. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30‐day all‐cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  4. Grosso LM, Curtis JP, Lin Z, et al. Hospital‐level 30‐day all‐cause risk‐standardized readmission rate following elective primary total hip arthroplasty (THA) and/or total knee arthroplasty (TKA). Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page
  5. Walraven C, Jennings A, Forster AJ. A meta‐analysis of hospital 30‐day avoidable readmission rates. J Eval Clin Pract. 2011;18(6):1211-1218.
  6. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  7. Horwitz LI, Partovian C, Lin Z, et al. Centers for Medicare 3(4):477-492.
  8. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343.
  9. Centers for Medicare and Medicaid Services. Inpatient Prospective Payment System/Long‐Term Care Hospital (IPPS/LTCH) final rule. Fed Regist. 2013;78:50533-50534.
  10. Long SK, Stockley K, Dahlen H. Massachusetts health reforms: uninsurance remains low, self‐reported health status improves as state prepares to tackle costs. Health Aff (Millwood). 2012;31(2):444-451.
The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined planned. We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement, these admissions should not be removed from the measure outcome. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Disease, Ninth Revision, Clinical Modification (ICD‐9‐CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital‐wide readmission measure. The measure (NQR #1789) received endorsement in April 2012.

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety‐net status. Each included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital‐wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee‐for‐service (FFS); discharged from a nonfederal, short‐stay, acute‐care hospital or critical access hospital; without an in‐hospital death; not transferred to another acute‐care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System‐exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to be confident of these values 10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in its healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in its healthcare system.

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final the tool into a secure, password‐protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1‐hour training session to become familiar with reviewing medical charts, defining planned/unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter‐rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm‐identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (600 beds).

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall we were able to select 80% of hospitals planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%100% per hospital). The kappa statistic for inter‐rater reliability was 0.83.

Hospital Characteristics
DescriptionHospitals, NReadmissions Selected for Review, N*Readmissions Reviewed, N (% of Eligible)Unplanned Readmissions Reviewed, NPlanned Readmissions Reviewed, N% of Hospital's Planned Readmissions Reviewed*
  • NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

All hospitals7663634 (95.6)28335177.3
No. of beds>6002346339 (98.0)11622384.5
>3006002190173 (91.1)858887.1
<3003127122 (96.0)824044.9
OwnershipGovernment0     
For profit0     
Not for profit7663634 (95.6)28335177.3
Teaching statusTeaching2346339 (98.0)11622384.5
Nonteaching5317295 (93.1)16712867.4
Safety net statusSafety net2346339 (98.0)11622384.5
Nonsafety net5317295 (93.1)16712867.4
RegionNew England3409392 (95.8)15523785.9
South Central4254242 (95.3)12811464.0

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

Test Characteristics of Algorithm

The weighted test characteristics of the current algorithm (v2.1) are shown in Table 2. Overall, the algorithm correctly identified 266 readmissions as unplanned and 181 readmissions as planned, and misidentified 170 readmissions as planned and 15 as unplanned. Once weighted to account for the stratified sampling design, the overall prevalence of true planned readmissions was 8.9% of readmissions. The weighted sensitivity was 45.1% overall and was higher in large teaching centers than in smaller community hospitals. The weighted specificity was 95.9%. The positive predictive value was 51.6%, and the negative predictive value was 94.7%.

Test Characteristics of the Algorithm
CohortSensitivitySpecificityPositive Predictive ValueNegative Predictive Value
Algorithm v2.1
Full cohort45.1%95.9%51.6%94.7%
Large hospitals50.9%96.1%53.8%95.6%
Small hospitals40.2%95.5%47.7%94.0%
Revised algorithm v3.0
Full cohort49.8%96.5%58.7%94.5%
Large hospitals57.1%96.8%63.0%95.9%
Small hospitals42.6%95.9%52.6%93.9%

Accuracy of Individual Diagnoses and Procedures

The positive predictive value of the algorithm for individual procedure categories varied widely, from 0% to 100% among procedures with at least 10 cases (Table 3). The procedure for which the algorithm was least accurate was CCS 211, therapeutic radiology for cancer treatment (0% positive predictive value). By contrast, maintenance chemotherapy (90%) and other therapeutic procedures, hemic and lymphatic system (100%) were most accurate. Common procedures with less than 50% positive predictive value (ie, that the algorithm commonly misclassified as planned) were diagnostic cardiac catheterization (25%); debridement of wound, infection, or burn (25%); amputation of lower extremity (29%); insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (33%); and other hernia repair (43%). Of these, diagnostic cardiac catheterization and cardiac devices are the first and second most common procedures nationally, respectively.

Positive Predictive Value of Algorithm by Procedure Category (Among Procedures With at Least Ten Readmissions in Validation Cohort)
Readmission Procedure CCS CodeTotal Categorized as Planned by Algorithm, NVerified as Planned by Chart Review, NPositive Predictive Value
  • NOTE: Abbreviations: CCS, Clinical Classification Software; OR, operating room.

47 Diagnostic cardiac catheterization; coronary arteriography441125%
224 Cancer chemotherapy402255%
157 Amputation of lower extremity31929%
49 Other operating room heart procedures271659%
48 Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator24833%
43 Heart valve procedures201680%
Maintenance chemotherapy (diagnosis CCS 45)201890%
78 Colorectal resection18950%
169 Debridement of wound, infection or burn16425%
84 Cholecystectomy and common duct exploration16531%
99 Other OR gastrointestinal therapeutic procedures16850%
158 Spinal fusion151173%
142 Partial excision bone141071%
86 Other hernia repair14642%
44 Coronary artery bypass graft131077%
67 Other therapeutic procedures, hemic and lymphatic system1313100%
211 Therapeutic radiology for cancer treatment1200%
45 Percutaneous transluminal coronary angioplasty11764%
Total49727254.7%

The readmissions with least abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether these readmissions were appropriate to call planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD‐9‐CM codes (pancreatic disorders [CCS 149] and biliary tract disease [CCS 152]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.

Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale
Each entry below gives the proposed action, the diagnosis or procedure category, the algorithm versus chart-review classification counts (N), and the rationale for the change.
NOTE: Abbreviations: CCS, Clinical Classifications Software; ICD-9, International Classification of Diseases, Ninth Revision. Counts are shown as algorithm classification/chart-review classification. *For CCS 47 and CCS 48, counts are the number of cases in which that CCS was the only qualifying procedure.

Remove from planned procedure list: Therapeutic radiation (CCS 211)
Counts: accurate, Planned/Planned 0 and Unplanned/Unplanned 0; inaccurate, Unplanned/Planned 0 and Planned/Unplanned 12.
Rationale: The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by the algorithm v2.1.

Remove from planned procedure list: Cancer chemotherapy (CCS 224)
Counts: accurate, Planned/Planned 22 and Unplanned/Unplanned 0; inaccurate, Unplanned/Planned 0 and Planned/Unplanned 18.
Rationale: Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would only miss a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by the algorithm v2.1.

Add to planned procedure list: None
Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added as procedures that potentially qualify an admission as planned.

Remove from acute diagnosis list: None
Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset.

Add to acute diagnosis list: Hypertension with complications (CCS 99)
Counts: accurate, Planned/Planned 1 and Unplanned/Unplanned 2; inaccurate, Unplanned/Planned 0 and Planned/Unplanned 10.
Rationale: This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate.

Split diagnosis condition category into component ICD-9 codes: Pancreatic disorders (CCS 152)
Counts: accurate, Planned/Planned 0 and Unplanned/Unplanned 1; inaccurate, Unplanned/Planned 0 and Planned/Unplanned 2.
Rationale: ICD-9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD-9 code 577.0 to the acute list and leaving the rest of the ICD-9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis.

Split diagnosis condition category into component ICD-9 codes: Biliary tract disease (CCS 149)
Counts: accurate, Planned/Planned 2 and Unplanned/Unplanned 3; inaccurate, Unplanned/Planned 0 and Planned/Unplanned 12.
Rationale: This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD-9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1.

Consider for change after additional study: Diagnostic cardiac catheterization (CCS 47)
Counts*: accurate, Planned/Planned 3 and Unplanned/Unplanned 13; inaccurate, Unplanned/Planned 0 and Planned/Unplanned 25.
Rationale: The algorithm misclassified as planned 25/38 (66%) unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest volume procedure in national data.

Consider for change after additional study: Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48)
Counts*: accurate, Planned/Planned 7 and Unplanned/Unplanned 1; inaccurate, Unplanned/Planned 1 and Planned/Unplanned 4.
Rationale: The algorithm misclassified as planned 4/5 (80%) unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.
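
For readers less used to these test-characteristic terms, the sketch below shows how they are computed from a 2-by-2 comparison of the algorithm's classification against chart review; it is illustrative only and does not reproduce the survey-style weighting used for the weighted estimates quoted above.

  def classification_summary(tp, fp, fn, tn):
      """tp: algorithm planned, chart planned; fp: algorithm planned, chart unplanned;
      fn: algorithm unplanned, chart planned; tn: algorithm unplanned, chart unplanned."""
      return {
          "sensitivity": tp / (tp + fn),   # share of truly planned readmissions the algorithm flags
          "specificity": tn / (tn + fp),   # share of truly unplanned readmissions it leaves in the outcome
          "ppv":         tp / (tp + fp),   # share of algorithm-planned readmissions that are truly planned
          "npv":         tn / (tn + fn),   # share of algorithm-unplanned readmissions that are truly unplanned
      }

  # Unweighted example: of the 497 readmissions the algorithm flagged as planned,
  # chart review confirmed 272, so the unweighted positive predictive value is
  # 272 / 497, or about 54.7% (the total row of the agreement table above); the
  # weighted values in the text additionally re-weight chart-review strata to
  # their national prevalence.
  unweighted_ppv = 272 / 497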

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also only identifies approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar in magnitude to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2 percentage points. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual-case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]
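
A back-of-the-envelope version of this arithmetic, using only the approximate figures quoted above (the variable names and intermediate quantities are ours, and rounding in the quoted inputs means the pieces do not reconcile exactly):

  unplanned_rate   = 16.0    # %, the reported (unplanned) readmission rate, 2011-2012 data
  planned_fraction = 0.078   # fraction of all readmissions the current algorithm labels planned

  total_rate   = unplanned_rate / (1 - planned_fraction)   # ~17.4%: all readmissions, planned or not
  flagged_rate = planned_fraction * total_rate              # ~1.4 points currently excluded as planned

  # Roughly half of the excluded readmissions are actually unplanned;
  # a perfect algorithm would add them back to the outcome (~0.6-0.7 points).
  added_back = 0.5 * flagged_rate

  # True planned readmissions are roughly 9% of all readmissions; the ones the current
  # algorithm misses are still inside the reported rate and would be removed (~0.8 points).
  true_planned     = 0.09 * total_rate
  still_in_outcome = true_planned - (flagged_rate - added_back)

  net_change = added_back - still_in_outcome   # ~ -0.2 points, matching the net decrease described above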

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have a larger impact on reported rates and which therefore merit further evaluation.

There are other potential methods of identifying planned readmissions. For instance, as of October 1, 2013, new administrative billing codes were created to allow hospitals to indicate that a patient was discharged with a planned acute-care hospital inpatient readmission, without limitation as to when it will take place.[11] This code must be used at the time of the index admission to indicate that a future planned admission is expected, and was specified only to be used for neonates and patients with acute myocardial infarction. This approach, however, would omit planned readmissions that are not known to the initial discharging team. Conversely, some patients discharged with a plan for readmission may be unexpectedly readmitted for an unplanned reason. Given that the new codes were not available at the time we conducted the validation study, we were not able to determine how often the billing codes accurately identified planned readmissions. This would be an important area to consider for future study.

An alternative approach would be to create indicator codes to be applied at the time of readmission that would indicate whether that admission was planned or unplanned. Such a code would have the advantage of allowing each planned readmission to be flagged by the admitting clinicians at the time of admission rather than by an algorithm that inherently cannot be perfect. However, identifying planned readmissions at the time of readmission would also create opportunity for gaming and inconsistent application of definitions between hospitals; additional checks would need to be put in place to guard against these possibilities.

Our study has some limitations. We relied on the opinion of chart abstractors to determine whether a readmission was planned or unplanned; in a few cases, such as smoldering wounds that ultimately require surgical intervention, that determination is debatable. Abstractions were done at local institutions to minimize risks to patient privacy, and therefore we could not centrally verify determinations of planned status except by reviewing source of admission, dates of procedures, and narrative comments reported by the abstractors. Finally, we did not have sufficient volume of planned procedures to determine accuracy of the algorithm for less common procedure categories or individual procedures within categories.

In summary, we developed an algorithm to identify planned readmissions from administrative data that had high specificity and moderate sensitivity, and refined it based on chart validation. This algorithm is in use in public reporting of readmission measures to maximize the probability that the reported readmission rates represent truly unplanned readmissions.[12]

Disclosures: Financial support: This work was performed under contract HHSM-500-2008-0025I/HHSM-500-T0001, Modification No. 000008, titled Measure Instrument Development and Support, funded by the Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services. Drs. Horwitz and Ross are supported by the National Institute on Aging (K08 AG038336 and K08 AG032886, respectively) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Krumholz is supported by grant U01 HL105270-05 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. No funding source had any role in the study design; in the collection, analysis, and interpretation of data; or in the writing of the article. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript. All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that all authors have support from the CMS for the submitted work. In addition, Dr. Ross is a member of a scientific advisory board for FAIR Health Inc. Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth and is the recipient of research agreements from Medtronic and Johnson & Johnson through Yale University, to develop methods of clinical trial data sharing. All other authors report no conflicts of interest.

References
  1. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  2. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  3. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  4. Grosso LM, Curtis JP, Lin Z, et al. Hospital-level 30-day all-cause risk-standardized readmission rate following elective primary total hip arthroplasty (THA) and/or total knee arthroplasty (TKA). Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page161(supp10 l):S66S75.
  5. Walraven C, Jennings A, Forster AJ. A meta-analysis of hospital 30-day avoidable readmission rates. J Eval Clin Pract. 2011;18(6):1211-1218.
  6. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  7. Horwitz LI, Partovian C, Lin Z, et al. Centers for Medicare 3(4):477-492.
  8. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343.
  9. Centers for Medicare and Medicaid Services. Inpatient Prospective Payment System/Long-Term Care Hospital (IPPS/LTCH) final rule. Fed Regist. 2013;78:50533-50534.
  10. Long SK, Stockley K, Dahlen H. Massachusetts health reforms: uninsurance remains low, self-reported health status improves as state prepares to tackle costs. Health Aff (Millwood). 2012;31(2):444-451.

Issue
Journal of Hospital Medicine - 10(10)
Page Number
670-677
Display Headline
Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Leora Horwitz, MD, Department of Population Health, NYU School of Medicine, 550 First Avenue, TRB, Room 607, New York, NY 10016; Telephone: 646-501-2685; Fax: 646-501-2706; E-mail: [email protected]

Readmission and Mortality [Rates] in Pneumonia

Display Headline
The Performance of US Hospitals as Reflected in Risk-Standardized 30-Day Mortality and Readmission Rates for Medicare Beneficiaries with Pneumonia

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia-specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7-9

In 2008 CMS expanded its public reporting initiatives by adding risk-standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30-day risk-standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30-day risk-standardized mortality and readmission for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13
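
As a schematic of how such a grouper is applied (the mapping below is a tiny made-up fragment with invented category labels, not the actual crosswalk):

  # Illustrative fragment of an ICD-9-CM -> condition category lookup; the real grouper
  # collapses more than 15,000 diagnosis codes into 189 condition categories.
  ICD9_TO_CONDITION_CATEGORY = {
      "428.0":  "heart failure",
      "250.00": "diabetes without complication",
      "496":    "chronic obstructive pulmonary disease",
  }

  def comorbidity_profile(prior_year_diagnosis_codes):
      """Collapse all diagnosis codes from the prior year's inpatient, outpatient,
      and carrier claims into the patient's set of condition categories."""
      return {ICD9_TO_CONDITION_CATEGORY[code]
              for code in prior_year_diagnosis_codes
              if code in ICD9_TO_CONDITION_CATEGORY}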

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
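
The rate calculation described here is a form of indirect standardization; the following minimal sketch (our own notation, with the hierarchical model fitting itself omitted) shows the final step:

  import numpy as np

  def risk_standardized_rate(p_predicted, p_expected, national_unadjusted_rate):
      """p_predicted: each patient's probability of the outcome using the hospital's own
      estimated intercept; p_expected: the same patients' probabilities using the average
      hospital intercept; the ratio of the sums is scaled by the national unadjusted rate."""
      return float(np.sum(p_predicted) / np.sum(p_expected)) * national_unadjusted_rate

  # Illustrative hospital whose patients do slightly worse than their case mix predicts.
  p_pred = np.array([0.14, 0.10, 0.22, 0.08])
  p_exp  = np.array([0.12, 0.09, 0.20, 0.08])
  rsmr = risk_standardized_rate(p_pred, p_exp, national_unadjusted_rate=11.6)   # a bit above 11.6%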

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Unlike a hospital‐level analysis, the hospital referral regions represent regional markets for tertiary care and are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes and provide a more detailed look at variation in outcomes than results at the state level.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
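
The inverse-variance weighting used for the regional summaries amounts to the following (the hospital rates and variances shown are invented for illustration):

  import numpy as np

  def inverse_variance_weighted_mean(rates, variances):
      """Regional mean in which each hospital is weighted by 1 / variance of its estimated
      rate, so larger hospitals with more precise estimates contribute more."""
      weights = 1.0 / np.asarray(variances, dtype=float)
      return float(np.sum(weights * np.asarray(rates, dtype=float)) / np.sum(weights))

  # Three hospitals in one referral region: the precisely estimated (large) hospital dominates.
  regional_mean = inverse_variance_weighted_mean(
      rates=[12.5, 10.0, 11.0],        # risk-standardized rates, %
      variances=[0.10, 1.50, 0.80],    # variances of those estimates; small variance = large caseload
  )                                    # ~12.2%, pulled toward the most precisely estimated hospital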

To determine whether a hospital's performance is significantly different than the national rate, we measured whether the 95% interval estimate for the risk-standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3-year period are excluded from this categorization on Hospital Compare.
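
Written out, this categorization reduces to a simple comparison of the interval estimate with the national crude rate; a schematic version follows (the exact conventions for ties and for constructing the interval are not spelled out here, and the function name is ours):

  def hospital_compare_category(interval_low, interval_high, national_crude_rate, n_cases):
      """Classify a hospital's risk-standardized rate (all values in %) against the
      national crude rate using its 95% interval estimate."""
      if n_cases < 25:
          return "not categorized (fewer than 25 cases in the 3-year period)"
      if interval_high < national_crude_rate:
          return "better than the US national rate"
      if interval_low > national_crude_rate:
          return "worse than the US national rate"
      return "no different than the US national rate"

  category = hospital_compare_category(9.2, 10.9, national_crude_rate=11.6, n_cases=400)
  # -> "better than the US national rate"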

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis, 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30-day risk-standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30-day risk-standardized mortality rates of 9.0%, while for those at the 90th percentile of performance the rate was 13.5%. The odds of all-cause mortality for a patient treated at a hospital that was one standard deviation above the national average were 1.68 times higher than those of a patient treated at a hospital that was one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates
 | Mortality | Readmission
Abbreviation: SD, standard deviation.

Patients (n) | 1,118,583 | 1,161,817
Hospitals (n) | 4788 | 4813
Patient age, years, median (Q1, Q3) | 81 (74, 86) | 80 (74, 86)
Nonwhite, % | 11.1 | 11.1
Hospital case volume, median (Q1, Q3) | 168 (77, 323) | 174 (79, 334)
Risk-standardized hospital rate, mean (SD) | 11.2 (1.2) | 18.3 (0.9)
Minimum | 6.7 | 13.6
1st percentile | 7.5 | 14.9
5th percentile | 8.5 | 15.8
10th percentile | 9.0 | 16.4
25th percentile | 10.0 | 17.2
Median | 11.1 | 18.2
75th percentile | 12.3 | 19.2
90th percentile | 13.5 | 20.4
95th percentile | 14.4 | 21.1
99th percentile | 16.1 | 22.8
Maximum | 20.9 | 26.7
Model fit statistics
c-Statistic | 0.72 | 0.63
Intrahospital correlation | 0.07 | 0.03

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis, 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30-day risk-standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30-day risk-standardized readmission rates of 16.4%, while for those at the 90th percentile of performance the rate was 20.4%. The odds of all-cause readmission for a patient treated at a hospital that was one standard deviation above the national average were 1.40 times higher than the odds of all-cause readmission if treated at a hospital that was one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3-year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse, and 333 (7.0%) had fewer than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
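
These coefficients of variation are consistent with the summary statistics in Table 1: taking CV = (SD / mean) × 100 gives 1.2 / 11.2 × 100 ≈ 10.7 for the risk-standardized mortality rates and 0.9 / 18.3 × 100 ≈ 4.9 for the readmission rates.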

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30-day risk-standardized mortality and readmission rates among hospital referral regions by quintile. The highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, East and West South Central, East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk-standardized regional 30-day pneumonia readmission rates. RSRR, risk-standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk-standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance were 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk-standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk-standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients only, and our findings may not be generalizable to younger patients. Second, our risk-adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally, our mortality and readmission models were validated against those based on medical record data and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that this is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with a 1-day length of stay from the mortality analysis to reduce the risk of including patients in the measure who did not actually have pneumonia, we did not exclude them from the readmission analysis because very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; however, it is possible that this may be related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-based Care in the United States, 2007 [Internet]. 2009 [cited 2009 Nov 7]. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP) [Internet]. 2007 [cited 2010 May 13]. Available at: http://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988-2002. JAMA. 2005;294(21):2712-2719.
  4. Heron M. Deaths: Leading Causes for 2006. NVSS [Internet]. 2010 Mar 31;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia [Internet]. [cited 2010 May 13]. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=108981596702326(1):7585.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  8. Medicare.gov - Hospital Compare [Internet]. [cited 2009 Nov 6]. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2000 [cited 2009 Nov 7]. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed June 2010.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk-Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30-Day Mortality Model for Pneumonia [Internet]. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2008. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 1999.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act [Internet]. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals - the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Legacy Keywords
community-acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematicia Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia-specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measures of processes of care (especially when they are based on sound evidence) can provide insights about quality and can help guide hospital improvement efforts, they necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7-9

In 2008 CMS expanded its public reporting initiatives by adding risk-standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30-day risk-standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30-day risk-standardized mortality and readmission rates for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.
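To make the eligibility criteria concrete, the sketch below (illustrative Python with hypothetical field names, not the measure's actual implementation) applies the age and principal-diagnosis filters described above.

    # Cohort filter sketch: principal pneumonia diagnosis (ICD-9-CM) and age 65+.
    # Code prefixes follow the list in the text; function and argument names are hypothetical.
    PNEUMONIA_PREFIXES = ("480", "481", "482", "483", "485", "486", "4870")

    def eligible(principal_dx_icd9: str, age: int) -> bool:
        code = principal_dx_icd9.replace(".", "")
        return age >= 65 and code.startswith(PNEUMONIA_PREFIXES)

    print(eligible("486", 78))    # True: pneumonia, organism unspecified
    print(eligible("038.9", 81))  # False: septicemia as the principal diagnosis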

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13
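As a rough illustration of this grouping step (the real grouper uses the published crosswalk of more than 15,000 ICD-9 codes to 189 condition categories; the handful of codes and category labels below are hypothetical placeholders):

    # Hypothetical miniature stand-in for the Hierarchical Condition Category crosswalk.
    ICD9_TO_CC = {
        "42731": "CC: cardiac arrhythmia",   # placeholder mapping
        "25000": "CC: diabetes",             # placeholder mapping
        "49121": "CC: COPD",                 # placeholder mapping
    }

    def condition_categories(prior_year_dx_codes):
        """Return the condition categories documented in the year before the index admission."""
        return {ICD9_TO_CC[c] for c in prior_year_dx_codes if c in ICD9_TO_CC}

    # Diagnoses pooled from inpatient, outpatient, and carrier claims for one beneficiary.
    print(condition_categories(["42731", "25000", "V5861"]))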

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).
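A minimal sketch of the outcome construction (illustrative pandas code with made-up data and column names): an admission that begins within 30 days of a prior index discharge is counted as a readmission and is not itself treated as an index hospitalization.

    import pandas as pd

    # One row per hospitalization; identifiers and dates are hypothetical.
    stays = pd.DataFrame({
        "bene_id":   ["A", "A", "A", "B"],
        "admit":     pd.to_datetime(["2008-01-01", "2008-01-20", "2008-06-01", "2008-03-10"]),
        "discharge": pd.to_datetime(["2008-01-05", "2008-01-25", "2008-06-07", "2008-03-15"]),
    }).sort_values(["bene_id", "admit"]).reset_index(drop=True)

    stays["is_index"] = True
    stays["is_readmission"] = False

    for _, grp in stays.groupby("bene_id"):
        last_index_discharge = None
        for i in grp.index:
            if last_index_discharge is not None and \
               (stays.at[i, "admit"] - last_index_discharge).days <= 30:
                # Within 30 days of an index discharge: count as a readmission
                # and do not let this stay serve as a new index hospitalization.
                stays.at[i, "is_readmission"] = True
                stays.at[i, "is_index"] = False
            else:
                last_index_discharge = stays.at[i, "discharge"]

    print(stays[["bene_id", "admit", "is_index", "is_readmission"]])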

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
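In our notation (one way to write the predicted-to-expected construction just described, not a quotation of the measure specification), for hospital j with patients i, patient covariates x_i, estimated hospital-specific intercept alpha_j, average intercept mu, regression coefficients beta, and national unadjusted rate y-bar, the risk-standardized rate is

    \mathrm{RSR}_j \;=\; \frac{\sum_{i \in j} \operatorname{logit}^{-1}\!\big(\hat{\alpha}_j + x_i^{\top}\hat{\beta}\big)}
                              {\sum_{i \in j} \operatorname{logit}^{-1}\!\big(\hat{\mu} + x_i^{\top}\hat{\beta}\big)} \times \bar{y},

where the numerator is the "predicted" number of events using the hospital's own intercept and the denominator is the "expected" number for the same patients evaluated at the average hospital intercept.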

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. In contrast to hospital-level analyses, hospital referral regions represent regional markets for tertiary care; they are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes, and they provide a more detailed look at variation in outcomes than state-level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
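A small numerical sketch of the inverse-variance weighting described above (hypothetical rates and variances, not study data): each hospital contributes in proportion to the reciprocal of the variance of its estimated rate, so high-volume hospitals with precise estimates dominate the regional summary.

    import numpy as np

    # Hypothetical risk-standardized rates (%) and estimated variances for one referral region.
    rates     = np.array([10.5, 12.0, 13.8])
    variances = np.array([0.20, 0.80, 1.60])  # smaller variance = larger caseload, more precision

    weights = 1.0 / variances
    regional_mean = np.sum(weights * rates) / np.sum(weights)
    print(round(regional_mean, 2))  # ~11.07, pulled toward the most precisely estimated hospital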

To determine whether a hospital's performance is significantly different from the national rate we measured whether the 95% interval estimate for the risk-standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3-year period are excluded from this categorization on Hospital Compare.
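A minimal sketch of this categorization rule (hypothetical interval estimate and case count; the actual measure derives the 95% interval from each hospital's estimated risk-standardized rate):

    def categorize(lower_95, upper_95, national_rate, n_cases):
        """Classify a hospital relative to the national unadjusted rate, as on Hospital Compare."""
        if n_cases < 25:
            return "not categorized (fewer than 25 cases over 3 years)"
        if upper_95 < national_rate:
            return "better than the US national rate"    # interval entirely below the national rate
        if lower_95 > national_rate:
            return "worse than the US national rate"     # interval entirely above the national rate
        return "no different than the US national rate"  # interval overlaps the national rate

    # Hypothetical hospital: 95% interval 9.1%-10.8% against a national mortality rate of 11.6%.
    print(categorize(9.1, 10.8, 11.6, n_cases=240))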

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30-day risk-standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30-day risk-standardized mortality rates of 9.0%, while for those at the 90th percentile of performance the rate was 13.5%. The odds of all-cause mortality for a patient treated at a hospital one standard deviation above the national average were 1.68 times higher than the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates
                                               Mortality        Readmission
Patients (n)                                   1,118,583        1,161,817
Hospitals (n)                                  4,788            4,813
Patient age, years, median (Q1, Q3)            81 (74, 86)      80 (74, 86)
Nonwhite, %                                    11.1             11.1
Hospital case volume, median (Q1, Q3)          168 (77, 323)    174 (79, 334)
Risk-standardized hospital rate, %, mean (SD)  11.2 (1.2)       18.3 (0.9)
  Minimum                                      6.7              13.6
  1st percentile                               7.5              14.9
  5th percentile                               8.5              15.8
  10th percentile                              9.0              16.4
  25th percentile                              10.0             17.2
  Median                                       11.1             18.2
  75th percentile                              12.3             19.2
  90th percentile                              13.5             20.4
  95th percentile                              14.4             21.1
  99th percentile                              16.1             22.8
  Maximum                                      20.9             26.7
Model fit statistics
  c-Statistic                                  0.72             0.63
  Intrahospital correlation                    0.07             0.03
Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30-day risk-standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30-day risk-standardized readmission rates of 16.4%, while for those at the 90th percentile of performance the rate was 20.4%. The odds of all-cause readmission for a patient treated at a hospital one standard deviation above the national average were 1.40 times higher than the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3-year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse, and 333 (7.0%) had fewer than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
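As a quick check against the summary statistics in Table 1 (mean (SD) risk-standardized rates of 11.2 (1.2) for mortality and 18.3 (0.9) for readmission), the coefficient of variation is simply the standard deviation expressed as a percentage of the mean:

    \mathrm{CV}_{\text{mortality}} = \frac{1.2}{11.2}\times 100 \approx 10.7,
    \qquad
    \mathrm{CV}_{\text{readmission}} = \frac{0.9}{18.3}\times 100 \approx 4.9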

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30-day risk-standardized mortality and readmission rates among hospital referral regions by quintile. The highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, East and West South Central, East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk-standardized regional 30-day pneumonia readmission rates. RSRR, risk-standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline-recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk-standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. Notably, the differences we observed in mortality and readmission rates across hospitals were not nearly as large as the differences in process performance reported in these earlier studies.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk-standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance were 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk-standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk-standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk-adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally, our mortality and readmission models were validated against those based on medical record data, and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that it is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with a length of stay of 1 day from the mortality analysis to reduce the risk of including patients who did not actually have pneumonia, we did not exclude them from the readmission analysis because a very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. However, if the experience in acute myocardial infarction can serve as a guide, it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates showed stronger geographic concentration than mortality rates; it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-Based Care in the United States, 2007 [Internet]. 2009 [cited 2009 Nov 7]. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP) [Internet]. 2007 [cited 2010 May 13]. Available at: http://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988-2002. JAMA. 2005;294(21):2712-2719.
  4. Heron M. Deaths: Leading Causes for 2006. NVSS [Internet]. 2010 Mar 31;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia [Internet]. [cited 2010 May 13]. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=108981596702326(1):7585.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  8. Medicare.gov - Hospital Compare [Internet]. [cited 2009 Nov 6]. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed June 2010.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk-Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30-Day Mortality Model for Pneumonia [Internet]. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act [Internet]. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Display Headline
The performance of US hospitals as reflected in risk‐standardized 30‐day mortality and readmission rates for medicare beneficiaries with pneumonia
Legacy Keywords
community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient
Article Source

Copyright © 2010 Society of Hospital Medicine

Correspondence Location
Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut St., Springfield, MA 01199