Bin Xie, PhD, Texas Health Resources, Dallas, Texas

Predicting 30-day pneumonia readmissions using electronic health record data


Pneumonia is a leading cause of hospitalizations in the U.S., accounting for more than 1.1 million discharges annually.1 Pneumonia is frequently complicated by hospital readmission, which is costly and potentially avoidable.2,3 Due to financial penalties imposed on hospitals for higher than expected 30-day readmission rates, there is increasing attention to implementing interventions to reduce readmissions in this population.4,5 However, because these programs are resource-intensive, interventions are thought to be most cost-effective if they are targeted to high-risk individuals who are most likely to benefit.6-8

Current pneumonia-specific readmission risk-prediction models that could enable identification of high-risk patients suffer from poor predictive ability, greatly limiting their use, and most were validated among older adults or by using data from single academic medical centers, limiting their generalizability.9-14 A potential reason for poor predictive accuracy is the omission of known robust clinical predictors of pneumonia-related outcomes, including pneumonia severity of illness and stability on discharge.15-17 Approaches using electronic health record (EHR) data, which include this clinically granular data, could enable hospitals to more accurately and pragmatically identify high-risk patients during the index hospitalization and enable interventions to be initiated prior to discharge.

An alternative strategy for identifying patients at high risk for readmission is to use a multi-condition risk-prediction model, because developing and implementing models for every condition may be time-consuming and costly. We have derived and validated 2 multi-condition risk-prediction models using EHR data—1 using data from the first day of hospital admission (‘first-day’ model), and 1 incorporating data from the entire hospitalization (‘full-stay’ model) to reflect in-hospital complications and clinical stability at discharge.18,19 However, it is unknown whether a multi-condition model would perform as well as a disease-specific model for pneumonia.

This study aimed to develop 2 EHR-based pneumonia-specific readmission risk-prediction models using data routinely collected in clinical practice—a ‘first-day’ and a ‘full-stay’ model—and to compare the performance of each model to: 1) one another; 2) the corresponding multi-condition EHR model; and 3) other potentially useful models for predicting pneumonia readmissions (the Centers for Medicare and Medicaid Services [CMS] pneumonia model, and 2 commonly used pneumonia severity of illness scores validated for predicting mortality). We hypothesized that the pneumonia-specific EHR models would outperform the other models and that the full-stay pneumonia-specific model would outperform the first-day pneumonia-specific model.

METHODS

Study Design, Population, and Data Sources

We conducted an observational study using EHR data collected from 6 hospitals (including safety net, community, teaching, and nonteaching hospitals) in north Texas between November 2009 and October 2010. All hospitals used the Epic EHR (Epic Systems Corporation, Verona, WI). Details of this cohort have been published previously.18,19

We included consecutive hospitalizations among adults 18 years and older discharged from any medicine service with principal discharge diagnoses of pneumonia (ICD-9-CM codes 480-483, 485, 486-487), sepsis (ICD-9-CM codes 038, 995.91, 995.92, 785.52), or respiratory failure (ICD-9-CM codes 518.81, 518.82, 518.84, 799.1) when the latter 2 were also accompanied by a secondary diagnosis of pneumonia.20 For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who died during the index hospitalization or within 30 days of discharge, were transferred to another acute care facility, or left against medical advice.
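
As an illustration of how these inclusion and exclusion criteria might be applied programmatically, the sketch below filters discharge-level records using the ICD-9-CM code groups listed above. It is illustrative Python with pandas, not the authors' actual extraction logic, and all column names (principal_dx, secondary_dx_list, admit_date, and so on) are hypothetical.

```python
import pandas as pd

# ICD-9-CM code prefixes from the inclusion criteria above.
PNEUMONIA_CODES = ("480", "481", "482", "483", "485", "486", "487")
SEPSIS_CODES = ("038", "995.91", "995.92", "785.52")
RESP_FAILURE_CODES = ("518.81", "518.82", "518.84", "799.1")

def matches_any(code, prefixes):
    """True if an ICD-9-CM code falls under any of the listed prefixes."""
    return any(code.startswith(p) for p in prefixes)

def is_eligible(row):
    """Apply the stated inclusion and exclusion rules to one hospitalization (hypothetical columns)."""
    principal = row["principal_dx"]
    secondaries = row["secondary_dx_list"]  # list of secondary ICD-9-CM codes

    pneumonia_principal = matches_any(principal, PNEUMONIA_CODES)
    sepsis_or_resp_failure = (matches_any(principal, SEPSIS_CODES)
                              or matches_any(principal, RESP_FAILURE_CODES))
    pneumonia_secondary = any(matches_any(c, PNEUMONIA_CODES) for c in secondaries)

    qualifying_dx = pneumonia_principal or (sepsis_or_resp_failure and pneumonia_secondary)

    return (qualifying_dx
            and row["age"] >= 18
            and not row["died_in_hospital"]
            and not row["died_within_30d"]
            and not row["transferred_to_acute_care"]
            and not row["left_ama"])

def build_cohort(hospitalizations: pd.DataFrame) -> pd.DataFrame:
    """Keep eligible stays and only the first hospitalization per patient."""
    eligible = hospitalizations[hospitalizations.apply(is_eligible, axis=1)]
    return eligible.sort_values("admit_date").drop_duplicates("patient_id", keep="first")
```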

Outcomes

The primary outcome was all-cause 30-day readmission, defined as a nonelective hospitalization within 30 days of discharge to any of 75 acute care hospitals within a 100-mile radius of Dallas, ascertained from an all-payer regional hospitalization database.

Predictor Variables for the Pneumonia-Specific Readmission Models

The selection of candidate predictors was informed by our validated multi-condition risk-prediction models using EHR data available within 24 hours of admission (‘first-day’ multi-condition EHR model) or during the entire hospitalization (‘full-stay’ multi-condition EHR model).18,19 For the pneumonia-specific models, we included all variables in our published multi-condition models as candidate predictors, including sociodemographics, prior utilization, Charlson Comorbidity Index, select laboratory and vital sign abnormalities, length of stay, hospital complications (eg, venous thromboembolism), vital sign instabilities, and disposition status (see Supplemental Table 1 for complete list of variables). We also assessed additional variables specific to pneumonia for inclusion that were: (1) available in the EHR of all participating hospitals; (2) routinely collected or available at the time of admission or discharge; and (3) plausible predictors of adverse outcomes based on literature and clinical expertise. These included select comorbidities (eg, psychiatric conditions, chronic lung disease, history of pneumonia),10,11,21,22 the pneumonia severity index (PSI),16,23,24 intensive care unit stay, and receipt of invasive or noninvasive ventilation. We used a modified PSI score because certain data elements were missing. The modified PSI (henceforth referred to as PSI) did not include nursing home residence and included diagnostic codes as proxies for the presence of pleural effusion (ICD-9-CM codes 510, 511.1, and 511.9) and altered mental status (ICD-9-CM codes 780.0X, 780.97, 293.0, 293.1, and 348.3X).
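
The two diagnosis-code proxies that distinguish the modified PSI from the original can be computed as simple flags. A minimal sketch, assuming each stay carries a list of ICD-9-CM diagnosis codes; the remaining PSI components and point assignments are not reproduced here.

```python
# Proxy code prefixes described above for the modified PSI.
PLEURAL_EFFUSION_CODES = ("510", "511.1", "511.9")
ALTERED_MENTAL_STATUS_CODES = ("780.0", "780.97", "293.0", "293.1", "348.3")

def has_any_code(dx_codes, prefixes):
    """True if any diagnosis code on the stay starts with one of the prefixes."""
    return any(code.startswith(p) for code in dx_codes for p in prefixes)

def modified_psi_proxies(dx_codes):
    """Return the two proxy components used in the modified PSI.

    Nursing home residence is omitted entirely, as in the modified score.
    """
    return {
        "pleural_effusion": has_any_code(dx_codes, PLEURAL_EFFUSION_CODES),
        "altered_mental_status": has_any_code(dx_codes, ALTERED_MENTAL_STATUS_CODES),
    }
```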

Statistical Analysis

Model Derivation. Candidate predictor variables were classified as available in the EHR within 24 hours of admission and/or at the time of discharge. For example, socioeconomic factors could be ascertained within the first day of hospitalization, whereas length of stay would not be available until the day of discharge. Predictors with missing values were assumed to be normal (less than 1% missing for each variable). Univariate relationships between readmission and each candidate predictor were assessed in the overall cohort using a pre-specified significance threshold of P ≤ 0.10. Significant variables were entered in the respective first-day and full-stay pneumonia-specific multivariable logistic regression models using stepwise-backward selection with a pre-specified significance threshold of P ≤ 0.05. In sensitivity analyses, we alternately derived our models using stepwise-forward selection, as well as stepwise-backward selection minimizing the Bayesian information criterion and Akaike information criterion separately. These alternate modeling strategies yielded identical predictors to our final models.
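
A minimal sketch of this two-stage selection procedure (univariate screen at P ≤ 0.10, then stepwise-backward elimination at P ≤ 0.05) using statsmodels. This is illustrative Python rather than the authors' Stata code; variable names are hypothetical and categorical predictors are assumed to be pre-coded as indicator columns.

```python
import statsmodels.api as sm

def univariate_screen(df, candidates, outcome="readmit_30d", alpha=0.10):
    """Keep candidate predictors with a univariate logistic regression P <= alpha."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] <= alpha:
            kept.append(var)
    return kept

def backward_select(df, predictors, outcome="readmit_30d", alpha=0.05):
    """Drop the least significant predictor until all remaining terms have P <= alpha."""
    current = list(predictors)
    while current:
        fit = sm.Logit(df[outcome], sm.add_constant(df[current])).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return fit, current
        current.remove(worst)
    return None, []

# Usage (hypothetical data frame `cohort` with a 0/1 readmit_30d column):
#   screened = univariate_screen(cohort, candidate_vars)
#   model, final_vars = backward_select(cohort, screened)
```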

Model Validation. Model validation was performed using 5-fold cross-validation, with the overall cohort randomly divided into 5 equal-size subsets.25 For each cycle, 4 subsets were used for training to estimate model coefficients, and the fifth subset was used for validation. This cycle was repeated 5 times with each randomly-divided subset used once as the validation set. We repeated this entire process 50 times and averaged the C statistic estimates to derive an optimism-corrected C statistic. Model calibration was assessed qualitatively by comparing predicted to observed probabilities of readmission by quintiles of predicted risk, and with the Hosmer-Lemeshow goodness-of-fit test.
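
A sketch of this repeated cross-validation, under the assumptions that the C statistic is the area under the ROC curve for out-of-fold predictions and that the model coefficients are refit in each training fold; written with scikit-learn for illustration, not the authors' Stata implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

def repeated_cv_c_statistic(X, y, n_splits=5, n_repeats=50, seed=0):
    """Average out-of-fold C statistic over repeated 5-fold cross-validation.

    X and y are assumed to be numpy arrays of the final model's predictors and outcome.
    """
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_repeats):
        folds = KFold(n_splits=n_splits, shuffle=True,
                      random_state=rng.randint(0, 2**31 - 1))
        for train_idx, test_idx in folds.split(X):
            # Large C approximates an unpenalized logistic fit.
            model = LogisticRegression(C=1e6, max_iter=5000)
            model.fit(X[train_idx], y[train_idx])
            pred = model.predict_proba(X[test_idx])[:, 1]
            aucs.append(roc_auc_score(y[test_idx], pred))
    return float(np.mean(aucs))
```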

Comparison to Other Models. The main comparisons of the first-day and full-stay pneumonia-specific EHR models were to each other and to the corresponding multi-condition EHR models.18,19 The multi-condition EHR models were separately derived and validated within the larger parent cohort from which this study cohort was derived, and outperformed the CMS all-cause model, the HOSPITAL model, and the LACE index.19 To further triangulate our findings, given the lack of other rigorously validated pneumonia-specific risk-prediction models for readmission,14 we compared the pneumonia-specific EHR models to the CMS pneumonia model derived from administrative claims data,10 and 2 commonly used risk-prediction scores for short-term mortality among patients with community-acquired pneumonia, the PSI and CURB-65 scores.16 Although derived and validated using patient-level data, the CMS model was developed to benchmark hospitals according to hospital-level readmission rates.10 The CURB-65 score in this study was also modified to use the same altered mental status diagnostic codes as the modified PSI as a proxy for “confusion.” Both the PSI and CURB-65 scores were calculated using the most abnormal values within the first 24 hours of admission. The ‘updated’ PSI and the ‘updated’ CURB-65 were calculated using the most abnormal values within 24 hours prior to discharge, or the last known observation prior to discharge if no results were recorded within this time period. A complete list of variables for each of the comparison models is shown in Supplemental Table 1.

We assessed model performance by calculating the C statistic, integrated discrimination index, and net reclassification index (NRI) compared to our pneumonia-specific models. The integrated discrimination index is the difference in the mean predicted probability of readmission between patients who were and were not actually readmitted between 2 models, where more positive values suggest improvement in model performance compared to a reference model.26 The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.27 Here, we calculated a category-based NRI to evaluate the performance of pneumonia-specific models in correctly classifying individuals with and without readmissions into the 2 highest readmission risk quintiles vs the lowest 3 risk quintiles compared to other models.27 This pre-specified cutoff is relevant for hospitals interested in identifying the highest risk individuals for targeted intervention.7 Finally, we assessed calibration of comparator models in our cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, Texas). This study was approved by the University of Texas Southwestern Medical Center Institutional Review Board.
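
The two reclassification measures can be written directly from these definitions. A minimal sketch, where p_new and p_old are predicted readmission probabilities from the model being evaluated and the comparator, y is the observed readmission indicator, and the high-risk category is the top 2 quintiles of predicted risk as pre-specified above.

```python
import numpy as np

def idi(p_new, p_old, y):
    """Integrated discrimination improvement of the new model over the old."""
    p_new, p_old = np.asarray(p_new, dtype=float), np.asarray(p_old, dtype=float)
    y = np.asarray(y, dtype=bool)
    disc_new = p_new[y].mean() - p_new[~y].mean()
    disc_old = p_old[y].mean() - p_old[~y].mean()
    return disc_new - disc_old

def top_two_quintiles(p):
    """Flag predictions in the top 2 quintiles (highest 40%) of predicted risk."""
    return p >= np.quantile(p, 0.6)

def category_nri(p_new, p_old, y):
    """Two-category NRI: high risk (top 2 quintiles) vs lower risk (bottom 3 quintiles)."""
    p_new, p_old = np.asarray(p_new, dtype=float), np.asarray(p_old, dtype=float)
    y = np.asarray(y, dtype=bool)
    new_hi, old_hi = top_two_quintiles(p_new), top_two_quintiles(p_old)
    up = new_hi & ~old_hi    # moved into the high-risk category by the new model
    down = ~new_hi & old_hi  # moved out of the high-risk category
    return (up[y].mean() - down[y].mean()) + (down[~y].mean() - up[~y].mean())
```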

RESULTS

Among the 1463 index hospitalizations (Supplemental Figure 1), the 30-day all-cause readmission rate was 13.6%. Individuals with a 30-day readmission had markedly different sociodemographic and clinical characteristics compared to those not readmitted (Table 1; see Supplemental Table 2 for additional clinical characteristics).

Table 1. Baseline Characteristics of Patients Hospitalized with Pneumonia

Derivation, Validation, and Performance of the Pneumonia-Specific Readmission Risk-Prediction Models

The final first-day pneumonia-specific EHR model included 7 variables, including sociodemographic characteristics, prior hospitalizations, thrombocytosis, and the PSI (Table 2). The first-day pneumonia-specific model had adequate discrimination (C statistic, 0.695; optimism-corrected C statistic, 0.675; 95% confidence interval [CI], 0.667-0.685; Table 3). It also effectively stratified individuals across a broad range of risk (average predicted risk ranged from 4% in the lowest decile to 33% in the highest decile; Table 3) and was well calibrated (Supplemental Table 3).

Table 2. Final Pneumonia-Specific EHR Risk-Prediction Models for Readmissions

The final full-stay pneumonia-specific EHR readmission model included 8 predictors, including 3 variables from the first-day model (median income, thrombocytosis, and prior hospitalizations; Table 2). The full-stay pneumonia-specific EHR model also included vital sign instabilities on discharge, the updated PSI, and disposition status (ie, discharge with home health or to a post-acute care facility was associated with greater odds of readmission, and discharge to hospice with lower odds). The full-stay pneumonia-specific EHR model had good discrimination (C statistic, 0.731; optimism-corrected C statistic, 0.714; 95% CI, 0.706-0.720), stratified individuals across a broad range of risk (average predicted risk ranged from 3% in the lowest decile to 37% in the highest decile; Table 3), and was well calibrated (Supplemental Table 3).

Table 3. Model Performance and Comparison of Pneumonia-Specific EHR Readmissions Models vs Other Models

First-Day Pneumonia-Specific EHR Model vs First-Day Multi-Condition EHR Model

The first-day pneumonia-specific EHR model outperformed the first-day multi-condition EHR model, with better discrimination (P = 0.029) and more correct classification of individuals into the top 2 vs the bottom 3 risk quintiles (Table 3, Supplemental Table 4, and Supplemental Figure 2A). With respect to calibration, the first-day multi-condition EHR model overestimated risk in the highest risk quintile compared to the first-day pneumonia-specific EHR model (Figure 1A, 1B).

Figure 1. Comparison of the calibration of different readmission models

Full-Stay Pneumonia-Specific EHR Model vs Other Models

The full-stay pneumonia-specific EHR model outperformed the corresponding full-stay multi-condition EHR model, as well as the first-day pneumonia-specific EHR model, the CMS pneumonia model, the updated PSI, and the updated CURB-65 (Table 3, Supplemental Table 5, Supplemental Table 6, and Supplemental Figures 2B and 2C). Compared to the full-stay multi-condition and first-day pneumonia-specific EHR models, the full-stay pneumonia-specific EHR model had better discrimination, better reclassification (NRI, 0.09 and 0.08, respectively), and stratified individuals across a broader range of readmission risk (Table 3). It also had better calibration in the highest risk quintile compared to the full-stay multi-condition EHR model (Figure 1C and 1D).

Updated vs First-Day Modified PSI and CURB-65 Scores

The updated PSI was more strongly predictive of readmission than the PSI calculated on the day of admission (Wald test, 9.83; P = 0.002). Each 10-point increase in the updated PSI was associated with a 22% increased odds of readmission vs an 11% increase for the PSI calculated upon admission (Table 2). The improved predictive ability of the updated PSI and CURB-65 scores was also reflected in the superior discrimination and calibration vs the respective first-day pneumonia severity of illness scores (Table 3).
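
As a worked illustration of how these per-10-point odds ratios compound (a sketch based only on the reported estimates, assuming the PSI enters the logistic model linearly as a continuous term), the odds ratio for an arbitrary change in score follows from the logistic coefficient:

```latex
% OR for a change of Delta points, given the per-10-point odds ratio OR_10:
\[
\mathrm{OR}(\Delta) \;=\; e^{\beta \Delta} \;=\; \bigl(\mathrm{OR}_{10}\bigr)^{\Delta/10},
\qquad \text{e.g., for } \Delta = 30:\quad
1.22^{3} \approx 1.82 \ \text{(updated PSI)} \quad \text{vs} \quad 1.11^{3} \approx 1.37 \ \text{(admission PSI)}.
\]
```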

DISCUSSION

Using routinely available EHR data from 6 diverse hospitals, we developed 2 pneumonia-specific readmission risk-prediction models that aimed to allow hospitals to identify patients hospitalized with pneumonia at high risk for readmission. Overall, we found that a pneumonia-specific model using EHR data from the entire hospitalization outperformed all other models—including the first-day pneumonia-specific model using data present only on admission, our own multi-condition EHR models, and the CMS pneumonia model based on administrative claims data—in all aspects of model performance (discrimination, calibration, and reclassification). We found that socioeconomic status, prior hospitalizations, thrombocytosis, and measures of clinical severity and stability were important predictors of 30-day all-cause readmissions among patients hospitalized with pneumonia. Additionally, an updated discharge PSI score was a stronger independent predictor of readmissions compared to the PSI score calculated upon admission; and inclusion of the updated PSI in our full-stay pneumonia model led to improved prediction of 30-day readmissions.

The marked improvement in performance of the full-stay pneumonia-specific EHR model compared to the first-day pneumonia-specific model suggests that clinical stability and trajectory during hospitalization (as modeled through disposition status, updated PSI, and vital sign instabilities at discharge) are important predictors of 30-day readmission among patients hospitalized for pneumonia, which was not the case for our EHR-based multi-condition models.19 With the inclusion of these measures, the full-stay pneumonia-specific model correctly reclassified an additional 8% of patients according to their true risk compared to the first-day pneumonia-specific model. One implication of these findings is that hospitals interested in targeting their highest risk individuals with pneumonia for transitional care interventions could do so using the first-day pneumonia-specific EHR model and could then refine their targeting strategy at the time of discharge by using the full-stay pneumonia model. This staged risk-prediction strategy would enable hospitals to initiate transitional care interventions for high-risk individuals in the inpatient setting (eg, patient education).7 Then, hospitals could enroll both persistent and newly identified high-risk individuals in outpatient interventions (eg, a follow-up telephone call) in the immediate post-discharge period, an interval characterized by heightened vulnerability to adverse events,28 based on patients’ illness severity and stability at discharge. This approach could be implemented by building these risk-prediction models directly into the EHR, or by extracting EHR data in near real time, as our group has done successfully for heart failure.7
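
A sketch of this staged targeting strategy, assuming two already-fitted risk models with a scikit-learn-style predict_proba interface and feature matrices assembled at admission and at discharge; all names are hypothetical and the cutoff (top 2 quintiles) mirrors the threshold used elsewhere in this study.

```python
import numpy as np

def top_two_quintiles(risks):
    """Flag patients whose predicted risk falls in the top 2 quintiles."""
    return risks >= np.quantile(risks, 0.6)

def staged_targeting(first_day_model, full_stay_model, X_admission, X_discharge):
    """Two-stage targeting: flag on day 1, then re-flag at discharge.

    Stage 1 feeds inpatient transitional care (eg, patient education);
    stage 2 feeds post-discharge interventions (eg, follow-up telephone calls)
    for both persistent and newly identified high-risk patients.
    """
    day1_risk = first_day_model.predict_proba(X_admission)[:, 1]
    discharge_risk = full_stay_model.predict_proba(X_discharge)[:, 1]
    return top_two_quintiles(day1_risk), top_two_quintiles(discharge_risk)
```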

Another key implication of our study is that, for pneumonia, a disease-specific modeling approach has better predictive ability than using a multi-condition model. Compared to multi-condition models, the first-day and full-stay pneumonia-specific EHR models correctly reclassified an additional 6% and 9% of patients, respectively. Thus, hospitals interested in identifying the highest risk patients with pneumonia for targeted interventions should do so using the disease-specific models, if the costs and resources of doing so are within reach of the healthcare system.

An additional novel finding of our study is the added value of an updated PSI for predicting adverse events. Studies of pneumonia severity of illness scores have calculated the PSI and CURB-65 scores using data present only on admission.16,24 While our study also confirms that the PSI calculated upon admission is a significant predictor of readmission,23,29 this study extends this work by showing that an updated PSI score calculated at the time of discharge is an even stronger predictor for readmission, and its inclusion in the model significantly improves risk stratification and prognostication.

Our study was noteworthy for several strengths. First, we used data from a common EHR system, thus potentially allowing for the implementation of the pneumonia-specific models in real time across a number of hospitals. The use of routinely collected data for risk-prediction modeling makes this approach scalable and sustainable, because it obviates the need for burdensome data collection and entry. Second, to our knowledge, this is the first study to measure the additive influence of illness severity and stability at discharge on the readmission risk among patients hospitalized with pneumonia. Third, our study population was derived from 6 hospitals diverse in payer status, age, race/ethnicity, and socioeconomic status. Fourth, our models are less likely to be overfit to the idiosyncrasies of our data given that several predictors included in our final pneumonia-specific models have been associated with readmission in this population, including marital status,13,30 income,11,31 prior hospitalizations,11,13 thrombocytosis,32-34 and vital sign instabilities on discharge.17 Lastly, the discrimination of the CMS pneumonia model in our cohort (C statistic, 0.64) closely matched the discrimination observed in 4 independent cohorts (C statistic, 0.63), suggesting adequate generalizability of our study setting and population.10,12

Our results should be interpreted in the context of several limitations. First, generalizability to regions beyond north Texas is unknown. Second, although we included a diverse cohort of safety net, community, teaching, and nonteaching hospitals, the pneumonia-specific models were not externally validated in a separate cohort, which may lead to overly optimistic estimates of model performance. Third, the PSI and CURB-65 scores were modified to use diagnostic codes for altered mental status and pleural effusion and omitted nursing home residence; thus, the independent associations for the PSI and CURB-65 scores and their predictive ability are likely attenuated. Fourth, we were unable to include data on medications (antibiotic and steroid use) and outpatient visits, which may influence readmission risk.2,9,13,35-40 Fifth, we included only the first pneumonia hospitalization per patient in this study. Had we included multiple hospitalizations per patient, we would expect even better performance for the 2 pneumonia-specific EHR models, since prior hospitalization was a robust predictor of readmission.

In conclusion, the full-stay pneumonia-specific EHR readmission risk-prediction model outperformed the first-day pneumonia-specific model, the multi-condition EHR models, and the CMS pneumonia model. This suggests that measures of clinical severity and stability at the time of discharge are important for identifying patients at highest risk for readmission, and that EHR data routinely collected in clinical practice can be used to accurately predict readmission risk among patients hospitalized for pneumonia.

Acknowledgments

The authors would like to acknowledge Ruben Amarasingham, MD, MBA, president and chief executive officer of the Parkland Center for Clinical Innovation, and Ferdinand Velasco, MD, chief health information officer at Texas Health Resources, for their assistance in assembling the 6-hospital cohort used in this study.

Disclosures

This work was supported by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient-Centered Outcomes Research (R24 HS022418-01); the Commonwealth Foundation (#20100323); the UT Southwestern KL2 Scholars Program, supported by the National Institutes of Health (KL2 TR001103 to ANM and OKN); and the National Center for Advancing Translational Sciences at the National Institutes of Health (U54 RFA-TR-12-006 to EAH). The study sponsors had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript. The authors have no financial conflicts of interest to disclose.

References

1. Centers for Disease Control and Prevention. Pneumonia. http://www.cdc.gov/nchs/fastats/pneumonia.htm. Accessed January 26, 2016.
2. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;364(16):1582.
3. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
4. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital-initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433-440.
5. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528.
6. Rennke S, Shoeb MH, Nguyen OK, Magan Y, Wachter RM, Ranji SR. Interventions to Improve Care Transitions at Hospital Discharge. Rockville, MD: Agency for Healthcare Research and Quality, US Department of Health and Human Services; March 2013.
7. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998-1005.
8. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148-1154.
9. Hebert C, Shivade C, Foraker R, et al. Diagnosis-specific readmission risk prediction using electronic health data: a retrospective cohort study. BMC Med Inform Decis Mak. 2014;14:65.
10. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
11. Mather JF, Fortunato GJ, Ash JL, Davis MJ, Kumar A. Prediction of pneumonia 30-day readmissions: a single-center attempt to increase model performance. Respir Care. 2014;59(2):199-208.
12. O’Brien WJ, Chen Q, Mull HJ, et al. What is the value of adding Medicare data in estimating VA hospital readmission rates? Health Serv Res. 2015;50(1):40-57.
13. Tang VL, Halm EA, Fine MJ, Johnson CS, Anzueto A, Mortensen EM. Predictors of rehospitalization after admission for pneumonia in the veterans affairs healthcare system. J Hosp Med. 2014;9(6):379-383.
14. Weinreich M, Nguyen OK, Wang D, et al. Predicting the risk of readmission in pneumonia: a systematic review of model performance. Ann Am Thorac Soc. 2016;13(9):1607-1614.
15. Kwok CS, Loke YK, Woo K, Myint PK. Risk prediction models for mortality in community-acquired pneumonia: a systematic review. Biomed Res Int. 2013;2013:504136.
16. Loke YK, Kwok CS, Niruban A, Myint PK. Value of severity scales in predicting mortality from community-acquired pneumonia: systematic review and meta-analysis. Thorax. 2010;65(10):884-890.
17. Halm EA, Fine MJ, Kapoor WN, Singer DE, Marrie TJ, Siu AL. Instability on hospital discharge and the risk of adverse outcomes in patients with pneumonia. Arch Intern Med. 2002;162(11):1278-1284.
18. Amarasingham R, Velasco F, Xie B, et al. Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15:39.
19. Nguyen OK, Makam AN, Clark C, et al. Predicting all-cause readmissions using electronic health record data from the entire hospitalization: model development and comparison. J Hosp Med. 2016;11(7):473-480.
20. Lindenauer PK, Lagu T, Shieh MS, Pekow PS, Rothberg MB. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003-2009. JAMA. 2012;307(13):1405-1413.
21. Ahmedani BK, Solberg LI, Copeland LA, et al. Psychiatric comorbidity and 30-day readmissions after hospitalization for heart failure, AMI, and pneumonia. Psychiatr Serv. 2015;66(2):134-140.
22. Jasti H, Mortensen EM, Obrosky DS, Kapoor WN, Fine MJ. Causes and risk factors for rehospitalization of patients hospitalized with community-acquired pneumonia. Clin Infect Dis. 2008;46(4):550-556.
23. Capelastegui A, España Yandiola PP, Quintana JM, et al. Predictors of short-term rehospitalization following discharge of patients hospitalized with community-acquired pneumonia. Chest. 2009;136(4):1079-1085.
24. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336(4):243-250.
25. Vittinghoff E, Glidden D, Shiboski S, McCulloch C. Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models (Statistics for Biology and Health). New York City, NY: Springer; 2012.
26. Pencina MJ, D’Agostino RB Sr, D’Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157-172; discussion 207-112.
27. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician’s guide. Ann Intern Med. 2014;160(2):122-131.
28. Krumholz HM. Post-hospital syndrome--an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102.
29. Micek ST, Lang A, Fuller BM, Hampton NB, Kollef MH. Clinical implications for patients treated inappropriately for community-acquired pneumonia in the emergency department. BMC Infect Dis. 2014;14:61.
30. Metersky ML, Fine MJ, Mortensen EM. The effect of marital status on the presentation and outcomes of elderly male veterans hospitalized for pneumonia. Chest. 2012;142(4):982-987.
31. Calvillo-King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269-282.
32. Mirsaeidi M, Peyrani P, Aliberti S, et al. Thrombocytopenia and thrombocytosis at time of hospitalization predict mortality in patients with community-acquired pneumonia. Chest. 2010;137(2):416-420.
33. Prina E, Ferrer M, Ranzani OT, et al. Thrombocytosis is a marker of poor outcome in community-acquired pneumonia. Chest. 2013;143(3):767-775.
34. Violi F, Cangemi R, Calvieri C. Pneumonia, thrombosis and vascular disease. J Thromb Haemost. 2014;12(9):1391-1400.
35. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. N Engl J Med. 1996;334(22):1441-1447.
36. Field TS, Ogarek J, Garber L, Reed G, Gurwitz JH. Association of early post-discharge follow-up by a primary care physician and 30-day rehospitalization among older adults. J Gen Intern Med. 2015;30(5):565-571.
37. Spatz ES, Sheth SD, Gosch KL, et al. Usual source of care and outcomes following acute myocardial infarction. J Gen Intern Med. 2014;29(6):862-869.
38. Brooke BS, Stone DH, Cronenwett JL, et al. Early primary care provider follow-up and readmission after high-risk surgery. JAMA Surg. 2014;149(8):821-828.
39. Adamuz J, Viasus D, Campreciós-Rodriguez P, et al. A prospective cohort study of healthcare visits and rehospitalizations after discharge of patients with community-acquired pneumonia. Respirology. 2011;16(7):1119-1126.
40. Shorr AF, Zilberberg MD, Reichley R, et al. Readmission following hospitalization for pneumonia: the impact of pneumonia type and its implication for hospitals. Clin Infect Dis. 2013;57(3):362-367.

Journal of Hospital Medicine. 12(4):209-216.

Pneumonia is a leading cause of hospitalizations in the U.S., accounting for more than 1.1 million discharges annually.1 Pneumonia is frequently complicated by hospital readmission, which is costly and potentially avoidable.2,3 Due to financial penalties imposed on hospitals for higher than expected 30-day readmission rates, there is increasing attention to implementing interventions to reduce readmissions in this population.4,5 However, because these programs are resource-intensive, interventions are thought to be most cost-effective if they are targeted to high-risk individuals who are most likely to benefit.6-8

Current pneumonia-specific readmission risk-prediction models that could enable identification of high-risk patients suffer from poor predictive ability, greatly limiting their use, and most were validated among older adults or by using data from single academic medical centers, limiting their generalizability.9-14 A potential reason for poor predictive accuracy is the omission of known robust clinical predictors of pneumonia-related outcomes, including pneumonia severity of illness and stability on discharge.15-17 Approaches using electronic health record (EHR) data, which include this clinically granular data, could enable hospitals to more accurately and pragmatically identify high-risk patients during the index hospitalization and enable interventions to be initiated prior to discharge.

An alternative strategy to identifying high-risk patients for readmission is to use a multi-condition risk-prediction model. Developing and implementing models for every condition may be time-consuming and costly. We have derived and validated 2 multi-condition risk-prediction models using EHR data—1 using data from the first day of hospital admission (‘first-day’ model), and the second incorporating data from the entire hospitalization (‘full-stay’ model) to reflect in-hospital complications and clinical stability at discharge.18,19 However, it is unknown if a multi-condition model for pneumonia would perform as well as a disease-specific model.

This study aimed to develop 2 EHR-based pneumonia-specific readmission risk-prediction models using data routinely collected in clinical practice—a ‘first-day’ and a ‘full-stay’ model—and compare the performance of each model to: 1) one another; 2) the corresponding multi-condition EHR model; and 3) to other potentially useful models in predicting pneumonia readmissions (the Centers for Medicare and Medicaid Services [CMS] pneumonia model, and 2 commonly used pneumonia severity of illness scores validated for predicting mortality). We hypothesized that the pneumonia-specific EHR models would outperform other models; and the full-stay pneumonia-specific model would outperform the first-day pneumonia-specific model.

METHODS

Study Design, Population, and Data Sources

 

 

We conducted an observational study using EHR data collected from 6 hospitals (including safety net, community, teaching, and nonteaching hospitals) in north Texas between November 2009 and October 2010, All hospitals used the Epic EHR (Epic Systems Corporation, Verona, WI). Details of this cohort have been published.18,19

We included consecutive hospitalizations among adults 18 years and older discharged from any medicine service with principal discharge diagnoses of pneumonia (ICD-9-CM codes 480-483, 485, 486-487), sepsis (ICD-9-CM codes 038, 995.91, 995.92, 785.52), or respiratory failure (ICD-9-CM codes 518.81, 518.82, 518.84, 799.1) when the latter 2 were also accompanied by a secondary diagnosis of pneumonia.20 For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who died during the index hospitalization or within 30 days of discharge, were transferred to another acute care facility, or left against medical advice.

Outcomes

The primary outcome was all-cause 30-day readmission, defined as a nonelective hospitalization within 30 days of discharge to any of 75 acute care hospitals within a 100-mile radius of Dallas, ascertained from an all-payer regional hospitalization database.

Predictor Variables for the Pneumonia-Specific Readmission Models

The selection of candidate predictors was informed by our validated multi-condition risk-prediction models using EHR data available within 24 hours of admission (‘first-day’ multi-condition EHR model) or during the entire hospitalization (‘full-stay’ multi-condition EHR model).18,19 For the pneumonia-specific models, we included all variables in our published multi-condition models as candidate predictors, including sociodemographics, prior utilization, Charlson Comorbidity Index, select laboratory and vital sign abnormalities, length of stay, hospital complications (eg, venous thromboembolism), vital sign instabilities, and disposition status (see Supplemental Table 1 for complete list of variables). We also assessed additional variables specific to pneumonia for inclusion that were: (1) available in the EHR of all participating hospitals; (2) routinely collected or available at the time of admission or discharge; and (3) plausible predictors of adverse outcomes based on literature and clinical expertise. These included select comorbidities (eg, psychiatric conditions, chronic lung disease, history of pneumonia),10,11,21,22 the pneumonia severity index (PSI),16,23,24 intensive care unit stay, and receipt of invasive or noninvasive ventilation. We used a modified PSI score because certain data elements were missing. The modified PSI (henceforth referred to as PSI) did not include nursing home residence and included diagnostic codes as proxies for the presence of pleural effusion (ICD-9-CM codes 510, 511.1, and 511.9) and altered mental status (ICD-9-CM codes 780.0X, 780.97, 293.0, 293.1, and 348.3X).

Statistical Analysis

Model Derivation. Candidate predictor variables were classified as available in the EHR within 24 hours of admission and/or at the time of discharge. For example, socioeconomic factors could be ascertained within the first day of hospitalization, whereas length of stay would not be available until the day of discharge. Predictors with missing values were assumed to be normal (less than 1% missing for each variable). Univariate relationships between readmission and each candidate predictor were assessed in the overall cohort using a pre-specified significance threshold of P ≤ 0.10. Significant variables were entered in the respective first-day and full-stay pneumonia-specific multivariable logistic regression models using stepwise-backward selection with a pre-specified significance threshold of P ≤ 0.05. In sensitivity analyses, we alternately derived our models using stepwise-forward selection, as well as stepwise-backward selection minimizing the Bayesian information criterion and Akaike information criterion separately. These alternate modeling strategies yielded identical predictors to our final models.

Model Validation. Model validation was performed using 5-fold cross-validation, with the overall cohort randomly divided into 5 equal-size subsets.25 For each cycle, 4 subsets were used for training to estimate model coefficients, and the fifth subset was used for validation. This cycle was repeated 5 times with each randomly-divided subset used once as the validation set. We repeated this entire process 50 times and averaged the C statistic estimates to derive an optimism-corrected C statistic. Model calibration was assessed qualitatively by comparing predicted to observed probabilities of readmission by quintiles of predicted risk, and with the Hosmer-Lemeshow goodness-of-fit test.

Comparison to Other Models. The main comparisons of the first-day and full-stay pneumonia-specific EHR model performance were to each other and the corresponding multi-condition EHR model.18,19 The multi-condition EHR models were separately derived and validated within the larger parent cohort from which this study cohort was derived, and outperformed the CMS all-cause model, the HOSPITAL model, and the LACE index.19 To further triangulate our findings, given the lack of other rigorously validated pneumonia-specific risk-prediction models for readmission,14 we compared the pneumonia-specific EHR models to the CMS pneumonia model derived from administrative claims data,10 and 2 commonly used risk-prediction scores for short-term mortality among patients with community-acquired pneumonia, the PSI and CURB-65 scores.16 Although derived and validated using patient-level data, the CMS model was developed to benchmark hospitals according to hospital-level readmission rates.10 The CURB-65 score in this study was also modified to include the same altered mental status diagnostic codes according to the modified PSI as a proxy for “confusion.” Both the PSI and CURB-65 scores were calculated using the most abnormal values within the first 24 hours of admission. The ‘updated’ PSI and the ‘updated’ CURB-65 were calculated using the most abnormal values within 24 hours prior to discharge, or the last known observation prior to discharge if no results were recorded within this time period. A complete list of variables for each of the comparison models are shown in Supplemental Table 1.

We assessed model performance by calculating the C statistic, integrated discrimination index, and net reclassification index (NRI) compared to our pneumonia-specific models. The integrated discrimination index is the difference in the mean predicted probability of readmission between patients who were and were not actually readmitted between 2 models, where more positive values suggest improvement in model performance compared to a reference model.26 The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.27 Here, we calculated a category-based NRI to evaluate the performance of pneumonia-specific models in correctly classifying individuals with and without readmissions into the 2 highest readmission risk quintiles vs the lowest 3 risk quintiles compared to other models.27 This pre-specified cutoff is relevant for hospitals interested in identifying the highest risk individuals for targeted intervention.7 Finally, we assessed calibration of comparator models in our cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, Texas). This study was approved by the University of Texas Southwestern Medical Center Institutional Review Board.

 

 

RESULTS

Of 1463 index hospitalizations (Supplemental Figure 1), the 30-day all-cause readmission rate was 13.6%. Individuals with a 30-day readmission had markedly different sociodemographic and clinical characteristics compared to those not readmitted (Table 1; see Supplemental Table 2 for additional clinical characteristics).

Baseline Characteristics of Patients Hospitalized with Pneumonia
Table 1

Derivation, Validation, and Performance of the Pneumonia-Specific Readmission Risk-Prediction Models

The final first-day pneumonia-specific EHR model included 7 variables, including sociodemographic characteristics; prior hospitalizations; thrombocytosis, and PSI (Table 2). The first-day pneumonia-specific model had adequate discrimination (C statistic, 0.695; optimism-corrected C statistic 0.675, 95% confidence interval [CI], 0.667-0.685; Table 3). It also effectively stratified individuals across a broad range of risk (average predicted decile of risk ranged from 4% to 33%; Table 3) and was well calibrated (Supplemental Table 3).

Final Pneumonia-Specific EHR Risk-Prediction Models for Readmissions
Table 2

The final full-stay pneumonia-specific EHR readmission model included 8 predictors, including 3 variables from the first-day model (median income, thrombocytosis, and prior hospitalizations; Table 2). The full-stay pneumonia-specific EHR model also included vital sign instabilities on discharge, updated PSI, and disposition status (ie, being discharged with home health or to a post-acute care facility was associated with greater odds of readmission, and hospice with lower odds). The full-stay pneumonia-specific EHR model had good discrimination (C statistic, 0.731; optimism-corrected C statistic, 0.714; 95% CI, 0.706-0.720), and stratified individuals across a broad range of risk (average predicted decile of risk ranged from 3% to 37%; Table 3), and was also well calibrated (Supplemental Table 3).

Model Performance and Comparison of Pneumonia-Specific EHR Readmissions Models vs Other Models
Table 3

First-Day Pneumonia-Specific EHR Model vs First-Day Multi-Condition EHR Model

The first-day pneumonia-specific EHR model outperformed the first-day multi-condition EHR model with better discrimination (P = 0.029) and more correctly classified individuals in the top 2 highest risk quintiles vs the bottom 3 risk quintiles (Table 3, Supplemental Table 4, and Supplemental Figure 2A). With respect to calibration, the first-day multi-condition EHR model overestimated risk among the highest quintile risk group compared to the first-day pneumonia-specific EHR model (Figure 1A, 1B).

Comparison of the calibration of different readmission models
Figure 1

Full-Stay Pneumonia-Specific EHR Model vs Other Models

The full-stay pneumonia-specific EHR model comparatively outperformed the corresponding full-stay multi-condition EHR model, as well as the first-day pneumonia-specific EHR model, the CMS pneumonia model, the updated PSI, and the updated CURB-65 (Table 3, Supplemental Table 5, Supplemental Table 6, and Supplemental Figures 2B and 2C). Compared to the full-stay multi-condition and first-day pneumonia-specific EHR models, the full-stay pneumonia-specific EHR model had better discrimination, better reclassification (NRI, 0.09 and 0.08, respectively), and was able to stratify individuals across a broader range of readmission risk (Table 3). It also had better calibration in the highest quintile risk group compared to the full-stay multi-condition EHR model (Figure 1C and 1D).

Updated vs First-Day Modified PSI and CURB-65 Scores

The updated PSI was more strongly predictive of readmission than the PSI calculated on the day of admission (Wald test, 9.83; P = 0.002). Each 10-point increase in the updated PSI was associated with a 22% increased odds of readmission vs an 11% increase for the PSI calculated upon admission (Table 2). The improved predictive ability of the updated PSI and CURB-65 scores was also reflected in the superior discrimination and calibration vs the respective first-day pneumonia severity of illness scores (Table 3).

DISCUSSION

Using routinely available EHR data from 6 diverse hospitals, we developed 2 pneumonia-specific readmission risk-prediction models that aimed to allow hospitals to identify patients hospitalized with pneumonia at high risk for readmission. Overall, we found that a pneumonia-specific model using EHR data from the entire hospitalization outperformed all other models—including the first-day pneumonia-specific model using data present only on admission, our own multi-condition EHR models, and the CMS pneumonia model based on administrative claims data—in all aspects of model performance (discrimination, calibration, and reclassification). We found that socioeconomic status, prior hospitalizations, thrombocytosis, and measures of clinical severity and stability were important predictors of 30-day all-cause readmissions among patients hospitalized with pneumonia. Additionally, an updated discharge PSI score was a stronger independent predictor of readmissions compared to the PSI score calculated upon admission; and inclusion of the updated PSI in our full-stay pneumonia model led to improved prediction of 30-day readmissions.

The marked improvement in performance of the full-stay pneumonia-specific EHR model compared to the first-day pneumonia-specific model suggests that clinical stability and trajectory during hospitalization (as modeled through disposition status, updated PSI, and vital sign instabilities at discharge) are important predictors of 30-day readmission among patients hospitalized for pneumonia, which was not the case for our EHR-based multi-condition models.19 With the inclusion of these measures, the full-stay pneumonia-specific model correctly reclassified an additional 8% of patients according to their true risk compared to the first-day pneumonia-specific model. One implication of these findings is that hospitals interested in targeting their highest risk individuals with pneumonia for transitional care interventions could do so using the first-day pneumonia-specific EHR model and could refine their targeted strategy at the time of discharge by using the full-stay pneumonia model. This staged risk-prediction strategy would enable hospitals to initiate transitional care interventions for high-risk individuals in the inpatient setting (ie, patient education).7 Then, hospitals could enroll both persistent and newly identified high-risk individuals for outpatient interventions (ie, follow-up telephone call) in the immediate post-discharge period, an interval characterized by heightened vulnerability for adverse events,28 based on patients’ illness severity and stability at discharge. This approach can be implemented by hospitals by building these risk-prediction models directly into the EHR, or by extracting EHR data in near real time as our group has done successfully for heart failure.7

Another key implication of our study is that, for pneumonia, a disease-specific modeling approach has better predictive ability than using a multi-condition model. Compared to multi-condition models, the first-day and full-stay pneumonia-specific EHR models correctly reclassified an additional 6% and 9% of patients, respectively. Thus, hospitals interested in identifying the highest risk patients with pneumonia for targeted interventions should do so using the disease-specific models, if the costs and resources of doing so are within reach of the healthcare system.

An additional novel finding of our study is the added value of an updated PSI for predicting adverse events. Studies of pneumonia severity of illness scores have calculated the PSI and CURB-65 scores using data present only on admission.16,24 While our study also confirms that the PSI calculated upon admission is a significant predictor of readmission,23,29 this study extends this work by showing that an updated PSI score calculated at the time of discharge is an even stronger predictor for readmission, and its inclusion in the model significantly improves risk stratification and prognostication.

Our study was noteworthy for several strengths. First, we used data from a common EHR system, thus potentially allowing for the implementation of the pneumonia-specific models in real time across a number of hospitals. The use of routinely collected data for risk-prediction modeling makes this approach scalable and sustainable, because it obviates the need for burdensome data collection and entry. Second, to our knowledge, this is the first study to measure the additive influence of illness severity and stability at discharge on the readmission risk among patients hospitalized with pneumonia. Third, our study population was derived from 6 hospitals diverse in payer status, age, race/ethnicity, and socioeconomic status. Fourth, our models are less likely to be overfit to the idiosyncrasies of our data given that several predictors included in our final pneumonia-specific models have been associated with readmission in this population, including marital status,13,30 income,11,31 prior hospitalizations,11,13 thrombocytosis,32-34 and vital sign instabilities on discharge.17 Lastly, the discrimination of the CMS pneumonia model in our cohort (C statistic, 0.64) closely matched the discrimination observed in 4 independent cohorts (C statistic, 0.63), suggesting adequate generalizability of our study setting and population.10,12

Our results should be interpreted in the context of several limitations. First, generalizability to other regions beyond north Texas is unknown. Second, although we included a diverse cohort of safety net, community, teaching, and nonteaching hospitals, the pneumonia-specific models were not externally validated in a separate cohort, which may lead to more optimistic estimates of model performance. Third, PSI and CURB-65 scores were modified to use diagnostic codes for altered mental status and pleural effusion, and omitted nursing home residence. Thus, the independent associations for the PSI and CURB-65 scores and their predictive ability are likely attenuated. Fourth, we were unable to include data on medications (antibiotics and steroid use) and outpatient visits, which may influence readmission risk.2,9,13,35-40 Fifth, we included only the first pneumonia hospitalization per patient in this study. Had we included multiple hospitalizations per patient, we anticipate better model performance for the 2 pneumonia-specific EHR models since prior hospitalization was a robust predictor of readmission.

In conclusion, the full-stay pneumonia-specific EHR readmission risk-prediction model outperformed the first-day pneumonia-specific model, multi-condition EHR models, and the CMS pneumonia model. This suggests that: measures of clinical severity and stability at the time of discharge are important predictors for identifying patients at highest risk for readmission; and that EHR data routinely collected for clinical practice can be used to accurately predict risk of readmission among patients hospitalized for pneumonia.

 

 

Acknowledgments

The authors would like to acknowledge Ruben Amarasingham, MD, MBA, president and chief executive officer of Parkland Center for Clinical Innovation, and Ferdinand Velasco, MD, chief health information officer at Texas Health Resources for their assistance in assembling the 6-hospital cohort used in this study.

Disclosures


Outcomes

The primary outcome was all-cause 30-day readmission, defined as a nonelective hospitalization at any of 75 acute care hospitals within a 100-mile radius of Dallas occurring within 30 days of discharge, ascertained from an all-payer regional hospitalization database.
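
As an illustration only (not the study's code), the sketch below shows how such a readmission flag might be derived by linking index discharges to an all-payer regional hospitalization file; the table and column names (index_stays, regional_stays, patient_id, admit_date, index_discharge_date, admission_type, index_stay_id) are hypothetical.

```python
# Hypothetical sketch: flag a nonelective admission at any regional hospital
# occurring 1-30 days after an index discharge. Assumes pandas DataFrames with
# datetime columns; all names are illustrative, not the study's data model.
import pandas as pd

def flag_30day_readmission(index_stays: pd.DataFrame, regional_stays: pd.DataFrame) -> pd.Series:
    """Return one boolean per index stay: any nonelective admission within 30 days of discharge."""
    nonelective = regional_stays[regional_stays["admission_type"] != "elective"]
    merged = index_stays.merge(nonelective, on="patient_id", how="left")
    days_out = (merged["admit_date"] - merged["index_discharge_date"]).dt.days
    merged["readmit_30d"] = days_out.between(1, 30)   # strictly after discharge, within 30 days
    return merged.groupby("index_stay_id")["readmit_30d"].any()
```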

Predictor Variables for the Pneumonia-Specific Readmission Models

The selection of candidate predictors was informed by our validated multi-condition risk-prediction models using EHR data available within 24 hours of admission (‘first-day’ multi-condition EHR model) or during the entire hospitalization (‘full-stay’ multi-condition EHR model).18,19 For the pneumonia-specific models, we included all variables in our published multi-condition models as candidate predictors, including sociodemographics, prior utilization, Charlson Comorbidity Index, select laboratory and vital sign abnormalities, length of stay, hospital complications (eg, venous thromboembolism), vital sign instabilities, and disposition status (see Supplemental Table 1 for a complete list of variables). We also assessed additional pneumonia-specific variables for inclusion that were: (1) available in the EHR of all participating hospitals; (2) routinely collected or available at the time of admission or discharge; and (3) plausible predictors of adverse outcomes based on the literature and clinical expertise. These included select comorbidities (eg, psychiatric conditions, chronic lung disease, history of pneumonia),10,11,21,22 the pneumonia severity index (PSI),16,23,24 intensive care unit stay, and receipt of invasive or noninvasive ventilation. We used a modified PSI score because certain data elements were missing. The modified PSI (henceforth referred to as the PSI) did not include nursing home residence and used diagnostic codes as proxies for the presence of pleural effusion (ICD-9-CM codes 510, 511.1, and 511.9) and altered mental status (ICD-9-CM codes 780.0X, 780.97, 293.0, 293.1, and 348.3X).
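
For illustration, a minimal sketch of the diagnosis-code proxy logic described above follows; the point values reflect the standard PSI weighting (10 points for pleural effusion, 20 points for altered mental status), and the example encounter codes are hypothetical.

```python
# Hypothetical sketch of the ICD-9-CM proxies used in the modified PSI:
# pleural effusion and altered mental status are inferred from code prefixes
# rather than chart review.
PLEURAL_EFFUSION_PREFIXES = ("510", "511.1", "511.9")
ALTERED_MENTAL_STATUS_PREFIXES = ("780.0", "780.97", "293.0", "293.1", "348.3")

def has_code(diagnosis_codes, prefixes):
    """True if any ICD-9-CM code for the encounter starts with one of the prefixes."""
    return any(code.startswith(prefixes) for code in diagnosis_codes)

# Example: a hypothetical encounter's diagnosis list.
codes = ["486", "511.9", "276.51"]
psi_points = 0
if has_code(codes, PLEURAL_EFFUSION_PREFIXES):
    psi_points += 10   # standard PSI weight for pleural effusion
if has_code(codes, ALTERED_MENTAL_STATUS_PREFIXES):
    psi_points += 20   # standard PSI weight for altered mental status
print(psi_points)      # -> 10 for this example
```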

Statistical Analysis

Model Derivation. Candidate predictor variables were classified as available in the EHR within 24 hours of admission and/or at the time of discharge. For example, socioeconomic factors could be ascertained within the first day of hospitalization, whereas length of stay would not be available until the day of discharge. Predictors with missing values were assumed to be normal (less than 1% missing for each variable). Univariate relationships between readmission and each candidate predictor were assessed in the overall cohort using a pre-specified significance threshold of P ≤ 0.10. Significant variables were entered into the respective first-day and full-stay pneumonia-specific multivariable logistic regression models using stepwise-backward selection with a pre-specified significance threshold of P ≤ 0.05. In sensitivity analyses, we alternately derived our models using stepwise-forward selection, as well as stepwise-backward selection minimizing the Bayesian information criterion and the Akaike information criterion separately. These alternate modeling strategies yielded predictors identical to those in our final models.
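
A minimal sketch of this type of backward stepwise selection is shown below; it assumes a pandas DataFrame X of candidate predictors that passed univariate screening and a binary outcome y, and is an illustration of the general approach rather than the study's Stata code.

```python
# Hypothetical sketch: backward stepwise selection for a logistic readmission
# model, dropping the least significant predictor until all remaining
# predictors meet the significance threshold.
import statsmodels.api as sm

def backward_select(X, y, threshold=0.05):
    predictors = list(X.columns)
    while predictors:
        model = sm.Logit(y, sm.add_constant(X[predictors])).fit(disp=0)
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] <= threshold:
            return model, predictors        # all remaining predictors are significant
        predictors.remove(worst)            # drop the least significant term and refit
    return None, []
```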

Model Validation. Model validation was performed using 5-fold cross-validation, with the overall cohort randomly divided into 5 equal-size subsets.25 For each cycle, 4 subsets were used for training to estimate model coefficients, and the fifth subset was used for validation. This cycle was repeated 5 times with each randomly-divided subset used once as the validation set. We repeated this entire process 50 times and averaged the C statistic estimates to derive an optimism-corrected C statistic. Model calibration was assessed qualitatively by comparing predicted to observed probabilities of readmission by quintiles of predicted risk, and with the Hosmer-Lemeshow goodness-of-fit test.
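
The repeated cross-validation step can be illustrated as follows; this is a sketch assuming NumPy arrays X and y and an ordinary logistic regression as the fitted model, not the study's code.

```python
# Hypothetical sketch: 5-fold cross-validation repeated 50 times, averaging the
# out-of-fold C statistics (areas under the ROC curve) across all folds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold

def cross_validated_c_statistic(X, y, n_splits=5, n_repeats=50, seed=0):
    aucs = []
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):
        model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        pred = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], pred))
    return np.mean(aucs)   # averaged over 5 folds x 50 repeats
```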

Comparison to Other Models. We primarily compared the performance of the first-day and full-stay pneumonia-specific EHR models to each other and to the corresponding multi-condition EHR models.18,19 The multi-condition EHR models were separately derived and validated within the larger parent cohort from which this study cohort was drawn, and outperformed the CMS all-cause model, the HOSPITAL model, and the LACE index.19 To further triangulate our findings, given the lack of other rigorously validated pneumonia-specific risk-prediction models for readmission,14 we also compared the pneumonia-specific EHR models to the CMS pneumonia model derived from administrative claims data,10 and to 2 commonly used risk-prediction scores for short-term mortality among patients with community-acquired pneumonia, the PSI and CURB-65 scores.16 Although derived and validated using patient-level data, the CMS model was developed to benchmark hospitals according to hospital-level readmission rates.10 The CURB-65 score in this study was also modified to use the same altered mental status diagnostic codes as the modified PSI as a proxy for “confusion.” Both the PSI and CURB-65 scores were calculated using the most abnormal values within the first 24 hours of admission. The ‘updated’ PSI and the ‘updated’ CURB-65 were calculated using the most abnormal values within 24 hours prior to discharge, or the last known observation prior to discharge if no results were recorded within this time period. A complete list of variables for each of the comparison models is shown in Supplemental Table 1.
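
For illustration, a sketch of a CURB-65 calculation of the kind described above follows; the thresholds follow the published CURB-65 criteria, confusion is proxied by a diagnosis-code flag as described, and the example values are hypothetical.

```python
# Hypothetical sketch: CURB-65 computed from the most abnormal values in a chosen
# window, e.g., the first 24 hours of admission or the 24 hours before discharge
# for the "updated" score.
def curb65(confusion: bool, bun_mg_dl: float, resp_rate: int,
           sbp: int, dbp: int, age: int) -> int:
    score = 0
    score += int(confusion)              # C: confusion (diagnosis-code proxy)
    score += int(bun_mg_dl > 19)         # U: BUN > 19 mg/dL (~7 mmol/L urea)
    score += int(resp_rate >= 30)        # R: respiratory rate >= 30/min
    score += int(sbp < 90 or dbp <= 60)  # B: SBP < 90 or DBP <= 60 mm Hg
    score += int(age >= 65)              # 65: age >= 65 years
    return score

# Example: the same hypothetical patient scored with admission vs discharge-day values.
print(curb65(False, 28, 32, 85, 55, 72))   # admission values -> 4
print(curb65(False, 18, 20, 118, 70, 72))  # discharge values -> 1
```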

We assessed model performance by calculating the C statistic, the integrated discrimination index, and the net reclassification index (NRI) compared to our pneumonia-specific models. The integrated discrimination index is the difference between 2 models in the gap in mean predicted probability of readmission between patients who were and were not readmitted, where more positive values suggest improved model performance compared to a reference model.26 The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.27 Here, we calculated a category-based NRI to evaluate how well the pneumonia-specific models classified individuals with and without readmissions into the 2 highest readmission risk quintiles vs the lowest 3 risk quintiles compared to other models.27 This pre-specified cutoff is relevant for hospitals interested in identifying the highest risk individuals for targeted intervention.7 Finally, we assessed calibration of the comparator models in our cohort by comparing predicted to observed probabilities of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, Texas). This study was approved by the University of Texas Southwestern Medical Center Institutional Review Board.
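
A minimal sketch of the category-based NRI at this cutoff (top 2 predicted-risk quintiles vs the bottom 3) follows; it assumes arrays of predicted probabilities from a new and a reference model plus the observed 0/1 outcome, and is illustrative rather than the study's code.

```python
# Hypothetical sketch: two-category NRI where "high risk" means the top 2 quintiles
# of each model's own predicted probabilities.
import numpy as np

def category_nri(p_new, p_ref, y, high_risk_quantile=0.60):
    p_new, p_ref, y = map(np.asarray, (p_new, p_ref, y))
    high_new = p_new >= np.quantile(p_new, high_risk_quantile)
    high_ref = p_ref >= np.quantile(p_ref, high_risk_quantile)
    events, nonevents = y == 1, y == 0
    # Events: moving into the high-risk group is correct; nonevents: moving out is correct.
    nri_events = np.mean(high_new[events]) - np.mean(high_ref[events])
    nri_nonevents = np.mean(~high_new[nonevents]) - np.mean(~high_ref[nonevents])
    return nri_events + nri_nonevents
```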


RESULTS

Among 1463 index hospitalizations (Supplemental Figure 1), the 30-day all-cause readmission rate was 13.6%. Individuals with a 30-day readmission had markedly different sociodemographic and clinical characteristics compared to those who were not readmitted (Table 1; see Supplemental Table 2 for additional clinical characteristics).

Table 1. Baseline Characteristics of Patients Hospitalized with Pneumonia

Derivation, Validation, and Performance of the Pneumonia-Specific Readmission Risk-Prediction Models

The final first-day pneumonia-specific EHR model included 7 variables, including sociodemographic characteristics, prior hospitalizations, thrombocytosis, and the PSI (Table 2). The first-day pneumonia-specific model had adequate discrimination (C statistic, 0.695; optimism-corrected C statistic, 0.675; 95% confidence interval [CI], 0.667-0.685; Table 3). It also effectively stratified individuals across a broad range of risk (average predicted risk by decile ranged from 4% to 33%; Table 3) and was well calibrated (Supplemental Table 3).

Table 2. Final Pneumonia-Specific EHR Risk-Prediction Models for Readmissions

The final full-stay pneumonia-specific EHR readmission model included 8 predictors, including 3 variables from the first-day model (median income, thrombocytosis, and prior hospitalizations; Table 2). The full-stay pneumonia-specific EHR model also included vital sign instabilities on discharge, the updated PSI, and disposition status (ie, being discharged with home health services or to a post-acute care facility was associated with greater odds of readmission, and being discharged to hospice with lower odds). The full-stay pneumonia-specific EHR model had good discrimination (C statistic, 0.731; optimism-corrected C statistic, 0.714; 95% CI, 0.706-0.720), stratified individuals across a broad range of risk (average predicted risk by decile ranged from 3% to 37%; Table 3), and was well calibrated (Supplemental Table 3).

Table 3. Model Performance and Comparison of Pneumonia-Specific EHR Readmissions Models vs Other Models

First-Day Pneumonia-Specific EHR Model vs First-Day Multi-Condition EHR Model

The first-day pneumonia-specific EHR model outperformed the first-day multi-condition EHR model, with better discrimination (P = 0.029) and more correct classification of individuals into the top 2 vs the bottom 3 risk quintiles (Table 3, Supplemental Table 4, and Supplemental Figure 2A). With respect to calibration, the first-day multi-condition EHR model overestimated risk in the highest risk quintile compared to the first-day pneumonia-specific EHR model (Figure 1A, 1B).

Figure 1. Comparison of the calibration of different readmission models

Full-Stay Pneumonia-Specific EHR Model vs Other Models

The full-stay pneumonia-specific EHR model outperformed the corresponding full-stay multi-condition EHR model, as well as the first-day pneumonia-specific EHR model, the CMS pneumonia model, the updated PSI, and the updated CURB-65 (Table 3, Supplemental Tables 5 and 6, and Supplemental Figures 2B and 2C). Compared to the full-stay multi-condition and first-day pneumonia-specific EHR models, the full-stay pneumonia-specific EHR model had better discrimination, better reclassification (NRI, 0.09 and 0.08, respectively), and stratified individuals across a broader range of readmission risk (Table 3). It also had better calibration in the highest risk quintile than the full-stay multi-condition EHR model (Figure 1C and 1D).

Updated vs First-Day Modified PSI and CURB-65 Scores

The updated PSI was more strongly predictive of readmission than the PSI calculated on the day of admission (Wald test, 9.83; P = 0.002). Each 10-point increase in the updated PSI was associated with 22% greater odds of readmission vs an 11% increase for the PSI calculated upon admission (Table 2). The improved predictive ability of the updated PSI and CURB-65 scores was also reflected in their superior discrimination and calibration vs the respective first-day pneumonia severity of illness scores (Table 3).

DISCUSSION

Using routinely available EHR data from 6 diverse hospitals, we developed 2 pneumonia-specific readmission risk-prediction models to help hospitals identify patients hospitalized with pneumonia who are at high risk for readmission. Overall, we found that a pneumonia-specific model using EHR data from the entire hospitalization outperformed all other models—including the first-day pneumonia-specific model using data present only on admission, our own multi-condition EHR models, and the CMS pneumonia model based on administrative claims data—in all aspects of model performance (discrimination, calibration, and reclassification). We found that socioeconomic status, prior hospitalizations, thrombocytosis, and measures of clinical severity and stability were important predictors of 30-day all-cause readmissions among patients hospitalized with pneumonia. Additionally, an updated PSI score calculated at discharge was a stronger independent predictor of readmission than the PSI calculated on admission, and its inclusion in our full-stay pneumonia model led to improved prediction of 30-day readmissions.

The marked improvement in performance of the full-stay pneumonia-specific EHR model compared to the first-day pneumonia-specific model suggests that clinical stability and trajectory during hospitalization (as modeled through disposition status, the updated PSI, and vital sign instabilities at discharge) are important predictors of 30-day readmission among patients hospitalized for pneumonia, which was not the case for our EHR-based multi-condition models.19 With the inclusion of these measures, the full-stay pneumonia-specific model correctly reclassified an additional 8% of patients according to their true risk compared to the first-day pneumonia-specific model. One implication of these findings is that hospitals interested in targeting their highest risk individuals with pneumonia for transitional care interventions could do so using the first-day pneumonia-specific EHR model and could refine their targeting at the time of discharge by using the full-stay pneumonia model. This staged risk-prediction strategy would enable hospitals to initiate transitional care interventions for high-risk individuals in the inpatient setting (eg, patient education).7 Then, hospitals could enroll both persistent and newly identified high-risk individuals in outpatient interventions (eg, a follow-up telephone call) in the immediate post-discharge period, an interval characterized by heightened vulnerability for adverse events,28 based on patients’ illness severity and stability at discharge. Hospitals can implement this approach by building these risk-prediction models directly into the EHR, or by extracting EHR data in near real time as our group has done successfully for heart failure.7
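
For illustration only, the staged strategy described above could be operationalized along the following lines; the function, cutoffs, and example values are hypothetical and any real deployment would require local validation.

```python
# Hypothetical sketch: flag patients with the first-day model on admission, then
# re-score with the full-stay model at discharge to refine the intervention list.
def staged_targeting(first_day_risk: float, full_stay_risk: float,
                     first_day_cutoff: float, full_stay_cutoff: float) -> dict:
    inpatient_intervention = first_day_risk >= first_day_cutoff       # eg, inpatient education
    postdischarge_intervention = full_stay_risk >= full_stay_cutoff   # eg, follow-up phone call
    return {"inpatient": inpatient_intervention,
            "postdischarge": postdischarge_intervention}

# Example: a patient who looks moderate risk on day 1 but unstable at discharge.
print(staged_targeting(0.12, 0.24, first_day_cutoff=0.20, full_stay_cutoff=0.18))
```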

Another key implication of our study is that, for pneumonia, a disease-specific modeling approach has better predictive ability than using a multi-condition model. Compared to multi-condition models, the first-day and full-stay pneumonia-specific EHR models correctly reclassified an additional 6% and 9% of patients, respectively. Thus, hospitals interested in identifying the highest risk patients with pneumonia for targeted interventions should do so using the disease-specific models, if the costs and resources of doing so are within reach of the healthcare system.

An additional novel finding of our study is the added value of an updated PSI for predicting adverse events. Studies of pneumonia severity of illness scores have calculated the PSI and CURB-65 scores using data present only on admission.16,24 While our study also confirms that the PSI calculated upon admission is a significant predictor of readmission,23,29 this study extends this work by showing that an updated PSI score calculated at the time of discharge is an even stronger predictor for readmission, and its inclusion in the model significantly improves risk stratification and prognostication.

Our study was noteworthy for several strengths. First, we used data from a common EHR system, thus potentially allowing for the implementation of the pneumonia-specific models in real time across a number of hospitals. The use of routinely collected data for risk-prediction modeling makes this approach scalable and sustainable, because it obviates the need for burdensome data collection and entry. Second, to our knowledge, this is the first study to measure the additive influence of illness severity and stability at discharge on the readmission risk among patients hospitalized with pneumonia. Third, our study population was derived from 6 hospitals diverse in payer status, age, race/ethnicity, and socioeconomic status. Fourth, our models are less likely to be overfit to the idiosyncrasies of our data given that several predictors included in our final pneumonia-specific models have been associated with readmission in this population, including marital status,13,30 income,11,31 prior hospitalizations,11,13 thrombocytosis,32-34 and vital sign instabilities on discharge.17 Lastly, the discrimination of the CMS pneumonia model in our cohort (C statistic, 0.64) closely matched the discrimination observed in 4 independent cohorts (C statistic, 0.63), suggesting adequate generalizability of our study setting and population.10,12

Our results should be interpreted in the context of several limitations. First, generalizability to other regions beyond north Texas is unknown. Second, although we included a diverse cohort of safety net, community, teaching, and nonteaching hospitals, the pneumonia-specific models were not externally validated in a separate cohort, which may lead to overly optimistic estimates of model performance. Third, the PSI and CURB-65 scores were modified to use diagnostic codes for altered mental status and pleural effusion, and omitted nursing home residence. Thus, the independent associations of the PSI and CURB-65 scores with readmission, and their predictive ability, are likely attenuated. Fourth, we were unable to include data on medications (antibiotics and steroid use) and outpatient visits, which may influence readmission risk.2,9,13,35-40 Fifth, we included only the first pneumonia hospitalization per patient in this study. Had we included multiple hospitalizations per patient, we would anticipate even better model performance for the 2 pneumonia-specific EHR models, since prior hospitalization was a robust predictor of readmission.

In conclusion, the full-stay pneumonia-specific EHR readmission risk-prediction model outperformed the first-day pneumonia-specific model, the multi-condition EHR models, and the CMS pneumonia model. This suggests that measures of clinical severity and stability at the time of discharge are important for identifying patients at highest risk for readmission, and that EHR data routinely collected in clinical practice can be used to accurately predict the risk of readmission among patients hospitalized for pneumonia.


Acknowledgments

The authors would like to acknowledge Ruben Amarasingham, MD, MBA, president and chief executive officer of Parkland Center for Clinical Innovation, and Ferdinand Velasco, MD, chief health information officer at Texas Health Resources for their assistance in assembling the 6-hospital cohort used in this study.

Disclosures

This work was supported by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient-Centered Outcomes Research (R24 HS022418-01); the Commonwealth Foundation (#20100323); the UT Southwestern KL2 Scholars Program supported by the National Institutes of Health (KL2 TR001103 to ANM and OKN); and the National Center for Advancing Translational Sciences at the National Institutes of Health (U54 RFA-TR-12-006 to E.A.H.). The study sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript. The authors have no financial conflicts of interest to disclose.

References

1. Centers for Disease Control and Prevention. Pneumonia. http://www.cdc.gov/nchs/fastats/pneumonia.htm. Accessed January 26, 2016.
2. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418-1428. PubMed
3. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402. PubMed
4. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital-initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433-440. PubMed
5. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528. PubMed
6. Rennke S, Shoeb MH, Nguyen OK, Magan Y, Wachter RM, Ranji SR. Interventions to Improve Care Transitions at Hospital Discharge. Rockville, MD: Agency for Healthcare Research and Quality, US Department of Health and Human Services; March 2013. PubMed
7. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998-1005. PubMed
8. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148-1154. PubMed
9. Hebert C, Shivade C, Foraker R, et al. Diagnosis-specific readmission risk prediction using electronic health data: a retrospective cohort study. BMC Med Inform Decis Mak. 2014;14:65. PubMed
10. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150. PubMed
11. Mather JF, Fortunato GJ, Ash JL, Davis MJ, Kumar A. Prediction of pneumonia 30-day readmissions: a single-center attempt to increase model performance. Respir Care. 2014;59(2):199-208. PubMed
12. O’Brien WJ, Chen Q, Mull HJ, et al. What is the value of adding Medicare data in estimating VA hospital readmission rates? Health Serv Res. 2015;50(1):40-57. PubMed
13. Tang VL, Halm EA, Fine MJ, Johnson CS, Anzueto A, Mortensen EM. Predictors of rehospitalization after admission for pneumonia in the veterans affairs healthcare system. J Hosp Med. 2014;9(6):379-383. PubMed
14. Weinreich M, Nguyen OK, Wang D, et al. Predicting the risk of readmission in pneumonia: a systematic review of model performance. Ann Am Thorac Soc. 2016;13(9):1607-1614. PubMed
15. Kwok CS, Loke YK, Woo K, Myint PK. Risk prediction models for mortality in community-acquired pneumonia: a systematic review. Biomed Res Int. 2013;2013:504136. PubMed
16. Loke YK, Kwok CS, Niruban A, Myint PK. Value of severity scales in predicting mortality from community-acquired pneumonia: systematic review and meta-analysis. Thorax. 2010;65(10):884-890. PubMed
17. Halm EA, Fine MJ, Kapoor WN, Singer DE, Marrie TJ, Siu AL. Instability on hospital discharge and the risk of adverse outcomes in patients with pneumonia. Arch Intern Med. 2002;162(11):1278-1284. PubMed
18. Amarasingham R, Velasco F, Xie B, et al. Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15:39. PubMed
19. Nguyen OK, Makam AN, Clark C, et al. Predicting all-cause readmissions using electronic health record data from the entire hospitalization: Model development and comparison. J Hosp Med. 2016;11(7):473-480. PubMed
20. Lindenauer PK, Lagu T, Shieh MS, Pekow PS, Rothberg MB. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003-2009. JAMA. 2012;307(13):1405-1413. PubMed
21. Ahmedani BK, Solberg LI, Copeland LA, et al. Psychiatric comorbidity and 30-day readmissions after hospitalization for heart failure, AMI, and pneumonia. Psychiatr Serv. 2015;66(2):134-140. PubMed
22. Jasti H, Mortensen EM, Obrosky DS, Kapoor WN, Fine MJ. Causes and risk factors for rehospitalization of patients hospitalized with community-acquired pneumonia. Clin Infect Dis. 2008;46(4):550-556. PubMed
23. Capelastegui A, España Yandiola PP, Quintana JM, et al. Predictors of short-term rehospitalization following discharge of patients hospitalized with community-acquired pneumonia. Chest. 2009;136(4):1079-1085. PubMed
24. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336(4):243-250. PubMed
25. Vittinghoff E, Glidden D, Shiboski S, McCulloch C. Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models (Statistics for Biology and Health). New York City, NY: Springer; 2012.
26. Pencina MJ, D’Agostino RB Sr, D’Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157-172; discussion 207-212. PubMed
27. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician’s guide. Ann Intern Med. 2014;160(2):122-131. PubMed
28. Krumholz HM. Post-hospital syndrome--an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102. PubMed
29. Micek ST, Lang A, Fuller BM, Hampton NB, Kollef MH. Clinical implications for patients treated inappropriately for community-acquired pneumonia in the emergency department. BMC Infect Dis. 2014;14:61. PubMed
30. Metersky ML, Fine MJ, Mortensen EM. The effect of marital status on the presentation and outcomes of elderly male veterans hospitalized for pneumonia. Chest. 2012;142(4):982-987. PubMed
31. Calvillo-King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269-282. PubMed
32. Mirsaeidi M, Peyrani P, Aliberti S, et al. Thrombocytopenia and thrombocytosis at time of hospitalization predict mortality in patients with community-acquired pneumonia. Chest. 2010;137(2):416-420. PubMed
33. Prina E, Ferrer M, Ranzani OT, et al. Thrombocytosis is a marker of poor outcome in community-acquired pneumonia. Chest. 2013;143(3):767-775. PubMed
34. Violi F, Cangemi R, Calvieri C. Pneumonia, thrombosis and vascular disease. J Thromb Haemost. 2014;12(9):1391-1400. PubMed
35. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. N Engl J Med. 1996;334(22):1441-1447. PubMed
36. Field TS, Ogarek J, Garber L, Reed G, Gurwitz JH. Association of early post-discharge follow-up by a primary care physician and 30-day rehospitalization among older adults. J Gen Intern Med. 2015;30(5):565-571. PubMed
37. Spatz ES, Sheth SD, Gosch KL, et al. Usual source of care and outcomes following acute myocardial infarction. J Gen Intern Med. 2014;29(6):862-869. PubMed
38. Brooke BS, Stone DH, Cronenwett JL, et al. Early primary care provider follow-up and readmission after high-risk surgery. JAMA Surg. 2014;149(8):821-828. PubMed
39. Adamuz J, Viasus D, Campreciós-Rodriguez P, et al. A prospective cohort study of healthcare visits and rehospitalizations after discharge of patients with community-acquired pneumonia. Respirology. 2011;16(7):1119-1126. PubMed
40. Shorr AF, Zilberberg MD, Reichley R, et al. Readmission following hospitalization for pneumonia: the impact of pneumonia type and its implication for hospitals. Clin Infect Dis. 2013;57(3):362-367. PubMed

Article Source
© 2017 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Anil N. Makam, MD, MAS; 5323 Harry Hines Blvd., Dallas, TX, 75390-9169; Telephone: 214-648-3272; Fax: 214-648-3232; E-mail: [email protected]

Sneak Peek: Journal of Hospital Medicine

Article Type
Changed
Fri, 09/14/2018 - 12:00
Predicting 30-day pneumonia readmissions using electronic health record data.

 

BACKGROUND: Readmissions after hospitalization for pneumonia are common, but the few risk-prediction models have poor to modest predictive ability. Data routinely collected in the EHR may improve prediction.

OBJECTIVE: To develop pneumonia-specific readmission risk-prediction models using EHR data from the first day and from the entire hospital stay (“full stay”).

DESIGN: Observational cohort study using backward-stepwise selection and cross validation.

SUBJECTS: Consecutive pneumonia hospitalizations from six diverse hospitals in north Texas from 2009 to 2010.

MEASURES: All-cause, nonelective, 30-day readmissions, ascertained from 75 regional hospitals.


RESULTS: Of 1,463 patients, 13.6% were readmitted. The first-day, pneumonia-specific model included sociodemographic factors, prior hospitalizations, thrombocytosis, and a modified pneumonia severity index. The full-stay model included disposition status, vital sign instabilities on discharge, and an updated pneumonia severity index calculated using values from the day of discharge as additional predictors. The full-stay, pneumonia-specific model outperformed the first-day model (C-statistic, 0.731 vs. 0.695; P = .02; net reclassification index = 0.08). Compared with a validated multicondition readmission model, the Centers for Medicare & Medicaid Services pneumonia model, and two commonly used pneumonia severity of illness scores, the full-stay pneumonia-specific model had better discrimination (C-statistic, 0.604-0.681; P less than 0.01 for all comparisons), predicted a broader range of risk, and better reclassified individuals by their true risk (net reclassification index range, 0.09-0.18).

CONCLUSIONS: EHR data collected from the entire hospitalization can accurately predict readmission risk among patients hospitalized for pneumonia. This approach outperforms a first-day, pneumonia-specific model, the Centers for Medicare & Medicaid Services pneumonia model, and two commonly used pneumonia severity of illness scores.
 

Also In JHM This Month

Evaluating automated rules for rapid response system alarm triggers in medical and surgical patients
AUTHORS: Santiago Romero-Brufau, MD; Bruce W. Morlan, MS; Matthew Johnson, MPH; Joel Hickman; Lisa L. Kirkland, MD; James M. Naessens, ScD; Jeanne Huddleston, MD, FACP, FHM

Prognosticating with the Hospital-Patient One-year Mortality Risk score using information abstracted from the medical record
AUTHORS: Genevieve Casey, MD, and Carl van Walraven, MD, FRCPC, MSc

Automating venous thromboembolism risk calculation using electronic health record data upon hospital admission: The Automated Padua Prediction Score
AUTHORS: Pierre Elias, MD; Raman Khanna, MD; Adams Dudley, MD, MBA; Jason Davies, MD, PhD; Ronald Jacolbia, MSN; Kara McArthur, BA; Andrew D. Auerbach, MD, MPH, SFHM

The value of ultrasound in cellulitis to rule out deep venous thrombosis
AUTHORS: Hyung J. Cho, MD, and Andrew S. Dunn, MD, SFHM

Hospital medicine and perioperative care: A framework for high quality, high value collaborative care
AUTHORS: Rachel E. Thompson, MD, MPH, SFHM; Kurt Pfeifer, MD, FHM; Paul Grant, MD, SFHM; Cornelia Taylor, MD; Barbara Slawski, MD, FACP, MS, SFHM; Christopher Whinney, MD, FACP, FHM; Laurence Wellikson, MD, MHM; Amir K. Jaffer, MD, MBA, SFHM
 

Predicting Readmissions from EHR Data

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Predicting all‐cause readmissions using electronic health record data from the entire hospitalization: Model development and comparison

Unplanned hospital readmissions are frequent, costly, and potentially avoidable.[1, 2] Due to major federal financial readmissions penalties targeting excessive 30‐day readmissions, there is increasing attention to implementing hospital‐initiated interventions to reduce readmissions.[3, 4] However, universal enrollment of all hospitalized patients into such programs may be too resource intensive for many hospitals.[5] To optimize efficiency and effectiveness, interventions should be targeted to individuals most likely to benefit.[6, 7] However, existing readmission risk‐prediction models have achieved only modest discrimination, have largely used administrative claims data not available until months after discharge, or are limited to only a subset of patients with Medicare or a specific clinical condition.[8, 9, 10, 11, 12, 13, 14] These limitations have precluded accurate identification of high‐risk individuals in an all‐payer general medical inpatient population to provide actionable information for intervention prior to discharge.

Approaches using electronic health record (EHR) data could allow early identification of high‐risk patients during the index hospitalization to enable initiation of interventions prior to discharge. To date, such strategies have relied largely on EHR data from the day of admission.[15, 16] However, given that variation in 30‐day readmission rates is thought to reflect the quality of in‐hospital care, incorporating EHR data from the entire hospital stay to reflect hospital care processes and clinical trajectory may more accurately identify at‐risk patients.[17, 18, 19, 20] Improved accuracy in risk prediction would help better target intervention efforts in the immediate postdischarge period, an interval characterized by heightened vulnerability for adverse events.[21]

To help hospitals target transitional care interventions more effectively to high‐risk individuals prior to discharge, we derived and validated a readmissions risk‐prediction model incorporating EHR data from the entire course of the index hospitalization, which we termed the full‐stay EHR model. We also compared the full‐stay EHR model performance to our group's previously derived prediction model based on EHR data on the day of admission, termed the first‐day EHR model, as well as to 2 other validated readmission models similarly intended to yield near real‐time risk predictions prior to or shortly after hospital discharge.[9, 10, 15]

METHODS

Study Design, Population, and Data Sources

We conducted an observational cohort study using EHR data from 6 hospitals in the Dallas-Fort Worth metroplex between November 1, 2009 and October 30, 2010; all sites used the same EHR system (Epic Systems Corp., Verona, WI). One site was a university‐affiliated safety net hospital; the remaining 5 sites were teaching and nonteaching community sites.

We included consecutive hospitalizations among adults ≥18 years old discharged alive from any medicine inpatient service. For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who died during the index hospitalization, were transferred to another acute care facility, left against medical advice, or who died outside of the hospital within 30 days of discharge. For model derivation, we randomly split the sample into separate derivation (50%) and validation cohorts (50%).

Outcomes

The primary outcome was 30‐day hospital readmission, defined as a nonelective hospitalization at any of 75 acute care hospitals within a 100‐mile radius of Dallas occurring within 30 days of discharge, ascertained from an all‐payer regional hospitalization database. Nonelective hospitalizations included all hospitalizations classified as emergency, urgent, or trauma, and excluded those classified as elective as per the Centers for Medicare and Medicaid Services Claim Inpatient Admission Type Code definitions.

Predictor Variables for the Full‐Stay EHR Model

The full‐stay EHR model was iteratively developed from our group's previously derived and validated risk‐prediction model using EHR data available on admission (first‐day EHR model).[15] For the full‐stay EHR model, we included all predictor variables included in our published first‐day EHR model as candidate risk factors. Based on prior literature, we additionally expanded candidate predictors available on admission to include marital status (proxy for social isolation) and socioeconomic disadvantage (percent poverty, unemployment, median income, and educational attainment by zip code of residence as proxy measures of the social and built environment).[22, 23, 24, 25, 26, 27] We also expanded the ascertainment of prior hospitalization to include admissions at both the index hospital and any of 75 acute care hospitals from the same, separate all‐payer regional hospitalization database used to ascertain 30‐day readmissions.

Candidate predictors from the remainder of the hospital stay (ie, following the first 24 hours of admission) were included if they were: (1) available in the EHR of all participating hospitals, (2) routinely collected or available at the time of hospital discharge, and (3) plausible predictors of adverse outcomes based on prior literature and clinical expertise. These included length of stay, in‐hospital complications, transfer to an intensive or coronary care unit, blood transfusions, vital sign instabilities within 24 hours of discharge, select laboratory values at time of discharge, and disposition status. We also assessed trajectories of vital signs and selected laboratory values (defined as changes in these measures from admission to discharge).
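
As an illustration (not the study's code), trajectory features of this kind could be computed as in the sketch below; the column naming convention (<measure>_admission, <measure>_discharge) is hypothetical.

```python
# Hypothetical sketch: "trajectory" candidate predictors defined as the change in a
# vital sign or laboratory value from admission to discharge for each hospitalization.
import pandas as pd

def add_trajectory_features(stays: pd.DataFrame,
                            measures=("sodium", "hemoglobin", "heart_rate")) -> pd.DataFrame:
    """Add <measure>_change columns equal to discharge value minus admission value."""
    out = stays.copy()
    for m in measures:
        out[f"{m}_change"] = out[f"{m}_discharge"] - out[f"{m}_admission"]
    return out
```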

Statistical Analysis

Model Derivation

Univariate relationships between readmission and each of the candidate predictors were assessed in the derivation cohort using a prespecified significance threshold of P ≤ 0.05. We included all factors from our previously derived and validated first‐day EHR model as candidate predictors.[15] Continuous laboratory and vital sign values at the time of discharge were categorized based on clinically meaningful cutoffs; predictors with missing values were assumed to be normal (<1% missing for each variable). Significant univariate candidate variables were entered in a multivariate logistic regression model using stepwise backward selection with a prespecified significance threshold of P ≤ 0.05. We performed several sensitivity analyses to confirm the robustness of our model. First, we alternately derived the full‐stay model using stepwise forward selection. Second, we forced in all significant variables from our first‐day EHR model, and entered the candidate variables from the remainder of the hospital stay using both stepwise backward and forward selection separately. Third, prespecified interactions between variables were evaluated for inclusion. Though final predictors varied slightly between the different approaches, discrimination of each model was similar to the model derived using our primary analytic approach (C statistics ± 0.01, data not shown).

Model Validation

We assessed model discrimination and calibration of the derived full‐stay EHR model using the validation cohort. Model discrimination was estimated by the C statistic. The C statistic represents the probability that, given 2 hospitalized individuals (1 who was readmitted and the other who was not), the model will predict a higher risk for the readmitted patient than for the nonreadmitted patient. Model calibration was assessed by comparing predicted to observed probabilities of readmission by quintiles of risk, and with the Hosmer‐Lemeshow goodness‐of‐fit test.
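
A minimal sketch of the calibration check by quintiles of predicted risk follows, assuming vectors of predicted probabilities and observed 0/1 outcomes; the formal Hosmer‐Lemeshow test compares the same predicted and observed quantities with a chi-square statistic.

```python
# Hypothetical sketch: compare mean predicted vs observed readmission rates within
# quintiles of predicted risk.
import pandas as pd

def calibration_by_quintile(predicted, observed) -> pd.DataFrame:
    df = pd.DataFrame({"pred": predicted, "obs": observed})
    df["quintile"] = pd.qcut(df["pred"], 5, labels=False, duplicates="drop")
    return df.groupby("quintile").agg(mean_predicted=("pred", "mean"),
                                      observed_rate=("obs", "mean"),
                                      n=("obs", "size"))
```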

Comparison to Existing Models

We compared the full‐stay EHR model performance to 3 previously published models: our group's first‐day EHR model, and the LACE (includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year) and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay) models, which were both derived to predict 30‐day readmissions among general medical inpatients and were intended to help clinicians identify high‐risk patients to target for discharge interventions.[9, 10, 15] We assessed each model's performance in our validation cohort, calculating the C statistic, integrated discrimination index (IDI), and net reclassification index (NRI) compared to the full‐stay model. IDI is a summary measure of both discrimination and reclassification, where more positive values suggest improvement in model performance in both these domains compared to a reference model.[28] The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.[29] The theoretical range of values is −2 to 2, with more positive values indicating improved net reclassification compared to a reference model. Here, we calculated a category‐based NRI to evaluate the performance of models in correctly classifying individuals with and without readmissions into the highest readmission risk quintile versus the lowest 4 risk quintiles compared to the full‐stay EHR model.[29] This prespecified cutoff is relevant for hospitals interested in identifying the highest‐risk individuals for targeted intervention.[6] Because some hospitals may be able to target a greater number of individuals for intervention, we performed a sensitivity analysis by assessing category‐based NRI for reclassification into the top 2 risk quintiles versus the lowest 3 risk quintiles and found no meaningful difference in our results (data not shown). Finally, we qualitatively assessed calibration of comparator models in our validation cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, TX). This study was approved by the UT Southwestern Medical Center institutional review board.
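
For illustration, the IDI described above can be computed as in the sketch below, assuming arrays of predicted probabilities from the full‐stay and a comparator model plus the observed outcome; this is not the study's code.

```python
# Hypothetical sketch: IDI as the change, between a new and a reference model, in the
# gap between mean predicted risk for readmitted vs non-readmitted patients.
import numpy as np

def idi(p_new, p_ref, y) -> float:
    p_new, p_ref, y = map(np.asarray, (p_new, p_ref, y))
    slope_new = p_new[y == 1].mean() - p_new[y == 0].mean()
    slope_ref = p_ref[y == 1].mean() - p_ref[y == 0].mean()
    return slope_new - slope_ref   # positive values favor the new model
```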

RESULTS

Overall, 32,922 index hospitalizations were included in our study cohort; 12.7% resulted in a 30‐day readmission (see Supporting Figure 1 in the online version of this article). Individuals had a mean age of 62 years and had diverse race/ethnicity and primary insurance status; half were female (Table 1). The study sample was randomly split into a derivation cohort (50%, n = 16,492) and validation cohort (50%, n = 16,430). Individuals in the derivation cohort with a 30‐day readmission had markedly different socioeconomic and clinical characteristics compared to those not readmitted (Table 1).

Baseline Characteristics and Candidate Variables for Risk‐Prediction Model
Entire Cohort, N = 32,922 Derivation Cohort, N = 16,492

No Readmission, N = 14,312

Readmission, N = 2,180

P Value
  • NOTE: Abbreviations: ED, emergency department; ICU, intensive care unit; IQR, interquartile range; SD, standard deviation. *≥20% poverty in zip code as per high poverty area US Census designation. Prior ED visit at site of index hospitalization within the past year. Prior hospitalization at any of 75 acute care hospitals in the North Texas region within the past year. Nonelective admission defined as hospitalization categorized as medical emergency, urgent, or trauma. ∥Calculated from diagnoses available within 1 year prior to index hospitalization. Conditions were considered complications if they were not listed as a principal diagnosis for hospitalization or as a previous diagnosis in the prior year. #On day of discharge or last known observation before discharge. Instabilities were defined as temperature ≥37.8°C, heart rate >100 beats/minute, respiratory rate >24 breaths/minute, systolic blood pressure ≤90 mm Hg, or oxygen saturation <90%. **Discharges to nursing home, skilled nursing facility, or long-term acute care hospital.

Demographic characteristics
Age, y, mean (SD) 62 (17.3) 61 (17.4) 64 (17.0) 0.001
Female, n (%) 17,715 (53.8) 7,694 (53.8) 1,163 (53.3) 0.72
Race/ethnicity 0.001
White 21,359 (64.9) 9,329 (65.2) 1,361 (62.4)
Black 5,964 (18.1) 2,520 (17.6) 434 (19.9)
Hispanic 4,452 (13.5) 1,931 (13.5) 338 (15.5)
Other 1,147 (3.5) 532 (3.7) 47 (2.2)
Marital status, n (%) 0.001
Single 8,076 (24.5) 3,516 (24.6) 514 (23.6)
Married 13,394 (40.7) 5,950 (41.6) 812 (37.3)
Separated/divorced 3,468 (10.5) 1,460 (10.2) 251 (11.5)
Widowed 4,487 (13.7) 1,868 (13.1) 388 (17.8)
Other 3,497 (10.6) 1,518 (10.6) 215 (9.9)
Primary payer, n (%) 0.001
Private 13,090 (39.8) 5,855 (40.9) 726 (33.3)
Medicare 13,015 (39.5) 5,597 (39.1) 987 (45.3)
Medicaid 2,204 (6.7) 852 (5.9) 242 (11.1)
Charity, self‐pay, or other 4,613 (14.0) 2,008 (14.0) 225 (10.3)
High‐poverty neighborhood, n (%)* 7,468 (22.7) 3,208 (22.4) 548 (25.1) 0.001
Utilization history
≥1 ED visits in past year, n (%) 9,299 (28.2) 3,793 (26.5) 823 (37.8) 0.001
≥1 hospitalizations in past year, n (%) 10,189 (30.9) 4,074 (28.5) 1,012 (46.4) 0.001
Clinical factors from first day of hospitalization
Nonelective admission, n (%) 27,818 (84.5) 11,960 (83.6) 1,960 (89.9) 0.001
Charlson Comorbidity Index, median (IQR)∥ 0 (0-1) 0 (0-0) 0 (0-3) 0.001
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 355 (1.1) 119 (0.8) 46 (2.1) 0.001
Albumin 2-3 g/dL 4,732 (14.4) 1,956 (13.7) 458 (21.0) 0.001
Aspartate aminotransferase >40 U/L 4,610 (14.0) 1,922 (13.4) 383 (17.6) 0.001
Creatine phosphokinase <60 µg/L 3,728 (11.3) 1,536 (10.7) 330 (15.1) 0.001
Mean corpuscular volume >100 fL/red cell 1,346 (4.1) 537 (3.8) 134 (6.2) 0.001
Platelets <90 × 10³/µL 912 (2.8) 357 (2.5) 116 (5.3) 0.001
Platelets >350 × 10³/µL 3,332 (10.1) 1,433 (10.0) 283 (13.0) 0.001
Prothrombin time >35 seconds 248 (0.8) 90 (0.6) 35 (1.6) 0.001
Clinical factors from remainder of hospital stay
Length of stay, d, median (IQR) 4 (2-6) 4 (2-6) 5 (3-8) 0.001
ICU transfer after first 24 hours, n (%) 988 (3.0) 408 (2.9) 94 (4.3) 0.001
Hospital complications, n (%)
Clostridium difficile infection 119 (0.4) 44 (0.3) 24 (1.1) 0.001
Pressure ulcer 358 (1.1) 126 (0.9) 46 (2.1) 0.001
Venous thromboembolism 301 (0.9) 112 (0.8) 34 (1.6) 0.001
Respiratory failure 1,048 (3.2) 463 (3.2) 112 (5.1) 0.001
Central line‐associated bloodstream infection 22 (0.07) 6 (0.04) 5 (0.23) 0.005
Catheter‐associated urinary tract infection 47 (0.14) 20 (0.14) 6 (0.28) 0.15
Acute myocardial infarction 293 (0.9) 110 (0.8) 32 (1.5) 0.001
Pneumonia 1,754 (5.3) 719 (5.0) 154 (7.1) 0.001
Sepsis 853 (2.6) 368 (2.6) 73 (3.4) 0.04
Blood transfusion during hospitalization, n (%) 4,511 (13.7) 1,837 (12.8) 425 (19.5) 0.001
Laboratory abnormalities at discharge#
Blood urea nitrogen >20 mg/dL, n (%) 10,014 (30.4) 4,077 (28.5) 929 (42.6) 0.001
Sodium <135 mEq/L, n (%) 4,583 (13.9) 1,850 (12.9) 440 (20.2) 0.001
Hematocrit ≤27 3,104 (9.4) 1,231 (8.6) 287 (13.2) 0.001
≥1 vital sign instability at discharge, n (%)# 6,192 (18.8) 2,624 (18.3) 525 (24.1) 0.001
Discharge location, n (%) 0.001
Home 23,339 (70.9) 10,282 (71.8) 1,383 (63.4)
Home health 3,185 (9.7) 1,356 (9.5) 234 (10.7)
Postacute care** 5,990 (18.2) 2,496 (17.4) 549 (25.2)
Hospice 408 (1.2) 178 (1.2) 14 (0.6)

Derivation and Validation of the Full‐Stay EHR Model for 30‐Day Readmission

Our final model included 24 independent variables, including demographic characteristics, utilization history, clinical factors from the first day of admission, and clinical factors from the remainder of the hospital stay (Table 2). The strongest independent predictor of readmission was hospital-acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI] 1.18-3.48); other hospital-acquired complications including pressure ulcers and venous thromboembolism were also significant predictors. Though having Medicaid was associated with increased odds of readmission (AOR: 1.55, 95% CI: 1.31-1.83), other zip code-level measures of socioeconomic disadvantage were not predictive and were not included in the final model. Being discharged to hospice was associated with markedly lower odds of readmission (AOR: 0.23, 95% CI: 0.13-0.40).

Final Full‐Stay EHR Model Predicting 30‐Day Readmissions (Derivation Cohort, N = 16,492)
Odds Ratio (95% CI)
Univariate Multivariate*
  • NOTE: Abbreviations: CI, confidence interval; ED, emergency department. *Values shown reflect adjusted odds ratios and 95% CI for each factor after adjustment for all other factors listed in the table.

Demographic characteristics
Age, per 10 years 1.08 (1.05-1.11) 1.07 (1.04-1.10)
Medicaid 1.97 (1.70-2.29) 1.55 (1.31-1.83)
Widow 1.44 (1.28-1.63) 1.27 (1.11-1.45)
Utilization history
Prior ED visit, per visit 1.08 (1.06-1.10) 1.04 (1.02-1.06)
Prior hospitalization, per hospitalization 1.30 (1.27-1.34) 1.16 (1.12-1.20)
Hospital and clinical factors from first day of hospitalization
Nonelective admission 1.75 (1.51-2.03) 1.42 (1.22-1.65)
Charlson Comorbidity Index, per point 1.19 (1.17-1.21) 1.06 (1.04-1.09)
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 2.57 (1.82-3.62) 1.52 (1.05-2.21)
Albumin 2-3 g/dL 1.68 (1.50-1.88) 1.20 (1.06-1.36)
Aspartate aminotransferase >40 U/L 1.37 (1.22-1.55) 1.21 (1.06-1.38)
Creatine phosphokinase <60 µg/L 1.48 (1.30-1.69) 1.28 (1.11-1.46)
Mean corpuscular volume >100 fL/red cell 1.68 (1.38-2.04) 1.32 (1.07-1.62)
Platelets <90 × 10³/µL 2.20 (1.77-2.72) 1.56 (1.23-1.97)
Platelets >350 × 10³/µL 1.34 (1.17-1.54) 1.24 (1.08-1.44)
Prothrombin time >35 seconds 2.58 (1.74-3.82) 1.92 (1.27-2.90)
Hospital and clinical factors from remainder of hospital stay
Length of stay, per day 1.08 (1.07-1.09) 1.06 (1.04-1.07)
Hospital complications
Clostridium difficile infection 3.61 (2.19-5.95) 2.03 (1.18-3.48)
Pressure ulcer 2.43 (1.73-3.41) 1.64 (1.15-2.34)
Venous thromboembolism 2.01 (1.36-2.96) 1.55 (1.03-2.32)
Laboratory abnormalities at discharge
Blood urea nitrogen >20 mg/dL 1.86 (1.70-2.04) 1.37 (1.24-1.52)
Sodium <135 mEq/L 1.70 (1.52-1.91) 1.34 (1.18-1.51)
Hematocrit ≤27 1.61 (1.40-1.85) 1.22 (1.05-1.41)
Vital sign instability at discharge, per instability 1.29 (1.20-1.40) 1.25 (1.15-1.36)
Discharged to hospice 0.51 (0.30-0.89) 0.23 (0.13-0.40)
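
As a worked illustration of how the adjusted odds ratios in Table 2 combine into an individual predicted probability through the logistic link (log odds are the intercept plus ln(OR) times each predictor value), consider the sketch below. The intercept, the subset of predictors, and the example patient are hypothetical placeholders, because the article reports odds ratios rather than fitted coefficients; this is arithmetic illustration only, not a usable risk calculator.

```python
# Illustrative arithmetic: turning adjusted odds ratios into a predicted
# 30-day readmission probability via the logistic link. The intercept and
# patient values are hypothetical placeholders.
import math

adjusted_or = {                        # selected predictors from Table 2
    "age_per_10y": 1.07,               # per 10 years of age
    "medicaid": 1.55,
    "prior_hospitalizations": 1.16,    # per prior hospitalization
    "length_of_stay_days": 1.06,       # per hospital day
    "bun_gt_20_at_discharge": 1.37,
}

def predicted_risk(x, intercept=-3.0):
    """x maps predictor name -> value (a count or a 0/1 indicator)."""
    log_odds = intercept + sum(math.log(adjusted_or[k]) * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: 72-year-old Medicaid patient with 1 prior hospitalization,
# a 5-day stay, and an elevated BUN at discharge.
example = {"age_per_10y": 7.2, "medicaid": 1, "prior_hospitalizations": 1,
           "length_of_stay_days": 5, "bun_gt_20_at_discharge": 1}
print(round(predicted_risk(example), 3))
```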

In our validation cohort, the full‐stay EHR model had fair discrimination, with a C statistic of 0.69 (95% CI: 0.68‐0.70) (Table 3). The full‐stay EHR model was well calibrated across all quintiles of risk, with slight overestimation of predicted risk in the lowest and highest quintiles (Figure 1a) (see Supporting Table 5 in the online version of this article). It also effectively stratified individuals across a broad range of predicted readmission risk from 4.1% in the lowest decile to 36.5% in the highest decile (Table 3).

Comparison of the Discrimination and Reclassification of Different Readmission Models*
Model Name C‐Statistic (95% CI) IDI, % (95% CI) NRI (95% CI) Average Predicted Risk, %
Lowest Decile Highest Decile
  • NOTE: Abbreviations: CI, confidence interval; EHR, electronic health record; IDI, Integrated Discrimination Improvement; NRI, Net Reclassification Index. *All measures were assessed using the validation cohort (N = 16,430), except for estimating the C-statistic for the derivation cohort. P value <0.001 for all pairwise comparisons of C-statistic between full-stay model and first-day, LACE, and HOSPITAL models, respectively. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Full‐stay EHR model
Derivation cohort 0.72 (0.70 to 0.73) 4.1 36.5
Validation cohort 0.69 (0.68 to 0.70) [Reference] [Reference] 4.1 36.5
First-day EHR model 0.67 (0.66 to 0.68) -1.2 (-1.4 to -1.0) -0.020 (-0.038 to -0.002) 5.8 31.9
LACE model 0.65 (0.64 to 0.66) -2.6 (-2.9 to -2.3) -0.046 (-0.067 to -0.024) 6.1 27.5
HOSPITAL model 0.64 (0.62 to 0.65) -3.2 (-3.5 to -2.9) -0.058 (-0.080 to -0.035) 6.7 26.6
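
The table footnote reports P value <0.001 for each pairwise comparison of C-statistics against the full-stay model, but the article does not state which test was used. One possible approach is sketched below: a paired bootstrap of the difference in C statistics for two models scored on the same validation cohort (an illustrative choice, not the authors' method).

```python
# One way to obtain a p-value for a pairwise comparison of C statistics on
# the same cohort: a paired bootstrap of the difference in AUC. This is an
# illustrative choice; the article does not specify its test.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_difference(y, p_a, p_b, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y, p_a, p_b = map(np.asarray, (y, p_a, p_b))
    n = len(y)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample patients with replacement
        if y[idx].min() == y[idx].max():     # skip resamples with a single class
            continue
        diffs.append(roc_auc_score(y[idx], p_a[idx])
                     - roc_auc_score(y[idx], p_b[idx]))
    diffs = np.asarray(diffs)
    # crude two-sided p-value from the bootstrap distribution of the difference
    p_value = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return diffs.mean(), p_value
```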
Figure 1
Comparison of the calibration of different readmission models. Calibration graphs for full‐stay (a), first‐day (b), LACE (c), and HOSPITAL (d) models in the validation cohort. Each graph shows predicted probability compared to observed probability of readmission by quintiles of risk for each model. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Comparing the Performance of the Full‐Stay EHR Model to Other Models

The full-stay EHR model had better discrimination compared to the first-day EHR model and the LACE and HOSPITAL models, though the magnitude of improvement was modest (Table 3). The full-stay EHR model also stratified individuals across a broader range of readmission risk, and was better able to discriminate and classify those in the highest quintile of risk from those in the lowest 4 quintiles of risk compared to other models as assessed by the IDI and NRI (Table 3) (see Supporting Tables 1-4 and Supporting Figure 2 in the online version of this article). In terms of model calibration, both the first-day EHR and LACE models were also well calibrated, whereas the HOSPITAL model was less robust (Figure 1).

The diagnostic accuracy of the full‐stay EHR model in correctly predicting those in the highest quintile of risk was better than that of the first‐day, LACE, and HOSPITAL models, though overall improvements in the sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios were also modest (see Supporting Table 6 in the online version of this article).
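
For clarity, the diagnostic-accuracy measures referenced above follow from a 2x2 table in which membership in the top quintile of predicted risk is treated as a positive test for 30-day readmission. A minimal sketch (function and variable names are illustrative):

```python
# Sketch of the diagnostic-accuracy measures referenced above, computed from
# a 2x2 table that flags the top quintile of predicted risk as "test positive."
import numpy as np

def top_quintile_accuracy(y, p_hat):
    y = np.asarray(y, dtype=bool)
    p_hat = np.asarray(p_hat, dtype=float)
    flagged = p_hat >= np.quantile(p_hat, 0.8)

    tp = np.sum(flagged & y)
    fp = np.sum(flagged & ~y)
    fn = np.sum(~flagged & y)
    tn = np.sum(~flagged & ~y)

    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens,
            "specificity": spec,
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "positive_LR": sens / (1 - spec),
            "negative_LR": (1 - sens) / spec}
```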

DISCUSSION

In this study, we used clinically detailed EHR data from the entire hospitalization on 32,922 individuals treated in 6 diverse hospitals to develop an all‐payer, multicondition readmission risk‐prediction model. To our knowledge, this is the first 30‐day hospital readmission risk‐prediction model to use a comprehensive set of factors from EHR data from the entire hospital stay. Prior EHR‐based models have focused exclusively on data available on or prior to the first day of admission, which account for clinical severity on admission but do not account for factors uncovered during the inpatient stay that influence the chance of a postdischarge adverse outcome.[15, 30] We specifically assessed the prognostic impact of a comprehensive set of factors from the entire index hospitalization, including hospital‐acquired complications, clinical trajectory, and stability on discharge in predicting hospital readmissions. Our full‐stay EHR model had statistically better discrimination, calibration, and diagnostic accuracy than our existing all‐cause first‐day EHR model[15] and 2 previously published readmissions models that included more limited information from hospitalization (such as length of stay).[9, 10] However, although the more complicated full‐stay EHR model was statistically better than previously published models, we were surprised that the predictive performance was only modestly improved despite the inclusion of many additional clinically relevant prognostic factors.

Taken together, our study has several important implications. First, the added complexity and resource intensity of implementing a full‐stay EHR model yields only modestly improved readmission risk prediction. Thus, hospitals and healthcare systems interested in targeting their highest‐risk individuals for interventions to reduce 30‐day readmission should consider doing so within the first day of hospital admission. Our group's previously derived and validated first‐day EHR model, which used data only from the first day of admission, qualitatively performed nearly as well as the full‐stay EHR model.[15] Additionally, a recent study using only preadmission EHR data to predict 30‐day readmissions also achieved similar discrimination and diagnostic accuracy as our full‐stay model.[30]

Second, the field of readmissions risk-prediction modeling may be reaching the maximum achievable model performance using data that are currently available in the EHR. Our limited ability to accurately predict all-cause 30-day readmission risk may reflect the influence of currently unmeasured patient, system, and community factors on readmissions.[31, 32, 33] Due to the constraints of data collected in the EHR, we were unable to include several patient-level clinical characteristics associated with hospital readmission, including self-perceived health status, functional impairment, and cognition.[33, 34, 35, 36] However, given their modest effect sizes (ORs ranging from 1.06 to 2.10), adequately measuring and including these risk factors in our model may not meaningfully improve model performance and diagnostic accuracy. Further, many social and behavioral patient-level factors are also not consistently available in EHR data. Though we explored the role of several neighborhood-level socioeconomic measures, including prevalence of poverty, median income, education, and unemployment, we found that none were significantly associated with 30-day readmissions. These particular measures may have been inadequate to characterize individual-level social and behavioral factors, as several previous studies have demonstrated that patient-level factors such as social support, substance abuse, and medication and visit adherence can influence readmission risk in heart failure and pneumonia.[11, 16, 22, 25] This underscores the need for more standardized routine collection of data across functional, social, and behavioral domains in clinical settings, as recently championed by the Institute of Medicine.[11, 37] Integrating data from outside the EHR on postdischarge health behaviors, self-management, follow-up care, recovery, and home environment may be another important but untapped strategy for further improving prediction of readmissions.[25, 38]

Third, a multicondition readmission risk‐prediction model may be a less effective strategy than more customized disease‐specific models for selected conditions associated with high 30‐day readmission rates. Our group's previously derived and internally validated models for heart failure and human immunodeficiency virus had superior discrimination compared to our full‐stay EHR model (C statistic of 0.72 for each).[11, 13] However, given differences in the included population and time periods studied, a head‐to‐head comparison of these different strategies is needed to assess differences in model performance and utility.

Our study had several strengths. To our knowledge, this is the first study to rigorously measure the additive influence of in‐hospital complications, clinical trajectory, and stability on discharge on the risk of 30‐day hospital readmission. Additionally, our study included a large, diverse study population that included all payers, all ages of adults, a mix of community, academic, and safety net hospitals, and individuals from a broad array of racial/ethnic and socioeconomic backgrounds.

Our results should be interpreted in light of several limitations. First, though we sought to represent a diverse group of hospitals, all study sites were located within north Texas and generalizability to other regions is uncertain. Second, our ascertainment of prior hospitalizations and readmissions was more inclusive than what could be typically accomplished in real time using only EHR data from a single clinical site. We performed a sensitivity analysis using only prior utilization data available within the EHR from the index hospital with no meaningful difference in our findings (data not shown). Additionally, a recent study found that 30‐day readmissions occur at the index hospital for over 75% of events, suggesting that 30‐day readmissions are fairly comprehensively captured even with only single‐site data.[39] Third, we were not able to include data on outpatient visits before or after the index hospitalization, which may influence the risk of readmission.[1, 40]

In conclusion, incorporating clinically granular EHR data from the entire course of hospitalization modestly improves prediction of 30‐day readmissions compared to models that only include information from the first 24 hours of hospital admission or models that use far fewer variables. However, given the limited improvement in prediction, our findings suggest that from the practical perspective of implementing real‐time models to identify those at highest risk for readmission, it may not be worth the added complexity of waiting until the end of a hospitalization to leverage additional data on hospital complications, and the trajectory of laboratory and vital sign values currently available in the EHR. Further improvement in prediction of readmissions will likely require accounting for psychosocial, functional, behavioral, and postdischarge factors not currently present in the inpatient EHR.

Disclosures: This study was presented at the Society of Hospital Medicine 2015 Annual Meeting in National Harbor, Maryland, and the Society of General Internal Medicine 2015 Annual Meeting in Toronto, Canada. This work was supported by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient-Centered Outcomes Research (1R24HS022418-01) and the Commonwealth Foundation (#20100323). Drs. Nguyen and Makam received funding from the UT Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). Dr. Halm was also supported in part by NIH/NCATS U54 RFA-TR-12-006. The study sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The authors have no conflicts of interest to disclose.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418-1428.
  2. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  3. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital-initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433-440.
  4. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528.
  5. Rennke S, Shoeb MH, Nguyen OK, Magan Y, Wachter RM, Ranji SR. Interventions to Improve Care Transitions at Hospital Discharge. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
  6. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998-1005.
  7. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148-1154.
  8. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698.
  9. Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551-557.
  10. Donze J, Aujesky D, Williams D, Schnipper JL. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632-638.
  11. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48(11):981-988.
  12. Singal AG, Rahimi RS, Clark C, et al. An automated model using electronic medical record data identifies patients with cirrhosis at high risk for readmission. Clin Gastroenterol Hepatol. 2013;11(10):1335-1341.e1331.
  13. Nijhawan AE, Clark C, Kaplan R, Moore B, Halm EA, Amarasingham R. An electronic medical record-based model to predict 30-day risk of readmission and death among HIV-infected inpatients. J Acquir Immune Defic Syndr. 2012;61(3):349-358.
  14. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital-wide performance on 30-day unplanned readmission. Ann Intern Med. 2014;161(10 suppl):S66-S75.
  15. Amarasingham R, Velasco F, Xie B, et al. Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15(1):39.
  16. Watson AJ, O'Rourke J, Jethwani K, et al. Linking electronic health record-extracted psychosocial data in real-time to risk of readmission for heart failure. Psychosomatics. 2011;52(4):319-327.
  17. Ashton CM, Wray NP. A conceptual framework for the study of early readmission as an indicator of quality of care. Soc Sci Med. 1996;43(11):1533-1541.
  18. Dharmarajan K, Hsieh AF, Lin Z, et al. Hospital readmission performance and patterns of readmission: retrospective cohort study of Medicare admissions. BMJ. 2013;347:f6571.
  19. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371(23):2145-2147.
  20. Bradley EH, Sipsma H, Horwitz LI, et al. Hospital strategy uptake and reductions in unplanned readmission rates for patients with heart failure: a prospective study. J Gen Intern Med. 2015;30(5):605-611.
  21. Krumholz HM. Post-hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102.
  22. Calvillo-King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269-282.
  23. Keyhani S, Myers LJ, Cheng E, Hebert P, Williams LS, Bravata DM. Effect of clinical and social risk factors on hospital profiling for stroke readmission: a cohort study. Ann Intern Med. 2014;161(11):775-784.
  24. Kind AJ, Jencks S, Brock J, et al. Neighborhood socioeconomic disadvantage and 30-day rehospitalization: a retrospective cohort study. Ann Intern Med. 2014;161(11):765-774.
  25. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community-dwelling Medicare beneficiaries. Gerontologist. 2008;48(4):495-504.
  26. Hu J, Gonsahn MD, Nerenz DR. Socioeconomic status and readmissions: evidence from an urban teaching hospital. Health Aff (Millwood). 2014;33(5):778-785.
  27. Nagasako EM, Reidhead M, Waterman B, Dunagan WC. Adding socioeconomic data to hospital readmissions calculations may produce more useful results. Health Aff (Millwood). 2014;33(5):786-791.
  28. Pencina MJ, D'Agostino RB, D'Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157-172; discussion 207-212.
  29. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician's guide. Ann Intern Med. 2014;160(2):122-131.
  30. Shadmi E, Flaks-Manov N, Hoshen M, Goldman O, Bitterman H, Balicer RD. Predicting 30-day readmissions with preadmission electronic health record data. Med Care. 2015;53(3):283-289.
  31. Kangovi S, Grande D. Hospital readmissions—not just a measure of quality. JAMA. 2011;306(16):1796-1797.
  32. Joynt KE, Jha AK. Thirty-day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366-1369.
  33. Greysen SR, Stijacic Cenzer I, Auerbach AD, Covinsky KE. Functional impairment and hospital readmission in Medicare seniors. JAMA Intern Med. 2015;175(4):559-565.
  34. Holloway JJ, Thomas JW, Shapiro L. Clinical and sociodemographic risk factors for readmission of Medicare beneficiaries. Health Care Financ Rev. 1988;10(1):27-36.
  35. Patel A, Parikh R, Howell EH, Hsich E, Landers SH, Gorodeski EZ. Mini-cog performance: novel marker of post discharge risk among patients hospitalized for heart failure. Circ Heart Fail. 2015;8(1):8-16.
  36. Hoyer EH, Needham DM, Atanelov L, Knox B, Friedman M, Brotman DJ. Association of impaired functional status at hospital discharge and subsequent rehospitalization. J Hosp Med. 2014;9(5):277-282.
  37. Adler NE, Stead WW. Patients in context—EHR capture of social and behavioral determinants of health. N Engl J Med. 2015;372(8):698-701.
  38. Nguyen OK, Chan CV, Makam A, Stieglitz H, Amarasingham R. Envisioning a social-health information exchange as a platform to support a patient-centered medical neighborhood: a feasibility study. J Gen Intern Med. 2015;30(1):60-67.
  39. Henke RM, Karaca Z, Lin H, Wier LM, Marder W, Wong HS. Patient factors contributing to variation in same-hospital readmission rate. Med Care Res Rev. 2015;72(3):338-358.
  40. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. N Engl J Med. 1996;334(22):1441-1447.

Statistical Analysis

Model Derivation

Univariate relationships between readmission and each of the candidate predictors were assessed in the derivation cohort using a prespecified significance threshold of P 0.05. We included all factors from our previously derived and validated first‐day EHR model as candidate predictors.[15] Continuous laboratory and vital sign values at the time of discharge were categorized based on clinically meaningful cutoffs; predictors with missing values were assumed to be normal (<1% missing for each variable). Significant univariate candidate variables were entered in a multivariate logistic regression model using stepwise backward selection with a prespecified significance threshold of P 0.05. We performed several sensitivity analyses to confirm the robustness of our model. First, we alternately derived the full‐stay model using stepwise forward selection. Second, we forced in all significant variables from our first‐day EHR model, and entered the candidate variables from the remainder of the hospital stay using both stepwise backward and forward selection separately. Third, prespecified interactions between variables were evaluated for inclusion. Though final predictors varied slightly between the different approaches, discrimination of each model was similar to the model derived using our primary analytic approach (C statistics 0.01, data not shown).

Model Validation

We assessed model discrimination and calibration of the derived full‐stay EHR model using the validation cohort. Model discrimination was estimated by the C statistic. The C statistic represents the probability that, given 2 hospitalized individuals (1 who was readmitted and the other who was not), the model will predict a higher risk for the readmitted patient than for the nonreadmitted patient. Model calibration was assessed by comparing predicted to observed probabilities of readmission by quintiles of risk, and with the Hosmer‐Lemeshow goodness‐of‐fit test.

Comparison to Existing Models

We compared the full‐stay EHR model performance to 3 previously published models: our group's first‐day EHR model, and the LACE (includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year) and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay) models, which were both derived to predict 30‐day readmissions among general medical inpatients and were intended to help clinicians identify high‐risk patients to target for discharge interventions.[9, 10, 15] We assessed each model's performance in our validation cohort, calculating the C statistic, integrated discrimination index (IDI), and net reclassification index (NRI) compared to the full‐stay model. IDI is a summary measure of both discrimination and reclassification, where more positive values suggest improvement in model performance in both these domains compared to a reference model.[28] The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.[29] The theoretical range of values is 2 to 2, with more positive values indicating improved net reclassification compared to a reference model. Here, we calculated a category‐based NRI to evaluate the performance of models in correctly classifying individuals with and without readmissions into the highest readmission risk quintile versus the lowest 4 risk quintiles compared to the full‐stay EHR model.[29] This prespecified cutoff is relevant for hospitals interested in identifying the highest‐risk individuals for targeted intervention.[6] Because some hospitals may be able to target a greater number of individuals for intervention, we performed a sensitivity analysis by assessing category‐based NRI for reclassification into the top 2 risk quintiles versus the lowest 3 risk quintiles and found no meaningful difference in our results (data not shown). Finally, we qualitatively assessed calibration of comparator models in our validation cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, TX). This study was approved by the UT Southwestern Medical Center institutional review board.

RESULTS

Overall, 32,922 index hospitalizations were included in our study cohort; 12.7% resulted in a 30‐day readmission (see Supporting Figure 1 in the online version of this article). Individuals had a mean age of 62 years and had diverse race/ethnicity and primary insurance status; half were female (Table 1). The study sample was randomly split into a derivation cohort (50%, n = 16,492) and validation cohort (50%, n = 16,430). Individuals in the derivation cohort with a 30‐day readmission had markedly different socioeconomic and clinical characteristics compared to those not readmitted (Table 1).

Baseline Characteristics and Candidate Variables for Risk‐Prediction Model
Entire Cohort, N = 32,922 Derivation Cohort, N = 16,492

No Readmission, N = 14,312

Readmission, N = 2,180

P Value
  • NOTE: Abbreviations: ED, emergency department; ICU, intensive care unit; IQR, interquartile range; SD, standard deviation. *20% poverty in zip code as per high poverty area US Census designation. Prior ED visit at site of index hospitalization within the past year. Prior hospitalization at any of 75 acute care hospitals in the North Texas region within the past year. Nonelective admission defined as hospitalization categorized as medical emergency, urgent, or trauma. ∥Calculated from diagnoses available within 1 year prior to index hospitalization. Conditions were considered complications if they were not listed as a principle diagnosis for hospitalization or as a previous diagnosis in the prior year. #On day of discharge or last known observation before discharge. Instabilities were defined as temperature 37.8C, heart rate >100 beats/minute, respiratory rate >24 breaths/minute, systolic blood pressure 90 mm Hg, or oxygen saturation <90%. **Discharges to nursing home, skilled nursing facility, or long‐term acute care hospital.

Demographic characteristics
Age, y, mean (SD) 62 (17.3) 61 (17.4) 64 (17.0) 0.001
Female, n (%) 17,715 (53.8) 7,694 (53.8) 1,163 (53.3) 0.72
Race/ethnicity 0.001
White 21,359 (64.9) 9,329 (65.2) 1,361 (62.4)
Black 5,964 (18.1) 2,520 (17.6) 434 (19.9)
Hispanic 4,452 (13.5) 1,931 (13.5) 338 (15.5)
Other 1,147 (3.5) 532 (3.7) 47 (2.2)
Marital status, n (%) 0.001
Single 8,076 (24.5) 3,516 (24.6) 514 (23.6)
Married 13,394 (40.7) 5,950 (41.6) 812 (37.3)
Separated/divorced 3,468 (10.5) 1,460 (10.2) 251 (11.5)
Widowed 4,487 (13.7) 1,868 (13.1) 388 (17.8)
Other 3,497 (10.6) 1,518 (10.6) 215 (9.9)
Primary payer, n (%) 0.001
Private 13,090 (39.8) 5,855 (40.9) 726 (33.3)
Medicare 13,015 (39.5) 5,597 (39.1) 987 (45.3)
Medicaid 2,204 (6.7) 852 (5.9) 242 (11.1)
Charity, self‐pay, or other 4,613 (14.0) 2,008 (14.0) 225 (10.3)
High‐poverty neighborhood, n (%)* 7,468 (22.7) 3,208 (22.4) 548 (25.1) 0.001
Utilization history
1 ED visits in past year, n (%) 9,299 (28.2) 3,793 (26.5) 823 (37.8) 0.001
1 hospitalizations in past year, n (%) 10,189 (30.9) 4,074 (28.5) 1,012 (46.4) 0.001
Clinical factors from first day of hospitalization
Nonelective admission, n (%) 27,818 (84.5) 11,960 (83.6) 1,960 (89.9) 0.001
Charlson Comorbidity Index, median (IQR)∥ 0 (01) 0 (00) 0 (03) 0.001
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 355 (1.1) 119 (0.8) 46 (2.1) 0.001
Albumin 23 g/dL 4,732 (14.4) 1,956 (13.7) 458 (21.0) 0.001
Aspartate aminotransferase >40 U/L 4,610 (14.0) 1,922 (13.4) 383 (17.6) 0.001
Creatine phosphokinase <60 g/L 3,728 (11.3) 1,536 (10.7) 330 (15.1) 0.001
Mean corpuscular volume >100 fL/red cell 1,346 (4.1) 537 (3.8) 134 (6.2) 0.001
Platelets <90 103/L 912 (2.8) 357 (2.5) 116 (5.3) 0.001
Platelets >350 103/L 3,332 (10.1) 1,433 (10.0) 283 (13.0) 0.001
Prothrombin time >35 seconds 248 (0.8) 90 (0.6) 35 (1.6) 0.001
Clinical factors from remainder of hospital stay
Length of stay, d, median (IQR) 4 (26) 4 (26) 5 (38) 0.001
ICU transfer after first 24 hours, n (%) 988 (3.0) 408 (2.9) 94 (4.3) 0.001
Hospital complications, n (%)
Clostridium difficile infection 119 (0.4) 44 (0.3) 24 (1.1) 0.001
Pressure ulcer 358 (1.1) 126 (0.9) 46 (2.1) 0.001
Venous thromboembolism 301 (0.9) 112 (0.8) 34 (1.6) 0.001
Respiratory failure 1,048 (3.2) 463 (3.2) 112 (5.1) 0.001
Central line‐associated bloodstream infection 22 (0.07) 6 (0.04) 5 (0.23) 0.005
Catheter‐associated urinary tract infection 47 (0.14) 20 (0.14) 6 (0.28) 0.15
Acute myocardial infarction 293 (0.9) 110 (0.8) 32 (1.5) 0.001
Pneumonia 1,754 (5.3) 719 (5.0) 154 (7.1) 0.001
Sepsis 853 (2.6) 368 (2.6) 73 (3.4) 0.04
Blood transfusion during hospitalization, n (%) 4,511 (13.7) 1,837 (12.8) 425 (19.5) 0.001
Laboratory abnormalities at discharge#
Blood urea nitrogen >20 mg/dL, n (%) 10,014 (30.4) 4,077 (28.5) 929 (42.6) 0.001
Sodium <135 mEq/L, n (%) 4,583 (13.9) 1,850 (12.9) 440 (20.2) 0.001
Hematocrit 27 3,104 (9.4) 1,231 (8.6) 287 (13.2) 0.001
1 vital sign instability at discharge, n (%)# 6,192 (18.8) 2,624 (18.3) 525 (24.1) 0.001
Discharge location, n (%) 0.001
Home 23,339 (70.9) 10,282 (71.8) 1,383 (63.4)
Home health 3,185 (9.7) 1,356 (9.5) 234 (10.7)
Postacute care** 5,990 (18.2) 2,496 (17.4) 549 (25.2)
Hospice 408 (1.2) 178 (1.2) 14 (0.6)

Derivation and Validation of the Full‐Stay EHR Model for 30‐Day Readmission

Our final model included 24 independent variables, including demographic characteristics, utilization history, clinical factors from the first day of admission, and clinical factors from the remainder of the hospital stay (Table 2). The strongest independent predictor of readmission was hospital‐acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI] 1.18‐3.48); other hospital‐acquired complications including pressure ulcers and venous thromboembolism were also significant predictors. Though having Medicaid was associated with increased odds of readmission (AOR: 1.55, 95% CI: 1.31‐1.83), other zip codelevel measures of socioeconomic disadvantage were not predictive and were not included in the final model. Being discharged to hospice was associated with markedly lower odds of readmission (AOR: 0.23, 95% CI: 0.13‐0.40).

Final Full‐Stay EHR Model Predicting 30‐Day Readmissions (Derivation Cohort, N = 16,492)
Odds Ratio (95% CI)
Univariate Multivariate*
  • NOTE: Abbreviations: CI, confidence interval; ED, emergency department. *Values shown reflect adjusted odds ratios and 95% CI for each factor after adjustment for all other factors listed in the table.

Demographic characteristics
Age, per 10 years 1.08 (1.051.11) 1.07 (1.041.10)
Medicaid 1.97 (1.702.29) 1.55 (1.311.83)
Widow 1.44 (1.281.63) 1.27 (1.111.45)
Utilization history
Prior ED visit, per visit 1.08 (1.061.10) 1.04 (1.021.06)
Prior hospitalization, per hospitalization 1.30 (1.271.34) 1.16 (1.121.20)
Hospital and clinical factors from first day of hospitalization
Nonelective admission 1.75 (1.512.03) 1.42 (1.221.65)
Charlson Comorbidity Index, per point 1.19 (1.171.21) 1.06 (1.041.09)
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 2.57 (1.823.62) 1.52 (1.052.21)
Albumin 23 g/dL 1.68 (1.501.88) 1.20 (1.061.36)
Aspartate aminotransferase >40 U/L 1.37 (1.221.55) 1.21 (1.061.38)
Creatine phosphokinase <60 g/L 1.48 (1.301.69) 1.28 (1.111.46)
Mean corpuscular volume >100 fL/red cell 1.68 (1.382.04) 1.32 (1.071.62)
Platelets <90 103/L 2.20 (1.772.72) 1.56 (1.231.97)
Platelets >350 103/L 1.34 (1.171.54) 1.24 (1.081.44)
Prothrombin time >35 seconds 2.58 (1.743.82) 1.92 (1.272.90)
Hospital and clinical factors from remainder of hospital stay
Length of stay, per day 1.08 (1.071.09) 1.06 (1.041.07)
Hospital complications
Clostridium difficile infection 3.61 (2.195.95) 2.03 (1.183.48)
Pressure ulcer 2.43 (1.733.41) 1.64 (1.152.34)
Venous thromboembolism 2.01 (1.362.96) 1.55 (1.032.32)
Laboratory abnormalities at discharge
Blood urea nitrogen >20 mg/dL 1.86 (1.702.04) 1.37 (1.241.52)
Sodium <135 mEq/L 1.70 (1.521.91) 1.34 (1.181.51)
Hematocrit 27 1.61 (1.401.85) 1.22 (1.051.41)
Vital sign instability at discharge, per instability 1.29 (1.201.40) 1.25 (1.151.36)
Discharged to hospice 0.51 (0.300.89) 0.23 (0.130.40)

In our validation cohort, the full‐stay EHR model had fair discrimination, with a C statistic of 0.69 (95% CI: 0.68‐0.70) (Table 3). The full‐stay EHR model was well calibrated across all quintiles of risk, with slight overestimation of predicted risk in the lowest and highest quintiles (Figure 1a) (see Supporting Table 5 in the online version of this article). It also effectively stratified individuals across a broad range of predicted readmission risk from 4.1% in the lowest decile to 36.5% in the highest decile (Table 3).

Comparison of the Discrimination and Reclassification of Different Readmission Models*
Model Name C‐Statistic (95% CI) IDI, % (95% CI) NRI (95% CI) Average Predicted Risk, %
Lowest Decile Highest Decile
  • NOTE: Abbreviations; CI, confidence interval; EHR, electronic health record; IDI, Integrated Discrimination Improvement; NRI, Net Reclassification Index. *All measures were assessed using the validation cohort (N = 16,430), except for estimating the C‐statistic for the derivation cohort. P value <0.001 for all pairwise comparisons of C‐statistic between full‐stay model and first‐day, LACE, and HOSPITAL models, respectively. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Full‐stay EHR model
Derivation cohort 0.72 (0.70 to 0.73) 4.1 36.5
Validation cohort 0.69 (0.68 to 0.70) [Reference] [Reference] 4.1 36.5
First‐day EHR model 0.67 (0.66 to 0.68) 1.2 (1.4 to 1.0) 0.020 (0.038 to 0.002) 5.8 31.9
LACE model 0.65 (0.64 to 0.66) 2.6 (2.9 to 2.3) 0.046 (0.067 to 0.024) 6.1 27.5
HOSPITAL model 0.64 (0.62 to 0.65) 3.2 (3.5 to 2.9) 0.058 (0.080 to 0.035) 6.7 26.6
Figure 1
Comparison of the calibration of different readmission models. Calibration graphs for full‐stay (a), first‐day (b), LACE (c), and HOSPITAL (d) models in the validation cohort. Each graph shows predicted probability compared to observed probability of readmission by quintiles of risk for each model. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Comparing the Performance of the Full‐Stay EHR Model to Other Models

The full‐stay EHR model had better discrimination compared to the first‐day EHR model and the LACE and HOSPITAL models, though the magnitude of improvement was modest (Table 3). The full‐stay EHR model also stratified individuals across a broader range of readmission risk, and was better able to discriminate and classify those in the highest quintile of risk from those in the lowest 4 quintiles of risk compared to other models as assessed by the IDI and NRI (Table 3) (see Supporting Tables 14 and Supporting Figure 2 in the online version of this article). In terms of model calibration, both the first‐day EHR and LACE models were also well calibrated, whereas the HOSPITAL model was less robust (Figure 1).

The diagnostic accuracy of the full‐stay EHR model in correctly predicting those in the highest quintile of risk was better than that of the first‐day, LACE, and HOSPITAL models, though overall improvements in the sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios were also modest (see Supporting Table 6 in the online version of this article).

DISCUSSION

In this study, we used clinically detailed EHR data from the entire hospitalization on 32,922 individuals treated in 6 diverse hospitals to develop an all‐payer, multicondition readmission risk‐prediction model. To our knowledge, this is the first 30‐day hospital readmission risk‐prediction model to use a comprehensive set of factors from EHR data from the entire hospital stay. Prior EHR‐based models have focused exclusively on data available on or prior to the first day of admission, which account for clinical severity on admission but do not account for factors uncovered during the inpatient stay that influence the chance of a postdischarge adverse outcome.[15, 30] We specifically assessed the prognostic impact of a comprehensive set of factors from the entire index hospitalization, including hospital‐acquired complications, clinical trajectory, and stability on discharge in predicting hospital readmissions. Our full‐stay EHR model had statistically better discrimination, calibration, and diagnostic accuracy than our existing all‐cause first‐day EHR model[15] and 2 previously published readmissions models that included more limited information from hospitalization (such as length of stay).[9, 10] However, although the more complicated full‐stay EHR model was statistically better than previously published models, we were surprised that the predictive performance was only modestly improved despite the inclusion of many additional clinically relevant prognostic factors.

Taken together, our study has several important implications. First, the added complexity and resource intensity of implementing a full‐stay EHR model yields only modestly improved readmission risk prediction. Thus, hospitals and healthcare systems interested in targeting their highest‐risk individuals for interventions to reduce 30‐day readmission should consider doing so within the first day of hospital admission. Our group's previously derived and validated first‐day EHR model, which used data only from the first day of admission, qualitatively performed nearly as well as the full‐stay EHR model.[15] Additionally, a recent study using only preadmission EHR data to predict 30‐day readmissions also achieved similar discrimination and diagnostic accuracy as our full‐stay model.[30]

Unplanned hospital readmissions are frequent, costly, and potentially avoidable.[1, 2] Due to major federal financial readmissions penalties targeting excessive 30‐day readmissions, there is increasing attention to implementing hospital‐initiated interventions to reduce readmissions.[3, 4] However, universal enrollment of all hospitalized patients into such programs may be too resource intensive for many hospitals.[5] To optimize efficiency and effectiveness, interventions should be targeted to individuals most likely to benefit.[6, 7] However, existing readmission risk‐prediction models have achieved only modest discrimination, have largely used administrative claims data not available until months after discharge, or are limited to only a subset of patients with Medicare or a specific clinical condition.[8, 9, 10, 11, 12, 13, 14] These limitations have precluded accurate identification of high‐risk individuals in an all‐payer general medical inpatient population to provide actionable information for intervention prior to discharge.

Approaches using electronic health record (EHR) data could allow early identification of high-risk patients during the index hospitalization to enable initiation of interventions prior to discharge. To date, such strategies have relied largely on EHR data from the day of admission.[15, 16] However, given that variation in 30-day readmission rates is thought to reflect the quality of in-hospital care, incorporating EHR data from the entire hospital stay to reflect hospital care processes and clinical trajectory may more accurately identify at-risk patients.[17, 18, 19, 20] Improved accuracy in risk prediction would help better target intervention efforts in the immediate postdischarge period, an interval characterized by heightened vulnerability for adverse events.[21]

To help hospitals target transitional care interventions more effectively to high‐risk individuals prior to discharge, we derived and validated a readmissions risk‐prediction model incorporating EHR data from the entire course of the index hospitalization, which we termed the full‐stay EHR model. We also compared the full‐stay EHR model performance to our group's previously derived prediction model based on EHR data on the day of admission, termed the first‐day EHR model, as well as to 2 other validated readmission models similarly intended to yield near real‐time risk predictions prior to or shortly after hospital discharge.[9, 10, 15]

METHODS

Study Design, Population, and Data Sources

We conducted an observational cohort study using EHR data from 6 hospitals in the Dallas–Fort Worth metroplex between November 1, 2009 and October 30, 2010; all sites used the same EHR system (Epic Systems Corp., Verona, WI). One site was a university-affiliated safety net hospital; the remaining 5 sites were teaching and nonteaching community sites.

We included consecutive hospitalizations among adults ≥18 years old discharged alive from any medicine inpatient service. For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who died during the index hospitalization, were transferred to another acute care facility, left against medical advice, or who died outside of the hospital within 30 days of discharge. For model derivation, we randomly split the sample into separate derivation (50%) and validation cohorts (50%).

Outcomes

The primary outcome was 30-day hospital readmission, defined as a nonelective hospitalization within 30 days of discharge to any of 75 acute care hospitals within a 100-mile radius of Dallas, ascertained from an all-payer regional hospitalization database. Nonelective hospitalizations included all hospitalizations classified as emergency, urgent, or trauma, and excluded those classified as elective as per the Centers for Medicare and Medicaid Services Claim Inpatient Admission Type Code definitions.
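Operationally, this outcome definition reduces to linking each index discharge to any nonelective admission, at any of the regional hospitals, occurring within 30 days. The following is a minimal illustrative sketch of that linkage in Python/pandas; it is not the study's code, and the table schema and column names (stay_id, patient_id, admit_type, and so on) are hypothetical.

```python
import pandas as pd

# Hypothetical schema (not the study's actual database):
#   index_stays:    stay_id, patient_id, discharge_date
#   all_admissions: patient_id, admit_date, admit_type (CMS admission type label)
NONELECTIVE = {"emergency", "urgent", "trauma"}

def flag_30day_readmissions(index_stays: pd.DataFrame,
                            all_admissions: pd.DataFrame) -> pd.Series:
    """True for each index stay followed by a nonelective admission
    at any regional hospital within 30 days of discharge."""
    candidates = all_admissions[
        all_admissions["admit_type"].str.lower().isin(NONELECTIVE)]
    merged = index_stays.merge(candidates, on="patient_id", how="left")
    gap_days = (merged["admit_date"] - merged["discharge_date"]).dt.days
    # Same-day (day 0) events are treated here as transfers rather than
    # readmissions; this handling is an assumption, not a detail from the text.
    merged["readmit"] = (gap_days >= 1) & (gap_days <= 30)
    flags = merged.groupby("stay_id")["readmit"].any()
    return flags.reindex(index_stays["stay_id"])  # preserve original stay order
```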

Predictor Variables for the Full‐Stay EHR Model

The full-stay EHR model was iteratively developed from our group's previously derived and validated risk-prediction model using EHR data available on admission (first-day EHR model).[15] For the full-stay EHR model, we included all predictor variables included in our published first-day EHR model as candidate risk factors. Based on prior literature, we additionally expanded candidate predictors available on admission to include marital status (proxy for social isolation) and socioeconomic disadvantage (percent poverty, unemployment, median income, and educational attainment by zip code of residence as proxy measures of the social and built environment).[22, 23, 24, 25, 26, 27] We also expanded the ascertainment of prior hospitalization to include admissions at both the index hospital and any of 75 acute care hospitals captured in the same all-payer regional hospitalization database used to ascertain 30-day readmissions.

Candidate predictors from the remainder of the hospital stay (ie, following the first 24 hours of admission) were included if they were: (1) available in the EHR of all participating hospitals, (2) routinely collected or available at the time of hospital discharge, and (3) plausible predictors of adverse outcomes based on prior literature and clinical expertise. These included length of stay, in‐hospital complications, transfer to an intensive or coronary care unit, blood transfusions, vital sign instabilities within 24 hours of discharge, select laboratory values at time of discharge, and disposition status. We also assessed trajectories of vital signs and selected laboratory values (defined as changes in these measures from admission to discharge).
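As a concrete illustration, the trajectory features described above are discharge-minus-admission deltas. A minimal sketch follows; the variable list and the paired column naming convention are hypothetical, not taken from the study's data dictionary.

```python
import pandas as pd

# Hypothetical wide table with paired admission/discharge measurements,
# e.g. sodium_admit, sodium_discharge, heart_rate_admit, heart_rate_discharge, ...
TRAJECTORY_VARS = ["sodium", "bun", "hematocrit", "heart_rate", "resp_rate"]

def add_trajectory_features(df: pd.DataFrame) -> pd.DataFrame:
    """Append <var>_delta columns: discharge value minus admission value.
    Positive deltas indicate a rise over the course of the hospitalization."""
    out = df.copy()
    for var in TRAJECTORY_VARS:
        out[f"{var}_delta"] = out[f"{var}_discharge"] - out[f"{var}_admit"]
    return out
```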

Statistical Analysis

Model Derivation

Univariate relationships between readmission and each of the candidate predictors were assessed in the derivation cohort using a prespecified significance threshold of 0.05. We included all factors from our previously derived and validated first-day EHR model as candidate predictors.[15] Continuous laboratory and vital sign values at the time of discharge were categorized based on clinically meaningful cutoffs; predictors with missing values were assumed to be normal (<1% missing for each variable). Significant univariate candidate variables were entered into a multivariate logistic regression model using stepwise backward selection with a prespecified significance threshold of 0.05. We performed several sensitivity analyses to confirm the robustness of our model. First, we alternately derived the full-stay model using stepwise forward selection. Second, we forced in all significant variables from our first-day EHR model, and entered the candidate variables from the remainder of the hospital stay using both stepwise backward and forward selection separately. Third, prespecified interactions between variables were evaluated for inclusion. Though final predictors varied slightly between the different approaches, discrimination of each model was similar to the model derived using our primary analytic approach (C statistics within 0.01 of each other, data not shown).
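The analyses were performed in Stata, but the two-step selection procedure described above (a univariate screen at the 0.05 threshold followed by backward elimination at the same threshold) can be sketched in Python with statsmodels as follows. This is an illustrative approximation, not the authors' code, and it omits the study's categorization and missing-value rules.

```python
import pandas as pd
import statsmodels.api as sm

P_ENTER = 0.05  # univariate screening threshold
P_STAY = 0.05   # backward-elimination threshold

def univariate_screen(X: pd.DataFrame, y: pd.Series) -> list:
    """Keep candidate predictors whose univariate logistic P value is below P_ENTER."""
    keep = []
    for col in X.columns:
        fit = sm.Logit(y, sm.add_constant(X[[col]])).fit(disp=0)
        if fit.pvalues[col] < P_ENTER:
            keep.append(col)
    return keep

def backward_select(X: pd.DataFrame, y: pd.Series):
    """Refit repeatedly, dropping the least significant predictor,
    until every remaining predictor has P below P_STAY."""
    cols = list(X.columns)
    while cols:
        fit = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < P_STAY:
            return fit
        cols.remove(worst)
    return None  # no predictor survived elimination
```

The screened candidates feed into backward_select, mirroring the two-step procedure described in the text; the forward-selection and forced-entry sensitivity analyses would reuse the same fitting loop with a different update rule.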

Model Validation

We assessed model discrimination and calibration of the derived full‐stay EHR model using the validation cohort. Model discrimination was estimated by the C statistic. The C statistic represents the probability that, given 2 hospitalized individuals (1 who was readmitted and the other who was not), the model will predict a higher risk for the readmitted patient than for the nonreadmitted patient. Model calibration was assessed by comparing predicted to observed probabilities of readmission by quintiles of risk, and with the Hosmer‐Lemeshow goodness‐of‐fit test.
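For readers less familiar with these measures, a minimal sketch of the validation metrics is shown below: the C statistic for a binary outcome equals the area under the ROC curve, calibration is summarized by comparing mean predicted with observed risk within quantiles, and the Hosmer-Lemeshow statistic is a chi-square over those groups. Function names are ours and the implementation is illustrative rather than the study's.

```python
import pandas as pd
from scipy import stats
from sklearn.metrics import roc_auc_score

def c_statistic(y_true, y_prob) -> float:
    """For a binary outcome, the C statistic equals the area under the ROC curve."""
    return roc_auc_score(y_true, y_prob)

def calibration_by_quantile(y_true, y_prob, q: int = 5) -> pd.DataFrame:
    """Mean predicted vs. observed readmission rate within predicted-risk quantiles."""
    df = pd.DataFrame({"y": y_true, "p": y_prob})
    df["bin"] = pd.qcut(df["p"], q=q, labels=False, duplicates="drop")
    return df.groupby("bin").agg(predicted=("p", "mean"),
                                 observed=("y", "mean"),
                                 n=("y", "size"))

def hosmer_lemeshow(y_true, y_prob, g: int = 10):
    """Hosmer-Lemeshow goodness-of-fit chi-square with g - 2 degrees of freedom."""
    tab = calibration_by_quantile(y_true, y_prob, q=g)
    obs = tab["observed"] * tab["n"]          # observed events per group
    exp = tab["predicted"] * tab["n"]         # expected events per group
    chi2 = (((obs - exp) ** 2) / (exp * (1 - tab["predicted"]))).sum()
    dof = len(tab) - 2
    return chi2, stats.chi2.sf(chi2, dof)     # statistic and P value
```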

Comparison to Existing Models

We compared the full-stay EHR model performance to 3 previously published models: our group's first-day EHR model, and the LACE (includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year) and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay) models, which were both derived to predict 30-day readmissions among general medical inpatients and were intended to help clinicians identify high-risk patients to target for discharge interventions.[9, 10, 15] We assessed each model's performance in our validation cohort, calculating the C statistic, integrated discrimination index (IDI), and net reclassification index (NRI) compared to the full-stay model. IDI is a summary measure of both discrimination and reclassification, where more positive values suggest improvement in model performance in both these domains compared to a reference model.[28] The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.[29] The theoretical range of values is -2 to 2, with more positive values indicating improved net reclassification compared to a reference model. Here, we calculated a category-based NRI to evaluate the performance of models in correctly classifying individuals with and without readmissions into the highest readmission risk quintile versus the lowest 4 risk quintiles compared to the full-stay EHR model.[29] This prespecified cutoff is relevant for hospitals interested in identifying the highest-risk individuals for targeted intervention.[6] Because some hospitals may be able to target a greater number of individuals for intervention, we performed a sensitivity analysis by assessing category-based NRI for reclassification into the top 2 risk quintiles versus the lowest 3 risk quintiles and found no meaningful difference in our results (data not shown). Finally, we qualitatively assessed calibration of comparator models in our validation cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, TX). This study was approved by the UT Southwestern Medical Center institutional review board.
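A hedged sketch of the IDI and the two-category NRI, following the cited methodological papers, is shown below; "high risk" is taken as each model's own top quintile of predicted risk, mirroring the prespecified cutoff described above. The code is illustrative rather than the study's.

```python
import numpy as np

def idi(y, p_new, p_old) -> float:
    """Integrated discrimination improvement: the change (new minus old) in the
    mean predicted-risk separation between events and nonevents."""
    y = np.asarray(y, dtype=bool)
    p_new, p_old = np.asarray(p_new, dtype=float), np.asarray(p_old, dtype=float)
    return (p_new[y].mean() - p_new[~y].mean()) - (p_old[y].mean() - p_old[~y].mean())

def category_nri(y, p_new, p_old, quantile: float = 0.8) -> float:
    """Two-category NRI where 'high risk' means falling in each model's own top
    (1 - quantile) fraction, e.g. the top risk quintile for quantile = 0.8."""
    y = np.asarray(y, dtype=bool)
    p_new, p_old = np.asarray(p_new, dtype=float), np.asarray(p_old, dtype=float)
    high_new = p_new >= np.quantile(p_new, quantile)
    high_old = p_old >= np.quantile(p_old, quantile)
    up = high_new & ~high_old    # reclassified upward by the new model
    down = ~high_new & high_old  # reclassified downward by the new model
    return (up[y].mean() - down[y].mean()) + (down[~y].mean() - up[~y].mean())
```

Because the full-stay model serves as the reference in the comparisons reported below, a negative IDI or NRI for a comparator model indicates worse discrimination or reclassification than the full-stay model.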

RESULTS

Overall, 32,922 index hospitalizations were included in our study cohort; 12.7% resulted in a 30‐day readmission (see Supporting Figure 1 in the online version of this article). Individuals had a mean age of 62 years and had diverse race/ethnicity and primary insurance status; half were female (Table 1). The study sample was randomly split into a derivation cohort (50%, n = 16,492) and validation cohort (50%, n = 16,430). Individuals in the derivation cohort with a 30‐day readmission had markedly different socioeconomic and clinical characteristics compared to those not readmitted (Table 1).

Baseline Characteristics and Candidate Variables for Risk‐Prediction Model
Columns: Entire Cohort (N = 32,922); Derivation Cohort (N = 16,492), subdivided into No Readmission (N = 14,312) and Readmission (N = 2,180); P value (readmission vs. no readmission, derivation cohort)
  • NOTE: Abbreviations: ED, emergency department; ICU, intensive care unit; IQR, interquartile range; SD, standard deviation. *≥20% poverty in zip code as per high poverty area US Census designation. Prior ED visit at site of index hospitalization within the past year. Prior hospitalization at any of 75 acute care hospitals in the North Texas region within the past year. Nonelective admission defined as hospitalization categorized as medical emergency, urgent, or trauma. ∥Calculated from diagnoses available within 1 year prior to index hospitalization. Conditions were considered complications if they were not listed as a principal diagnosis for hospitalization or as a previous diagnosis in the prior year. #On day of discharge or last known observation before discharge. Instabilities were defined as temperature ≥37.8°C, heart rate >100 beats/minute, respiratory rate >24 breaths/minute, systolic blood pressure ≤90 mm Hg, or oxygen saturation <90%. **Discharges to nursing home, skilled nursing facility, or long-term acute care hospital.

Demographic characteristics
Age, y, mean (SD) 62 (17.3) 61 (17.4) 64 (17.0) <0.001
Female, n (%) 17,715 (53.8) 7,694 (53.8) 1,163 (53.3) 0.72
Race/ethnicity <0.001
White 21,359 (64.9) 9,329 (65.2) 1,361 (62.4)
Black 5,964 (18.1) 2,520 (17.6) 434 (19.9)
Hispanic 4,452 (13.5) 1,931 (13.5) 338 (15.5)
Other 1,147 (3.5) 532 (3.7) 47 (2.2)
Marital status, n (%) <0.001
Single 8,076 (24.5) 3,516 (24.6) 514 (23.6)
Married 13,394 (40.7) 5,950 (41.6) 812 (37.3)
Separated/divorced 3,468 (10.5) 1,460 (10.2) 251 (11.5)
Widowed 4,487 (13.7) 1,868 (13.1) 388 (17.8)
Other 3,497 (10.6) 1,518 (10.6) 215 (9.9)
Primary payer, n (%) <0.001
Private 13,090 (39.8) 5,855 (40.9) 726 (33.3)
Medicare 13,015 (39.5) 5,597 (39.1) 987 (45.3)
Medicaid 2,204 (6.7) 852 (5.9) 242 (11.1)
Charity, self‐pay, or other 4,613 (14.0) 2,008 (14.0) 225 (10.3)
High‐poverty neighborhood, n (%)* 7,468 (22.7) 3,208 (22.4) 548 (25.1) 0.001
Utilization history
≥1 ED visits in past year, n (%) 9,299 (28.2) 3,793 (26.5) 823 (37.8) <0.001
≥1 hospitalizations in past year, n (%) 10,189 (30.9) 4,074 (28.5) 1,012 (46.4) <0.001
Clinical factors from first day of hospitalization
Nonelective admission, n (%) 27,818 (84.5) 11,960 (83.6) 1,960 (89.9) <0.001
Charlson Comorbidity Index, median (IQR)∥ 0 (0–1) 0 (0–0) 0 (0–3) <0.001
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 355 (1.1) 119 (0.8) 46 (2.1) <0.001
Albumin 2–3 g/dL 4,732 (14.4) 1,956 (13.7) 458 (21.0) <0.001
Aspartate aminotransferase >40 U/L 4,610 (14.0) 1,922 (13.4) 383 (17.6) <0.001
Creatine phosphokinase <60 g/L 3,728 (11.3) 1,536 (10.7) 330 (15.1) <0.001
Mean corpuscular volume >100 fL/red cell 1,346 (4.1) 537 (3.8) 134 (6.2) <0.001
Platelets <90 ×10³/μL 912 (2.8) 357 (2.5) 116 (5.3) <0.001
Platelets >350 ×10³/μL 3,332 (10.1) 1,433 (10.0) 283 (13.0) <0.001
Prothrombin time >35 seconds 248 (0.8) 90 (0.6) 35 (1.6) <0.001
Clinical factors from remainder of hospital stay
Length of stay, d, median (IQR) 4 (2–6) 4 (2–6) 5 (3–8) <0.001
ICU transfer after first 24 hours, n (%) 988 (3.0) 408 (2.9) 94 (4.3) <0.001
Hospital complications, n (%)
Clostridium difficile infection 119 (0.4) 44 (0.3) 24 (1.1) <0.001
Pressure ulcer 358 (1.1) 126 (0.9) 46 (2.1) <0.001
Venous thromboembolism 301 (0.9) 112 (0.8) 34 (1.6) <0.001
Respiratory failure 1,048 (3.2) 463 (3.2) 112 (5.1) <0.001
Central line‐associated bloodstream infection 22 (0.07) 6 (0.04) 5 (0.23) 0.005
Catheter‐associated urinary tract infection 47 (0.14) 20 (0.14) 6 (0.28) 0.15
Acute myocardial infarction 293 (0.9) 110 (0.8) 32 (1.5) 0.001
Pneumonia 1,754 (5.3) 719 (5.0) 154 (7.1) <0.001
Sepsis 853 (2.6) 368 (2.6) 73 (3.4) 0.04
Blood transfusion during hospitalization, n (%) 4,511 (13.7) 1,837 (12.8) 425 (19.5) <0.001
Laboratory abnormalities at discharge#
Blood urea nitrogen >20 mg/dL, n (%) 10,014 (30.4) 4,077 (28.5) 929 (42.6) <0.001
Sodium <135 mEq/L, n (%) 4,583 (13.9) 1,850 (12.9) 440 (20.2) <0.001
Hematocrit ≤27% 3,104 (9.4) 1,231 (8.6) 287 (13.2) <0.001
≥1 vital sign instability at discharge, n (%)# 6,192 (18.8) 2,624 (18.3) 525 (24.1) <0.001
Discharge location, n (%) <0.001
Home 23,339 (70.9) 10,282 (71.8) 1,383 (63.4)
Home health 3,185 (9.7) 1,356 (9.5) 234 (10.7)
Postacute care** 5,990 (18.2) 2,496 (17.4) 549 (25.2)
Hospice 408 (1.2) 178 (1.2) 14 (0.6)

Derivation and Validation of the Full‐Stay EHR Model for 30‐Day Readmission

Our final model included 24 independent variables, including demographic characteristics, utilization history, clinical factors from the first day of admission, and clinical factors from the remainder of the hospital stay (Table 2). The strongest independent predictor of readmission was hospital-acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI] 1.18-3.48); other hospital-acquired complications including pressure ulcers and venous thromboembolism were also significant predictors. Though having Medicaid was associated with increased odds of readmission (AOR: 1.55, 95% CI: 1.31-1.83), other zip code-level measures of socioeconomic disadvantage were not predictive and were not included in the final model. Being discharged to hospice was associated with markedly lower odds of readmission (AOR: 0.23, 95% CI: 0.13-0.40).

Final Full‐Stay EHR Model Predicting 30‐Day Readmissions (Derivation Cohort, N = 16,492)
Columns: Univariate Odds Ratio (95% CI); Multivariate Odds Ratio (95% CI)*
  • NOTE: Abbreviations: CI, confidence interval; ED, emergency department. *Values shown reflect adjusted odds ratios and 95% CI for each factor after adjustment for all other factors listed in the table.

Demographic characteristics
Age, per 10 years 1.08 (1.05–1.11) 1.07 (1.04–1.10)
Medicaid 1.97 (1.70–2.29) 1.55 (1.31–1.83)
Widow 1.44 (1.28–1.63) 1.27 (1.11–1.45)
Utilization history
Prior ED visit, per visit 1.08 (1.06–1.10) 1.04 (1.02–1.06)
Prior hospitalization, per hospitalization 1.30 (1.27–1.34) 1.16 (1.12–1.20)
Hospital and clinical factors from first day of hospitalization
Nonelective admission 1.75 (1.51–2.03) 1.42 (1.22–1.65)
Charlson Comorbidity Index, per point 1.19 (1.17–1.21) 1.06 (1.04–1.09)
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 2.57 (1.82–3.62) 1.52 (1.05–2.21)
Albumin 2–3 g/dL 1.68 (1.50–1.88) 1.20 (1.06–1.36)
Aspartate aminotransferase >40 U/L 1.37 (1.22–1.55) 1.21 (1.06–1.38)
Creatine phosphokinase <60 g/L 1.48 (1.30–1.69) 1.28 (1.11–1.46)
Mean corpuscular volume >100 fL/red cell 1.68 (1.38–2.04) 1.32 (1.07–1.62)
Platelets <90 ×10³/μL 2.20 (1.77–2.72) 1.56 (1.23–1.97)
Platelets >350 ×10³/μL 1.34 (1.17–1.54) 1.24 (1.08–1.44)
Prothrombin time >35 seconds 2.58 (1.74–3.82) 1.92 (1.27–2.90)
Hospital and clinical factors from remainder of hospital stay
Length of stay, per day 1.08 (1.07–1.09) 1.06 (1.04–1.07)
Hospital complications
Clostridium difficile infection 3.61 (2.19–5.95) 2.03 (1.18–3.48)
Pressure ulcer 2.43 (1.73–3.41) 1.64 (1.15–2.34)
Venous thromboembolism 2.01 (1.36–2.96) 1.55 (1.03–2.32)
Laboratory abnormalities at discharge
Blood urea nitrogen >20 mg/dL 1.86 (1.70–2.04) 1.37 (1.24–1.52)
Sodium <135 mEq/L 1.70 (1.52–1.91) 1.34 (1.18–1.51)
Hematocrit ≤27% 1.61 (1.40–1.85) 1.22 (1.05–1.41)
Vital sign instability at discharge, per instability 1.29 (1.20–1.40) 1.25 (1.15–1.36)
Discharged to hospice 0.51 (0.30–0.89) 0.23 (0.13–0.40)

In our validation cohort, the full‐stay EHR model had fair discrimination, with a C statistic of 0.69 (95% CI: 0.68‐0.70) (Table 3). The full‐stay EHR model was well calibrated across all quintiles of risk, with slight overestimation of predicted risk in the lowest and highest quintiles (Figure 1a) (see Supporting Table 5 in the online version of this article). It also effectively stratified individuals across a broad range of predicted readmission risk from 4.1% in the lowest decile to 36.5% in the highest decile (Table 3).

Comparison of the Discrimination and Reclassification of Different Readmission Models*
Columns: Model Name; C-Statistic (95% CI); IDI, % (95% CI); NRI (95% CI); Average Predicted Risk, % in the Lowest and Highest Deciles
  • NOTE: Abbreviations: CI, confidence interval; EHR, electronic health record; IDI, Integrated Discrimination Improvement; NRI, Net Reclassification Index. *All measures were assessed using the validation cohort (N = 16,430), except for estimating the C-statistic for the derivation cohort. P value <0.001 for all pairwise comparisons of C-statistic between full-stay model and first-day, LACE, and HOSPITAL models, respectively. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Full‐stay EHR model
Derivation cohort 0.72 (0.70 to 0.73) 4.1 36.5
Validation cohort 0.69 (0.68 to 0.70) [Reference] [Reference] 4.1 36.5
First-day EHR model 0.67 (0.66 to 0.68) -1.2 (-1.4 to -1.0) -0.020 (-0.038 to -0.002) 5.8 31.9
LACE model 0.65 (0.64 to 0.66) -2.6 (-2.9 to -2.3) -0.046 (-0.067 to -0.024) 6.1 27.5
HOSPITAL model 0.64 (0.62 to 0.65) -3.2 (-3.5 to -2.9) -0.058 (-0.080 to -0.035) 6.7 26.6
Figure 1
Comparison of the calibration of different readmission models. Calibration graphs for full‐stay (a), first‐day (b), LACE (c), and HOSPITAL (d) models in the validation cohort. Each graph shows predicted probability compared to observed probability of readmission by quintiles of risk for each model. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Comparing the Performance of the Full‐Stay EHR Model to Other Models

The full-stay EHR model had better discrimination compared to the first-day EHR model and the LACE and HOSPITAL models, though the magnitude of improvement was modest (Table 3). The full-stay EHR model also stratified individuals across a broader range of readmission risk, and was better able to discriminate and classify those in the highest quintile of risk from those in the lowest 4 quintiles of risk compared to other models as assessed by the IDI and NRI (Table 3) (see Supporting Tables 1–4 and Supporting Figure 2 in the online version of this article). In terms of model calibration, both the first-day EHR and LACE models were also well calibrated, whereas the HOSPITAL model was less robust (Figure 1).

The diagnostic accuracy of the full‐stay EHR model in correctly predicting those in the highest quintile of risk was better than that of the first‐day, LACE, and HOSPITAL models, though overall improvements in the sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios were also modest (see Supporting Table 6 in the online version of this article).
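For reference, these diagnostic accuracy measures follow directly from the 2x2 classification obtained by labeling the top risk quintile as "test positive." A minimal illustrative sketch (not the study's code) is shown below.

```python
import numpy as np

def diagnostic_accuracy(y, p, quantile: float = 0.8) -> dict:
    """Sensitivity, specificity, predictive values, and likelihood ratios when
    'test positive' means a predicted risk in the model's top quintile."""
    y = np.asarray(y, dtype=bool)
    p = np.asarray(p, dtype=float)
    positive = p >= np.quantile(p, quantile)
    tp = (positive & y).sum()
    fp = (positive & ~y).sum()
    fn = (~positive & y).sum()
    tn = (~positive & ~y).sum()
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_positive": sens / (1 - spec),
        "lr_negative": (1 - sens) / spec,
    }
```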

DISCUSSION

In this study, we used clinically detailed EHR data from the entire hospitalization on 32,922 individuals treated in 6 diverse hospitals to develop an all‐payer, multicondition readmission risk‐prediction model. To our knowledge, this is the first 30‐day hospital readmission risk‐prediction model to use a comprehensive set of factors from EHR data from the entire hospital stay. Prior EHR‐based models have focused exclusively on data available on or prior to the first day of admission, which account for clinical severity on admission but do not account for factors uncovered during the inpatient stay that influence the chance of a postdischarge adverse outcome.[15, 30] We specifically assessed the prognostic impact of a comprehensive set of factors from the entire index hospitalization, including hospital‐acquired complications, clinical trajectory, and stability on discharge in predicting hospital readmissions. Our full‐stay EHR model had statistically better discrimination, calibration, and diagnostic accuracy than our existing all‐cause first‐day EHR model[15] and 2 previously published readmissions models that included more limited information from hospitalization (such as length of stay).[9, 10] However, although the more complicated full‐stay EHR model was statistically better than previously published models, we were surprised that the predictive performance was only modestly improved despite the inclusion of many additional clinically relevant prognostic factors.

Taken together, our study has several important implications. First, the added complexity and resource intensity of implementing a full‐stay EHR model yields only modestly improved readmission risk prediction. Thus, hospitals and healthcare systems interested in targeting their highest‐risk individuals for interventions to reduce 30‐day readmission should consider doing so within the first day of hospital admission. Our group's previously derived and validated first‐day EHR model, which used data only from the first day of admission, qualitatively performed nearly as well as the full‐stay EHR model.[15] Additionally, a recent study using only preadmission EHR data to predict 30‐day readmissions also achieved similar discrimination and diagnostic accuracy as our full‐stay model.[30]

Second, the field of readmissions risk-prediction modeling may be reaching the maximum achievable model performance using data that are currently available in the EHR. Our limited ability to accurately predict all-cause 30-day readmission risk may reflect the influence of currently unmeasured patient, system, and community factors on readmissions.[31, 32, 33] Due to the constraints of data collected in the EHR, we were unable to include several patient-level clinical characteristics associated with hospital readmission, including self-perceived health status, functional impairment, and cognition.[33, 34, 35, 36] However, given their modest effect sizes (ORs ranging from 1.06 to 2.10), adequately measuring and including these risk factors in our model may not meaningfully improve model performance and diagnostic accuracy. Further, many social and behavioral patient-level factors are also not consistently available in EHR data. Though we explored the role of several neighborhood-level socioeconomic measures (including prevalence of poverty, median income, education, and unemployment), we found that none were significantly associated with 30-day readmissions. These particular measures may have been inadequate to characterize individual-level social and behavioral factors, as several previous studies have demonstrated that patient-level factors such as social support, substance abuse, and medication and visit adherence can influence readmission risk in heart failure and pneumonia.[11, 16, 22, 25] This underscores the need for more standardized routine collection of data across functional, social, and behavioral domains in clinical settings, as recently championed by the Institute of Medicine.[11, 37] Integrating data from outside the EHR on postdischarge health behaviors, self-management, follow-up care, recovery, and home environment may be another important but untapped strategy for further improving prediction of readmissions.[25, 38]

Third, a multicondition readmission risk‐prediction model may be a less effective strategy than more customized disease‐specific models for selected conditions associated with high 30‐day readmission rates. Our group's previously derived and internally validated models for heart failure and human immunodeficiency virus had superior discrimination compared to our full‐stay EHR model (C statistic of 0.72 for each).[11, 13] However, given differences in the included population and time periods studied, a head‐to‐head comparison of these different strategies is needed to assess differences in model performance and utility.

Our study had several strengths. To our knowledge, this is the first study to rigorously measure the additive influence of in‐hospital complications, clinical trajectory, and stability on discharge on the risk of 30‐day hospital readmission. Additionally, our study included a large, diverse study population that included all payers, all ages of adults, a mix of community, academic, and safety net hospitals, and individuals from a broad array of racial/ethnic and socioeconomic backgrounds.

Our results should be interpreted in light of several limitations. First, though we sought to represent a diverse group of hospitals, all study sites were located within north Texas and generalizability to other regions is uncertain. Second, our ascertainment of prior hospitalizations and readmissions was more inclusive than what could be typically accomplished in real time using only EHR data from a single clinical site. We performed a sensitivity analysis using only prior utilization data available within the EHR from the index hospital with no meaningful difference in our findings (data not shown). Additionally, a recent study found that 30‐day readmissions occur at the index hospital for over 75% of events, suggesting that 30‐day readmissions are fairly comprehensively captured even with only single‐site data.[39] Third, we were not able to include data on outpatient visits before or after the index hospitalization, which may influence the risk of readmission.[1, 40]

In conclusion, incorporating clinically granular EHR data from the entire course of hospitalization modestly improves prediction of 30‐day readmissions compared to models that only include information from the first 24 hours of hospital admission or models that use far fewer variables. However, given the limited improvement in prediction, our findings suggest that from the practical perspective of implementing real‐time models to identify those at highest risk for readmission, it may not be worth the added complexity of waiting until the end of a hospitalization to leverage additional data on hospital complications, and the trajectory of laboratory and vital sign values currently available in the EHR. Further improvement in prediction of readmissions will likely require accounting for psychosocial, functional, behavioral, and postdischarge factors not currently present in the inpatient EHR.

Disclosures: This study was presented at the Society of Hospital Medicine 2015 Annual Meeting in National Harbor, Maryland, and the Society of General Internal Medicine 2015 Annual Meeting in Toronto, Canada. This work was supported by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient-Centered Outcomes Research (1R24HS022418-01) and the Commonwealth Foundation (#20100323). Drs. Nguyen and Makam received funding from the UT Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). Dr. Halm was also supported in part by NIH/NCATS U54 RFA-TR-12-006. The study sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The authors have no conflicts of interest to disclose.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418–1428.
  2. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
  3. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital-initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433–440.
  4. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
  5. Rennke S, Shoeb MH, Nguyen OK, Magan Y, Wachter RM, Ranji SR. Interventions to Improve Care Transitions at Hospital Discharge. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
  6. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998–1005.
  7. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148–1154.
  8. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
  9. Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551–557.
  10. Donze J, Aujesky D, Williams D, Schnipper JL. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–638.
  11. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
  12. Singal AG, Rahimi RS, Clark C, et al. An automated model using electronic medical record data identifies patients with cirrhosis at high risk for readmission. Clin Gastroenterol Hepatol. 2013;11(10):1335–1341.e1331.
  13. Nijhawan AE, Clark C, Kaplan R, Moore B, Halm EA, Amarasingham R. An electronic medical record-based model to predict 30-day risk of readmission and death among HIV-infected inpatients. J Acquir Immune Defic Syndr. 2012;61(3):349–358.
  14. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital-wide performance on 30-day unplanned readmission. Ann Intern Med. 2014;161(10 suppl):S66–S75.
  15. Amarasingham R, Velasco F, Xie B, et al. Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15(1):39.
  16. Watson AJ, O'Rourke J, Jethwani K, et al. Linking electronic health record-extracted psychosocial data in real-time to risk of readmission for heart failure. Psychosomatics. 2011;52(4):319–327.
  17. Ashton CM, Wray NP. A conceptual framework for the study of early readmission as an indicator of quality of care. Soc Sci Med. 1996;43(11):1533–1541.
  18. Dharmarajan K, Hsieh AF, Lin Z, et al. Hospital readmission performance and patterns of readmission: retrospective cohort study of Medicare admissions. BMJ. 2013;347:f6571.
  19. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371(23):2145–2147.
  20. Bradley EH, Sipsma H, Horwitz LI, et al. Hospital strategy uptake and reductions in unplanned readmission rates for patients with heart failure: a prospective study. J Gen Intern Med. 2015;30(5):605–611.
  21. Krumholz HM. Post-hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–102.
  22. Calvillo-King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269–282.
  23. Keyhani S, Myers LJ, Cheng E, Hebert P, Williams LS, Bravata DM. Effect of clinical and social risk factors on hospital profiling for stroke readmission: a cohort study. Ann Intern Med. 2014;161(11):775–784.
  24. Kind AJ, Jencks S, Brock J, et al. Neighborhood socioeconomic disadvantage and 30-day rehospitalization: a retrospective cohort study. Ann Intern Med. 2014;161(11):765–774.
  25. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community-dwelling Medicare beneficiaries. Gerontologist. 2008;48(4):495–504.
  26. Hu J, Gonsahn MD, Nerenz DR. Socioeconomic status and readmissions: evidence from an urban teaching hospital. Health Aff (Millwood). 2014;33(5):778–785.
  27. Nagasako EM, Reidhead M, Waterman B, Dunagan WC. Adding socioeconomic data to hospital readmissions calculations may produce more useful results. Health Aff (Millwood). 2014;33(5):786–791.
  28. Pencina MJ, D'Agostino RB, D'Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157–172; discussion 207–212.
  29. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician's guide. Ann Intern Med. 2014;160(2):122–131.
  30. Shadmi E, Flaks-Manov N, Hoshen M, Goldman O, Bitterman H, Balicer RD. Predicting 30-day readmissions with preadmission electronic health record data. Med Care. 2015;53(3):283–289.
  31. Kangovi S, Grande D. Hospital readmissions—not just a measure of quality. JAMA. 2011;306(16):1796–1797.
  32. Joynt KE, Jha AK. Thirty-day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
  33. Greysen SR, Stijacic Cenzer I, Auerbach AD, Covinsky KE. Functional impairment and hospital readmission in Medicare seniors. JAMA Intern Med. 2015;175(4):559–565.
  34. Holloway JJ, Thomas JW, Shapiro L. Clinical and sociodemographic risk factors for readmission of Medicare beneficiaries. Health Care Financ Rev. 1988;10(1):27–36.
  35. Patel A, Parikh R, Howell EH, Hsich E, Landers SH, Gorodeski EZ. Mini-cog performance: novel marker of post discharge risk among patients hospitalized for heart failure. Circ Heart Fail. 2015;8(1):8–16.
  36. Hoyer EH, Needham DM, Atanelov L, Knox B, Friedman M, Brotman DJ. Association of impaired functional status at hospital discharge and subsequent rehospitalization. J Hosp Med. 2014;9(5):277–282.
  37. Adler NE, Stead WW. Patients in context—EHR capture of social and behavioral determinants of health. N Engl J Med. 2015;372(8):698–701.
  38. Nguyen OK, Chan CV, Makam A, Stieglitz H, Amarasingham R. Envisioning a social-health information exchange as a platform to support a patient-centered medical neighborhood: a feasibility study. J Gen Intern Med. 2015;30(1):60–67.
  39. Henke RM, Karaca Z, Lin H, Wier LM, Marder W, Wong HS. Patient factors contributing to variation in same-hospital readmission rate. Med Care Res Rev. 2015;72(3):338–358.
  40. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. N Engl J Med. 1996;334(22):1441–1447.
Display Headline
Predicting all-cause readmissions using electronic health record data from the entire hospitalization: Model development and comparison
Article Source
Journal of Hospital Medicine. 2016;11(7):473–480. © 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Oanh Kieu Nguyen, MD, 5323 Harry Hines Blvd., Dallas, Texas 75390-9169; Telephone: 214-648-3135; Fax: 214-648-3232; E-mail: [email protected]