Inpatient Glycemic Control With Sliding Scale Insulin in Noncritical Patients With Type 2 Diabetes: Who Can Slide?
Sliding scale insulin (SSI) for inpatient glycemic control was first proposed by Elliott P Joslin in 1934 when he recommended titration of insulin based on urine glucose levels.1 As bedside glucose meters became widely available, physicians transitioned to dosing SSI based on capillary blood glucose (BG) levels,2,3 and SSI became widely used for the management of inpatient hyperglycemia.1 However, during the past decade, there has been strong opposition to the use of SSI in hospitals. Many authors oppose its use, highlighting the retrospective rather than prospective nature of SSI therapy and concerns about inadequate glycemic control.4-6 In 2004, the American College of Endocrinology first released a position statement discouraging the use of SSI alone and recommended basal-bolus insulin as the preferred method of glycemic control for inpatients with type 2 diabetes (T2D).7 The American Diabetes Association (ADA) inpatient guidelines in 20058 and the Endocrine Society guidelines in 20129 also opposed SSI monotherapy and reaffirmed that a basal-bolus insulin regimen should be used for most non–critically ill patients with diabetes. Those guidelines remain in place currently.
Several randomized controlled trials (RCTs) and meta-analyses have shown that basal-bolus insulin regimens provide superior glycemic control in non–critical inpatients when compared with SSI alone.10-14 In addition, the RABBIT 2 (Randomized Study of Basal-Bolus Insulin Therapy in the Inpatient Management of Patients With Type 2 Diabetes) trial showed a significant reduction in perioperative complications10 among surgical patients when treated with basal-bolus insulin therapy. Despite these studies and strong recommendations against its use, SSI continues to be widely used in the United States. According to a 2007 survey of 44 US hospitals, 41% of noncritical patients with hyperglycemia were treated with SSI alone.15 In addition, SSI remains one of the most commonly prescribed insulin regimens in many countries around the world.16-19 The persistence of SSI use raises questions as to why clinicians continue to use a therapy that has been strongly criticized. Some authors point to convenience and fear of hypoglycemia with a basal-bolus insulin regimen.20,21 Alternatively, it is possible that SSI usage remains so pervasive because it is effective in a subset of patients. In fact, a 2018 Cochrane review concluded that existing evidence is not sufficiently robust to definitively recommend basal-bolus insulin over SSI for inpatient diabetes management of non–critically ill patients despite existing guidelines.22
Owing to the ongoing controversy and widespread use of SSI, we designed an exploratory analysis to understand the rationale for such therapy by investigating whether a certain subpopulation of hospitalized patients with T2D may achieve target glycemic control with SSI alone. We hypothesized that noncritical patients with mild hyperglycemia and admission BG <180 mg/dL would do well with SSI alone and may not require intensive treatment with basal-bolus insulin regimens. To address this question, we used electronic health records with individual-level patient data to assess inpatient glycemic control of non–critically ill patients with T2D treated with SSI alone.
METHODS
Participants
Data from 25,813 adult noncritical inpatients with T2D, with an index admission between June 1, 2010, and June 30, 2018, were obtained through the Emory Healthcare Clinical Data Warehouse infrastructure program. All patients were admitted to Emory Healthcare hospitals, including Emory University Hospital, Emory University Hospital Midtown, and Emory Saint Joseph’s Hospital, in Atlanta, Georgia. Data were extracted for each patient during the index hospitalization, including demographics, anthropometrics, and admission and inpatient laboratory values. Information was collected on daily point-of-care glucose values, hemoglobin A1c (HbA1c), hypoglycemic events, insulin doses, hospital complications, comorbidities, and hospital setting (medical vs surgical admission). International Classification of Diseases, 9th and 10th Revisions (ICD-9/10) codes were used to determine diagnosis of T2D, comorbidities, and complications.
From our initial dataset, we identified 16,366 patients who were treated with SSI during hospitalization. We excluded patients who were admitted to the intensive care unit (ICU) or placed on intravenous insulin, patients with missing admission BG values, and patients with a length of stay less than 1 day. To prevent inclusion of patients presenting in diabetic ketoacidosis or hyperosmolar hyperglycemic syndrome, we excluded patients with an admission BG >500 mg/dL. We then excluded 6,739 patients who received basal insulin within the first 2 days of hospitalization, as well as 943 patients who were treated with noninsulin (oral or injectable) antidiabetic agents. Our final dataset included 8,095 patients (Appendix Figure).
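For readers who want to apply a similar cohort derivation to their own data, the exclusion criteria above can be expressed as a short filtering step. The sketch below uses pandas with hypothetical column names (icu_stay, iv_insulin, admission_bg, los_days, basal_first_2d, noninsulin_agent); it illustrates the stated criteria and is not the authors' actual extraction code.

```python
# Illustrative only: the cohort-derivation filters described above, written with
# pandas. All column names are assumptions, not the study's actual schema.
import pandas as pd

def derive_ssi_cohort(admissions: pd.DataFrame) -> pd.DataFrame:
    """Apply the stated exclusion criteria to the SSI-treated population."""
    df = admissions.copy()
    df = df[~df["icu_stay"]]             # exclude ICU admissions
    df = df[~df["iv_insulin"]]           # exclude patients on intravenous insulin
    df = df[df["admission_bg"].notna()]  # exclude missing admission BG
    df = df[df["los_days"] >= 1]         # exclude length of stay < 1 day
    df = df[df["admission_bg"] <= 500]   # exclude likely DKA/HHS presentations
    df = df[~df["basal_first_2d"]]       # exclude basal insulin in first 2 days
    df = df[~df["noninsulin_agent"]]     # exclude noninsulin antidiabetic agents
    return df
```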
Patients in the SSI cohort included all patients who were treated with short-acting insulin only (regular insulin or rapid-acting [lispro, aspart, glulisine] insulin analogs) during the first 2 days of hospitalization. Patients who remained on only short-acting insulin during the entire hospitalization were defined as continuous SSI patients. Patients who subsequently received basal insulin after day 2 of hospitalization were defined as patients who transitioned to basal. Patients were stratified according to admission BG levels (first BG available on day of admission) and HbA1c (when available during index admission). We compared the baseline characteristics and clinical outcomes of patients who remained on SSI alone throughout the entirety of hospitalization with those of patients who required transition to basal insulin. The mean hospital BG was calculated by taking the average of all BG measurements during the hospital stay. We defined hypoglycemia as a BG <70 mg/dL and severe hypoglycemia as BG <40 mg/dL. Repeated hypoglycemia values were excluded if they occurred within a period of 2 hours.
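The regimen labels and hypoglycemia handling described above reduce to two simple operations on a medication log and a long-format glucose table. The following sketch assumes hypothetical tables (insulin_log with patient_id, drug_class, and hospital_day; glucose with patient_id, timestamp, and bg) and is offered only to make the definitions concrete, not as the authors' code.

```python
# A minimal sketch of the regimen labels and hypoglycemia de-duplication described
# above. Table and column names are assumptions made for illustration.
import pandas as pd

HYPO_THRESHOLD = 70      # mg/dL, hypoglycemia
SEVERE_THRESHOLD = 40    # mg/dL, severe hypoglycemia

def flag_transition_to_basal(insulin_log: pd.DataFrame) -> pd.Series:
    """True for patients who received basal insulin after hospital day 2."""
    basal_late = (insulin_log["drug_class"] == "basal") & (insulin_log["hospital_day"] > 2)
    return basal_late.groupby(insulin_log["patient_id"]).any().rename("transitioned_to_basal")

def dedup_hypoglycemia(glucose: pd.DataFrame) -> pd.DataFrame:
    """Keep readings <70 mg/dL, dropping repeats within 2 hours of the prior event."""
    hypo = glucose[glucose["bg"] < HYPO_THRESHOLD].sort_values(["patient_id", "timestamp"])
    gap = hypo.groupby("patient_id")["timestamp"].diff()
    return hypo[gap.isna() | (gap > pd.Timedelta(hours=2))]
```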
Outcome Measures
The primary outcome was the percentage of patients with T2D achieving target glycemic control with SSI therapy, defined as mean hospital BG between 70 and 180 mg/dL without hypoglycemia <70 mg/dL during hospital stay. This threshold was determined based on 2019 ADA recommendations targeting hospital BG <180 mg/dL and avoidance of hypoglycemia.23
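As a concrete (hypothetical) illustration, the primary outcome reduces to a per-patient flag computed from the point-of-care glucose table; the sketch below assumes the same glucose layout as above.

```python
# Hedged sketch of the primary outcome: mean hospital BG between 70 and 180 mg/dL
# with no reading <70 mg/dL. Column names are illustrative assumptions.
import pandas as pd

def target_glycemic_control(glucose: pd.DataFrame) -> pd.Series:
    """Per-patient flag for achieving target control during the hospital stay."""
    stats = glucose.groupby("patient_id")["bg"].agg(mean_bg="mean", min_bg="min")
    return stats["mean_bg"].between(70, 180) & (stats["min_bg"] >= 70)
```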
Statistical Analysis
Patients were stratified according to continuous SSI versus transitioned to basal treatment. Patients who remained on continuous SSI were further categorized into four categories based on admission BG: <140 mg/dL, 140 to 180 mg/dL, 180 to 250 mg/dL, and ≥250 mg/dL. Clinical characteristics were compared using Wilcoxon rank-sum tests (if continuous) and chi-square tests or Fisher exact tests (if categorical). We then compared the clinical outcomes among continuous SSI patients with different admission BG levels (<140 mg/dL, 140-180 mg/dL, 180-250 mg/dL, and ≥250 mg/dL) and with different HbA1c levels (<7%, 7%-8%, 8%-9%, ≥9%). Within each scenario, logistic regression for the outcome of poor glycemic control, defined as mean hospital BG >180 mg/dL, was performed to evaluate the HbA1c levels and admission BG levels controlling for other factors (age, gender, body mass index [BMI], race, setting [medicine versus surgery] and Charlson Comorbidity Index score). A P value < .05 was regarded as statistically significant. All analyses were performed based on available cases and conducted in SAS version 9.4 (SAS Institute Inc.).
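The adjusted model described above (fit by the authors in SAS 9.4) can be approximated for illustration with a logistic regression in Python; the column names, reference level, and statsmodels call below are assumptions, not the study's actual analysis code.

```python
# Illustrative analogue of the adjusted logistic regression for poor glycemic
# control (mean hospital BG >180 mg/dL). Variable names are hypothetical.
import numpy as np
import statsmodels.formula.api as smf

def fit_poor_control_model(df):
    model = smf.logit(
        "poor_control ~ C(admission_bg_cat, Treatment(reference='<140')) "
        "+ age + C(gender) + bmi + C(race) + C(setting) + charlson_index",
        data=df,
    ).fit()
    odds_ratios = np.exp(model.params)  # adjusted odds ratios
    ci = np.exp(model.conf_int())       # 95% confidence intervals on the OR scale
    return model, odds_ratios, ci
```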
RESULTS
Among 25,813 adult patients with T2D, 8,095 patients (31.4%) were treated with SSI alone during the first 2 days of hospitalization. Of those patients treated with SSI, 6,903 (85%) remained on continuous SSI alone during the entire hospitalization, and 1,192 (15%) were transitioned to basal insulin. The clinical characteristics of these patients on continuous SSI and those who transitioned to basal insulin are shown in Table 1. Patients who transitioned to basal insulin had significantly higher mean (SD) admission BG (191.8 [88.2] mg/dL vs 156.4 [65.4] mg/dL, P < .001) and higher mean (SD) HbA1c (8.1% [2.0%] vs 7.01% [1.5%], P < .001), compared with those who remained on continuous SSI. Patients who transitioned to basal insulin were also younger and more likely to have chronic kidney disease (CKD), but less likely to have congestive heart failure, coronary artery disease, or chronic obstructive pulmonary disease (COPD). The Charlson Comorbidity Index score was significantly higher for patients who transitioned to basal (4.4 [2.5]) than for those who remained on continuous SSI (4.1 [2.5], P < .001). There were no significant differences in sex, BMI, or glomerular filtration rate (GFR) on admission. Of those transitioned to basal insulin, 53% achieved a mean hospitalization BG <180 mg/dL, compared with 82% of those on continuous SSI. The overall rate of hypoglycemia in the continuous SSI group was 8% compared with 18% in those transitioned to basal insulin.
Of the patients who remained on continuous SSI throughout the hospitalization, 3,319 patients (48%) had admission BG <140 mg/dL, 1,671 patients (24%) had admission BG 140 to 180 mg/dL, and 1,913 patients (28%) had admission BG >180 mg/dL. Only 9% of patients who remained on continuous SSI had admission BG ≥250 mg/dL. Patients with admission BG <140 mg/dL were older, had lower BMI and HbA1c, had higher rates of COPD and CKD, and were more likely to be admitted to a surgical service compared with patients with admission BG >140 mg/dL (P < .05 for all; Table 2).
Hospital glycemic control for patients on continuous SSI according to admission BG is displayed in Table 3. Among patients who remained on continuous SSI, 96% of patients with admission BG <140 mg/dL had a mean hospital BG <180 mg/dL; of them, 86% achieved target control without hypoglycemia. Similar rates of target control were achieved in patients with admission BG 140 to 180 mg/dL (83%), in contrast to patients with admission BG ≥250 mg/dL, of whom only 18% achieved target control (P < .001). These findings parallel those seen in patients transitioned to basal insulin. Of patients in the transition group admitted with BG <140 mg/dL and <180 mg/dL, 88.5% and 84.6% had mean hospital BG <180 mg/dL, respectively, while 69.1% and 68.9% had mean BG between 70 and 180 mg/dL without hypoglycemia. The overall frequency of hypoglycemia <70 mg/dL among patients on continuous SSI was 8% and was more common in patients with admission BG <140 mg/dL (10%) compared with patients with higher admission glucose levels (BG 140-180 mg/dL [4%], 180-250 mg/dL [4%], or ≥250 mg/dL [6%], P < .001). There was no difference in rates of severe hypoglycemia <40 mg/dL among groups.
HbA1c data were available for 2,560 of the patients on continuous SSI (Table 3). Mean hospital BG increased significantly with increasing HbA1c values. Patients admitted with HbA1c <7% had lower mean (SD) hospital BG (132.2 [28.2] mg/dL) and were more likely to achieve target glucose control during hospitalization (85%) compared with those with HbA1c 7% to 8% (mean BG, 148.7 [30.8] mg/dL; 80% target control), HbA1c 8% to 9% (mean BG, 169.1 [37.9] mg/dL; 61% target control), or HbA1c ≥9% (mean BG, 194.9 [53.4] mg/dL; 38% target control) (P < .001).
In a logistic regression analysis adjusted for age, gender, BMI, race, setting (medicine vs surgery), and Charlson Comorbidity Index score, the odds of poor glycemic control increased with higher admission BG (admission BG 140-180 mg/dL: odds ratio [OR], 1.8; 95% CI, 1.5-2.2; admission BG 180-250 mg/dL: OR, 3.7; 95% CI, 3.1-4.4; admission BG ≥250 mg/dL: OR, 7.2; 95% CI, 5.8-9.0; reference admission BG <140 mg/dL; Figure). Similarly, the logistic regression analysis showed greater odds of poor in-hospital glycemic control with increasing HbA1c (OR, 6.1; 95% CI, 4.3-8.8 for HbA1c ≥9% compared with HbA1c <7%).
DISCUSSION
This large retrospective cohort study examined the effectiveness of SSI for glycemic control in noncritical inpatients with T2D. Our results indicate that SSI is still widely used in our hospital system, with 31.4% of our initial cohort managed with SSI alone. We found that 86% of patients with BG <140 mg/dL and 83% of patients with BG 140 to 180 mg/dL achieved glycemic control without hypoglycemia when managed with SSI alone, compared with 53% of those admitted with BG 180 to 250 mg/dL and only 18% of those with admission BG ≥250 mg/dL. This high success rate of achieving optimal BG control with SSI alone is comparable to that seen with transition to basal insulin and may explain the prevalent use of SSI for the management of patients with T2D and mild to moderate hyperglycemia.
Published clinical guideline recommendations promoting the use of basal-bolus insulin treatment algorithms are based on the results of a few RCTs that compared the efficacy of SSI vs a basal-bolus insulin regimen. These studies reported significantly lower mean daily BG concentration with basal or basal-bolus insulin therapy compared with SSI.10,11,24 However, it is interesting to note that the mean admission BG of patients treated with SSI in these RCTs ranged from 184 to 225 mg/dL. Patients in these trials were excluded if admission BG was <140 mg/dL.10,11,24 This is in contrast to our study evaluating real-world data in non–critically ill settings in which we found that 48% of patients treated with SSI had admission BG <140 mg/dL, and nearly 75% had admission BG <180 mg/dL. This suggests that by nature of study design, most RCTs excluded the population of patients who do achieve good glycemic control with SSI and may have contributed to the perception that basal insulin is preferable in all populations.
Our analysis indicates that healthcare professionals should consider admission BG when selecting the type of insulin regimen to manage patients with T2D in the hospital. Our results suggest that SSI may be appropriate for many patients with admission BG <180 mg/dL and should be avoided as monotherapy in patients with admission BG ≥180 mg/dL, as the proportion of patients achieving target control decreased with increasing admission BG. More importantly, if a patient is not controlled with SSI alone, intensification of therapy with the addition of basal insulin is indicated to achieve glycemic control. In addition, we found that the admission HbA1c is an appropriate marker to consider as well, with hospital glycemic control deteriorating with increasing HbA1c values, paralleling the admission BG. The main limitation to widespread use of HbA1c for therapeutic decision-making is access to values at time of patient admission; in our population, only 37% of patients had an HbA1c value available during the index hospitalization.
Previous publications have reported that hypoglycemia carries significant safety concerns, especially among a hospitalized population.25-27 As such, we included hypoglycemia as an important metric in our definition of target glycemic control rather than simply using mean hospital BG or number of hyperglycemic events to define treatment effectiveness. We did find a higher rate of hypoglycemia in patients with moderate admission BG treated with SSI compared with those with higher admission BG; however, few patients overall experienced clinically significant (<54 mg/dL) or severe (<40 mg/dL) hypoglycemia.
In our population, only 15% of patients started on SSI received additional basal insulin during hospitalization. This finding is similar to data reported in the RABBIT 2 trial, in which 14% of patients failed SSI alone, with a higher failure rate among those with higher BG on admission.10 Given the observational nature of this study, we cannot definitively state why certain patients in our population required additional basal insulin, but we hypothesize that patients admitted with BG ≥180 mg/dL had higher treatment failure rates and greater rates of hyperglycemia and therefore received intensified insulin therapy as clinically indicated at the discretion of the treating physician. Patients who transitioned from SSI to basal insulin had significantly higher admission BG and HbA1c compared with patients who remained on SSI alone. We noted that the rates of hypoglycemia were higher in the group that transitioned to basal (18% vs 8%) and similar to rates reported in previous RCTs.11,24
This observational study takes advantage of a large, diverse study population and a combination of medicine and surgery patients in a real-world setting. We acknowledge several limitations in our study. Our primary data were observational in nature, and as such, some baseline patient characteristics were notably different between groups, suggesting selection bias for treatment allocation to SSI. We do not know which patients were managed by primary teams compared with specialized diabetes consult services, which may also influence treatment regimens. We did not have access to information about patients’ at-home diabetes medication regimens or duration of diabetes, both of which have been shown in prior publications to affect an individual’s overall hospital glycemic control. Data on HbA1c values were available for only approximately one-third of patients. In addition, our study did not include patients without a history of diabetes who developed stress-induced hyperglycemia, a population that may benefit from conservative therapy such as SSI.28 A diagnosis of CKD was defined based on ICD-9/10 codes and not on admission estimated GFR. More specific data regarding stage of CKD or changes in renal function over the duration of hospitalization were not available, which could influence insulin prescribing practice. In addition, we defined the basal group as patients prescribed any form of basal insulin (NPH, glargine, detemir, or degludec), and we do not have information on the use of prandial versus correction doses of rapid-acting insulin in the basal insulin–treated group.
CONCLUSION
In conclusion, our observational study indicates that the use of SSI results in appropriate target glycemic control for most noncritical medicine and surgery patients with admission BG <180 mg/dL. In agreement with previous RCTs, our study confirms that SSI as monotherapy is frequently inadequate in patients with significant hyperglycemia >180 mg/dL.10,11,24,29 We propose that an individualized approach to inpatient glycemic management is imperative, and cautious use of SSI may be a viable option for certain patients with mild hyperglycemia and admission BG <180 mg/dL. Further observational and randomized studies are needed to confirm the efficacy of SSI therapy in patients with T2D and mild hyperglycemia. By identifying which subset of patients can be safely managed with SSI alone, we can better understand which patients will require escalation of therapy with intensive glucose management.
1. Umpierrez GE, Palacio A, Smiley D. Sliding scale insulin use: myth or insanity? Am J Med. 2007;120(7):563-567. https://doi.org/10.1016/j.amjmed.2006.05.070
2. Kitabchi AE, Ayyagari V, Guerra SM. The efficacy of low-dose versus conventional therapy of insulin for treatment of diabetic ketoacidosis. Ann Intern Med. 1976;84(6):633-638. https://doi.org/10.7326/0003-4819-84-6-633
3. Skyler JS, Skyler DL, Seigler DE, O’Sullivan MJ. Algorithms for adjustment of insulin dosage by patients who monitor blood glucose. Diabetes Care. 1981;4(2):311-318. https://doi.org/10.2337/diacare.4.2.311
4. Gearhart JG, Duncan JL 3rd, Replogle WH, Forbes RC, Walley EJ. Efficacy of sliding-scale insulin therapy: a comparison with prospective regimens. Fam Pract Res J. 1994;14(4):313-322.
5. Queale WS, Seidler AJ, Brancati FL. Glycemic control and sliding scale insulin use in medical inpatients with diabetes mellitus. Arch Intern Med. 1997;157(5):545-552.
6. Clement S, Braithwaite SS, Magee MF, et al. Management of diabetes and hyperglycemia in hospitals. Diabetes Care. 2004;27(2):553-591. https://doi.org/10.2337/diacare.27.2.553
7. Garber AJ, Moghissi ES, Bransome ED Jr, et al. American College of Endocrinology position statement on inpatient diabetes and metabolic control. Endocr Pract. 2004;10(1):78-82. https://doi.org/10.4158/EP.10.1.77
8. American Diabetes Association. Standards of medical care in diabetes. Diabetes Care. 2005;28(suppl 1):S4-S36.
9. Umpierrez GE, Hellman R, Korytkowski MT, et al. Management of hyperglycemia in hospitalized patients in non-critical care setting: an Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2012;97(1):16-38. https://doi.org/10.1210/jc.2011-2098
10. Umpierrez GE, Smiley D, Zisman A, et al. Randomized study of basal-bolus insulin therapy in the inpatient management of patients with type 2 diabetes. Diabetes Care. 2007;30(9):2181-2186. https://doi.org/10.2337/dc07-0295
11. Umpierrez GE, Smiley D, Jacobs S, et al. Randomized study of basal-bolus insulin therapy in the inpatient management of patients with type 2 diabetes undergoing general surgery (RABBIT 2 surgery). Diabetes Care. 2011;34(2):256-261. https://doi.org/10.2337/dc10-1407
12. Schroeder JE, Liebergall M, Raz I, Egleston R, Ben Sussan G, Peyser A. Benefits of a simple glycaemic protocol in an orthopaedic surgery ward: a randomized prospective study. Diabetes Metab Res Rev. 2012;28:71-75. https://doi.org/10.1002/dmrr.1217
13. Lee YY, Lin YM, Leu WJ, et al. Sliding-scale insulin used for blood glucose control: a meta-analysis of randomized controlled trials. Metabolism. 2015;64(9):1183-1192. https://doi.org/10.1016/j.metabol.2015.05.011
14. Christensen MB, Gotfredsen A, Nørgaard K. Efficacy of basal-bolus insulin regimens in the inpatient management of non-critically ill patients with type 2 diabetes: a systematic review and meta-analysis. Diabetes Metab Res Rev. 2017;33(5):e2885. https://doi.org/10.1002/dmrr.2885
15. Wexler DJ, Meigs JB, Cagliero E, Nathan DM, Grant RW. Prevalence of hyper- and hypoglycemia among inpatients with diabetes: a national survey of 44 U.S. hospitals. Diabetes Care. 2007;30(2):367-369. https://doi.org/10.2337/dc06-1715
16. Moreira ED Jr, Silveira PCB, Neves RCS, Souza C Jr, Nunes ZO, Almeida MdCC. Glycemic control and diabetes management in hospitalized patients in Brazil. Diabetol Metab Syndr. 2013;5(1):62. https://doi.org/10.1186/1758-5996-5-62
17. Akhtar ST, Mahmood K, Naqvi IH, Vaswani AS. Inpatient management of type 2 diabetes mellitus: does choice of insulin regimen really matter? Pak J Med Sci. 2014;30(4):895-898.
18. Gómez Cuervo C, Sánchez Morla A, Pérez-Jacoiste Asín MA, Bisbal Pardo O, Pérez Ordoño L, Vila Santos J. Effective adverse event reduction with bolus-basal versus sliding scale insulin therapy in patients with diabetes during conventional hospitalization: systematic review and meta-analysis. Endocrinol Nutr. 2016;63(4):145-156. https://doi.org/10.1016/j.endonu.2015.11.008
19. Bain A, Hasan SS, Babar ZUD. Interventions to improve insulin prescribing practice for people with diabetes in hospital: a systematic review. Diabet Med. 2019;36(8):948-960. https://doi.org/10.1111/dme.13982
20. Ambrus DB, O’Connor MJ. Things We Do For No Reason: sliding-scale insulin as monotherapy for glycemic control in hospitalized patients. J Hosp Med. 2019;14(2):114-116. https://doi.org/10.12788/jhm.3109
21. Nau KC, Lorenzetti RC, Cucuzzella M, Devine T, Kline J. Glycemic control in hospitalized patients not in intensive care: beyond sliding-scale insulin. Am Fam Physician. 2010;81(9):1130-1135.
22. Colunga-Lozano LE, Gonzalez Torres FJ, Delgado-Figueroa N, et al. Sliding scale insulin for non-critically ill hospitalised adults with diabetes mellitus. Cochrane Database Syst Rev. 2018;11(11):CD011296. https://doi.org/10.1002/14651858.CD011296.pub2
23. American Diabetes Association. Diabetes care in the hospital: Standards of Medical Care in Diabetes—2019. Diabetes Care. 2019;42(suppl 1):S173-S181. https://doi.org/10.2337/dc19-S015
24. Umpierrez GE, Smiley D, Hermayer K, et al. Randomized study comparing a basal-bolus with a basal plus correction management of medical and surgical patients with type 2 diabetes: basal plus trial. Diabetes Care. 2013;36(8):2169-2174. https://doi.org/10.2337/dc12-1988
25. Turchin A, Matheny ME, Shubina M, Scanlon SV, Greenwood B, Pendergrass ML. Hypoglycemia and clinical outcomes in patients with diabetes hospitalized in the general ward. Diabetes Care. 2009;32(7):1153-1157. https://doi.org/10.2337/dc08-2127
26. Garg R, Hurwitz S, Turchin A, Trivedi A. Hypoglycemia, with or without insulin therapy, is associated with increased mortality among hospitalized patients. Diabetes Care. 2013;36(5):1107-1110. https://doi.org/10.2337/dc12-1296
27. Zapatero A, Gómez-Huelgas R, González N, et al. Frequency of hypoglycemia and its impact on length of stay, mortality, and short-term readmission in patients with diabetes hospitalized in internal medicine wards. Endocr Pract. 2014;20(9):870-875. https://doi.org/10.4158/EP14006.OR
28. Umpierrez GE, Isaacs SD, Bazargan N, You X, Thaler LM, Kitabchi AE. Hyperglycemia: an independent marker of in-hospital mortality in patients with undiagnosed diabetes. J Clin Endocrinol Metab. 2002;87(3):978-982. https://doi.org/10.1210/jcem.87.3.8341
29. Dickerson LM, Ye X, Sack JL, Hueston WJ. Glycemic control in medical inpatients with type 2 diabetes mellitus receiving sliding scale insulin regimens versus routine diabetes medications: a multicenter randomized controlled trial. Ann Fam Med. 2003;1(1):29-35. https://doi.org/10.1370/afm.2
Sliding scale insulin (SSI) for inpatient glycemic control was first proposed by Elliott P Joslin in 1934 when he recommended titration of insulin based on urine glucose levels.1 As bedside glucose meters became widely available, physicians transitioned to dosing SSI based on capillary blood glucose (BG) levels,2,3 and SSI became widely used for the management of inpatient hyperglycemia.1 However, during the past decade, there has been strong opposition to the use of SSI in hospitals. Many authors oppose its use, highlighting the retrospective rather than prospective nature of SSI therapy and concerns about inadequate glycemic control.4-6 In 2004, the American College of Endocrinology first released a position statement discouraging the use of SSI alone and recommended basal-bolus insulin as the preferred method of glycemic control for inpatients with type 2 diabetes (T2D).7 The American Diabetes Association (ADA) inpatient guidelines in 20058 and the Endocrine Society guidelines in 20129 also opposed SSI monotherapy and reaffirmed that a basal-bolus insulin regimen should be used for most non–critically ill patients with diabetes. Those guidelines remain in place currently.
Several randomized controlled trials (RCTs) and meta-analyses have shown that basal-bolus insulin regimens provide superior glycemic control in non–critical inpatients when compared with SSI alone.10-14 In addition, the RABBIT 2 (Randomized Study of Basal-Bolus Insulin Therapy in the Inpatient Management of Patients With Type 2 Diabetes) trial showed a significant reduction in perioperative complications10 among surgical patients when treated with basal-bolus insulin therapy. Despite these studies and strong recommendations against its use, SSI continues to be widely used in the United States. According to a 2007 survey of 44 US hospitals, 41% of noncritical patients with hyperglycemia were treated with SSI alone.15 In addition, SSI remains one of the most commonly prescribed insulin regimens in many countries around the world.16-19 The persistence of SSI use raises questions as to why clinicians continue to use a therapy that has been strongly criticized. Some authors point to convenience and fear of hypoglycemia with a basal-bolus insulin regimen.20,21 Alternatively, it is possible that SSI usage remains so pervasive because it is effective in a subset of patients. In fact, a 2018 Cochrane review concluded that existing evidence is not sufficiently robust to definitively recommend basal-bolus insulin over SSI for inpatient diabetes management of non–critically ill patients despite existing guidelines.22
Owing to the ongoing controversy and widespread use of SSI, we designed an exploratory analysis to understand the rationale for such therapy by investigating whether a certain subpopulation of hospitalized patients with T2D may achieve target glycemic control with SSI alone. We hypothesized that noncritical patients with mild hyperglycemia and admission BG <180 mg/dL would do well with SSI alone and may not require intensive treatment with basal-bolus insulin regimens. To address this question, we used electronic health records with individual-level patient data to assess inpatient glycemic control of non–critically ill patients with T2D treated with SSI alone.
METHODS
Participants
Data from 25,813 adult noncritical inpatients with T2D, with an index admission between June 1, 2010, and June 30, 2018, were obtained through the Emory Healthcare Clinical Data Warehouse infrastructure program. All patients were admitted to Emory Healthcare hospitals, including Emory University Hospital, Emory University Hospital Midtown, and Emory Saint Joseph’s Hospital, in Atlanta, Georgia. Data were extracted for each patient during the index hospitalization, including demographics, anthropometrics, and admission and inpatient laboratory values. Information was collected on daily point-of-care glucose values, hemoglobin A1c (HbA1c), hypoglycemic events, insulin doses, hospital complications, comorbidities, and hospital setting (medical vs surgical admission). International Classification of Diseases, 9th and 10th Revisions (ICD-9/10) codes were used to determine diagnosis of T2D, comorbidities, and complications.
From our initial dataset, we identified 16,366 patients who were treated with SSI during hospitalization. We excluded patients who were admitted to the intensive care unit (ICU) or placed on intravenous insulin, patients with missing admission BG values, and patients with a length of stay less than 1 day. To prevent inclusion of patients presenting in diabetic ketoacidosis or hyperosmolar hyperglycemic syndrome, we excluded patients with an admission BG >500 mg/dL. We then excluded 6,739 patients who received basal insulin within the first 2 days of hospitalization, as well as 943 patients who were treated with noninsulin (oral or injectable) antidiabetic agents. Our final dataset included 8,095 patients (Appendix Figure).
Patients in the SSI cohort included all patients who were treated with short-acting insulin only (regular insulin or rapid-acting [lispro, aspart, glulisine] insulin analogs) during the first 2 days of hospitalization. Patients who remained on only short-acting insulin during the entire hospitalization were defined as continuous SSI patients. Patients who subsequently received basal insulin after day 2 of hospitalization were defined as patients who transitioned to basal. Patients were stratified according to admission BG levels (first BG available on day of admission) and HbA1c (when available during index admission). We compared the baseline characteristics and clinical outcomes of patients who remained on SSI alone throughout the entirety of hospitalization with those of patients who required transition to basal insulin. The mean hospital BG was calculated by taking the average of all BG measurements during the hospital stay. We defined hypoglycemia as a BG <70 mg/dL and severe hypoglycemia as BG <40 mg/dL. Repeated hypoglycemia values were excluded if they occurred within a period of 2 hours.
Outcome Measures
The primary outcome was the percentage of patients with T2D achieving target glycemic control with SSI therapy, defined as mean hospital BG between 70 and 180 mg/dL without hypoglycemia <70 mg/dL during hospital stay. This threshold was determined based on 2019 ADA recommendations targeting hospital BG <180 mg/dL and avoidance of hypoglycemia.23
Statistical Analysis
Patients were stratified according to continuous SSI versus transitioned to basal treatment. Patients who remained on continuous SSI were further categorized into four categories based on admission BG: <140 mg/dL, 140 to 180 mg/dL, 180 to 250 mg/dL, and ≥250 mg/dL. Clinical characteristics were compared using Wilcoxon rank-sum tests (if continuous) and chi-square tests or Fisher exact tests (if categorical). We then compared the clinical outcomes among continuous SSI patients with different admission BG levels (<140 mg/dL, 140-180 mg/dL, 180-250 mg/dL, and ≥250 mg/dL) and with different HbA1c levels (<7%, 7%-8%, 8%-9%, ≥9%). Within each scenario, logistic regression for the outcome of poor glycemic control, defined as mean hospital BG >180 mg/dL, was performed to evaluate the HbA1c levels and admission BG levels controlling for other factors (age, gender, body mass index [BMI], race, setting [medicine versus surgery] and Charlson Comorbidity Index score). A P value < .05 was regarded as statistically significant. All analyses were performed based on available cases and conducted in SAS version 9.4 (SAS Institute Inc.).
RESULTS
Among 25,813 adult patients with T2D, 8,095 patients (31.4%) were treated with SSI alone during the first 2 days of hospitalization. Of those patients treated with SSI, 6,903 (85%) remained on continuous SSI alone during the entire hospitalization, and 1,192 (15%) were transitioned to basal insulin. The clinical characteristics of these patients on continuous SSI and those who transitioned to basal insulin are shown in Table 1. Patients who transitioned to basal insulin had significantly higher mean (SD) admission BG (191.8 [88.2] mg/dL vs 156.4 [65.4] mg/dL, P < .001) and higher mean (SD) HbA1c (8.1% [2.0%] vs 7.01% [1.5%], P < .001), compared with those who remained on continuous SSI. Patients who transitioned to basal insulin were also younger and more likely to have chronic kidney disease (CKD), but less likely to have congestive heart failure, coronary artery disease, or chronic obstructive pulmonary disease (COPD). The Charlson Comorbidity Index score was significantly higher for patients who transitioned to basal (4.4 [2.5]) than for those who remained on continuous SSI (4.1 [2.5], P < .001). There were no significant differences among sex, BMI, or glomerular filtration rate (GFR) on admission. Of those transitioned to basal insulin, 53% achieved a mean hospitalization BG <180 mg/dL, compared with 82% of those on continuous SSI. The overall rate of hypoglycemia in the continuous SSI group was 8% compared with 18% in those transitioned to basal insulin.
Of the patients who remained on continuous SSI throughout the hospitalization, 3,319 patients (48%) had admission BG <140 mg/dL, 1,671 patients (24%) had admission BG 140 to 180 mg/dL, and 1,913 patients (28%) had admission BG >180 mg/dL. Only 9% of patients who remained on continuous SSI had admission BG ≥250 mg/dL. Patients with admission BG <140 mg/dL were older, had lower BMI and HbA1c, had higher rates of COPD and CKD, and were more likely to be admitted to a surgical service compared with patients with admission BG >140 mg/dL (P < .05 for all; Table 2).
Hospital glycemic control for patients on continuous SSI according to admission BG is displayed in Table 3. Among patients who remained on continuous SSI, 96% of patients with admission BG <140 mg/dL had a mean hospital BG <180 mg/dL; of them, 86% achieved target control without hypoglycemia. Similar rates of target control were achieved in patients with admission BG 140 to 180 mg/dL (83%), in contrast to patients with admission BG ≥250 mg/dL, of whom only 18% achieved target control (P < .001). These findings parallel those seen in patients transitioned to basal insulin. Of patients in the transition group admitted with BG <140 mg/dL and <180 mg/dL, 88.5% and 84.6% had mean hospital BG <180 mg/dL, respectively, while 69.1% and 68.9% had mean BG between 70 and 180 mg/dL without hypoglycemia. The overall frequency of hypoglycemia <70 mg/dL among patients on continuous SSI was 8% and was more common in patients with admission BG <140 mg/dL (10%) compared with patients with higher admission glucose levels (BG 140-180 mg/dL [4%], 180-250 mg/dL [4%], or ≥250 mg/dL [6%], P < .001). There was no difference in rates of severe hypoglycemia <40 mg/dL among groups.
HbA1c data were available for 2,560 of the patients on continuous SSI (Table 3). Mean hospital BG increased significantly with increasing HbA1c values. Patients admitted with HbA1c <7% had lower mean (SD) hospital BG (132.2 [28.2] mg/dL) and were more likely to achieve target glucose control during hospitalization (85%) compared with those with HbA1c 7% to 8% (mean BG, 148.7 [30.8] mg/dL; 80% target control), HbA1c 8% to 9% (mean BG, 169.1 [37.9] mg/dL; 61% target control), or HbA1c ≥9% (mean BG, 194.9 [53.4] mg/dL; 38% target control) (P < .001).
In a logistic regression analysis adjusted for age, gender, BMI, race, setting (medicine vs surgery), and Charlson Comorbidity Index score, the odds of poor glycemic control increased with higher admission BG (admission BG 140-180 mg/dL: odds ratio [OR], 1.8; 95% CI, 1.5-2.2; admission BG 180-250 mg/dL: OR, 3.7; 95% CI, 3.1-4.4; admission BG ≥250 mg/dL: OR, 7.2; 95% CI, 5.8-9.0; reference admission BG <140 mg/dL; Figure). Similarly, the logistic regression analysis showed greater odds of poor in-hospital glycemic control with increasing HbA1c (OR, 6.1; 95% CI, 4.3-8.8 for HbA1c >9% compared with HbA1c <7%).
DISCUSSION
This large retrospective cohort study examined the effectiveness of SSI for glycemic control in noncritical inpatients with T2D. Our results indicate that SSI is still widely used in our hospital system, with 31.4% of our initial cohort managed with SSI alone. We found that 86% of patients with BG <140 mg/dL and 83% of patients with BG 140 to 180 mg/dL achieved glycemic control without hypoglycemia when managed with SSI alone, compared with 53% of those admitted with BG 180 to 250 mg/dL and only 18% of those with admission BG ≥250 mg/dL. This high success rate of achieving optimal BG control with SSI alone is comparable to that seen with transition to basal insulin and may explain the prevalent use of SSI for the management of patients with T2D and mild to moderate hyperglycemia.
Published clinical guideline recommendations promoting the use of basal-bolus insulin treatment algorithms are based on the results of a few RCTs that compared the efficacy of SSI vs a basal-bolus insulin regimen. These studies reported significantly lower mean daily BG concentration with basal or basal-bolus insulin therapy compared with SSI.10,11,24 However, it is interesting to note that the mean admission BG of patients treated with SSI in these RCTs ranged from 184 to 225 mg/dL. Patients in these trials were excluded if admission BG was <140 mg/dL.10,11,24 This is in contrast to our study evaluating real-world data in non–critically ill settings in which we found that 48% of patients treated with SSI had admission BG <140 mg/dL, and nearly 75% had admission BG <180 mg/dL. This suggests that by nature of study design, most RCTs excluded the population of patients who do achieve good glycemic control with SSI and may have contributed to the perception that basal insulin is preferable in all populations.
Our analysis indicates that healthcare professionals should consider admission BG when selecting the type of insulin regimen to manage patients with T2D in the hospital. Our results suggest that SSI may be appropriate for many patients with admission BG <180 mg/dL and should be avoided as monotherapy in patients with admission BG ≥180 mg/dL, as the proportion of patients achieving target control decreased with increasing admission BG. More importantly, if a patient is not controlled with SSI alone, intensification of therapy with the addition of basal insulin is indicated to achieve glycemic control. In addition, we found that the admission HbA1c is an appropriate marker to consider as well, with hospital glycemic control deteriorating with increasing HbA1c values, paralleling the admission BG. The main limitation to widespread use of HbA1c for therapeutic decision-making is access to values at time of patient admission; in our population, only 37% of patients had an HbA1c value available during the index hospitalization.
Previous publications have reported that hypoglycemia carries significant safety concerns, especially among a hospitalized population.25-27 As such, we included hypoglycemia as an important metric in our definition of target glycemic control rather than simply using mean hospital BG or number of hyperglycemic events to define treatment effectiveness. We did find a higher rate of hypoglycemia in patients with moderate admission BG treated with SSI compared with those with higher admission BG; however, few patients overall experienced clinically significant (<54 mg/dL) or severe (<40 mg/dL) hypoglycemia.
In our population, only 15% of patients started on SSI received additional basal insulin during hospitalization. This finding is similar to data reported in the Rabbit 2 trial, in which 14% of patients failed SSI alone, with a higher failure rate among those with higher BG on admission.10 Given the observational nature of this study, we cannot definitively state why certain patients in our population required additional basal insulin, but we can hypothesize that these patients admitted with BG ≥180 mg/dL had higher treatment failure rates and greater rates of hyperglycemia, therefore receiving intensified insulin therapy as clinically indicated at the discretion of the treating physician. Patients who transitioned from SSI to basal insulin had significantly higher admission BG and HbA1c compared with patients who remained on SSI alone. We noted that the rates of hypoglycemia were higher in the group that transitioned to basal (18% vs 8%) and similar to rates reported in previous RCTs.11,24
This observational study takes advantage of a large, diverse study population and a combination of medicine and surgery patients in a real-world setting. We acknowledge several limitations in our study. Our primary data were observational in nature, and as such, some baseline patient characteristics were notably different between groups, suggesting selection bias for treatment allocation to SSI. We do not know which patients were managed by primary teams compared with specialized diabetes consult services, which may also influence treatment regimens. We did not have access to information about patients’ at-home diabetes medication regimens or duration of diabetes, both of which have been shown in prior publications to affect an individual’s overall hospital glycemic control. Data on HbA1c values were available for only approximately one-third of patients. In addition, our study did not include patients without a history of diabetes who developed stress-induced hyperglycemia, a population that may benefit from conservative therapy such as SSI.28 A diagnosis of CKD was defined based on ICD 9/10 codes and not on admission estimated GFR. More specific data regarding stage of CKD or changes in renal function over the duration of hospitalization are not available, which could influence insulin prescribing practice. In addition, we defined the basal group as patients prescribed any form of basal insulin (NPH, glargine, detemir or degludec), and we do not have information on the use of prandial versus correction doses of rapid-acting insulin in the basal insulin–treated group.
CONCLUSION
In conclusion, our observational study indicates that the use of SSI results in appropriate target glycemic control for most noncritical medicine and surgery patients with admission BG <180 mg/dL. In agreement with previous RCTs, our study confirms that SSI as monotherapy is frequently inadequate in patients with significant hyperglycemia >180 mg/dL.10,11,24,29 We propose that an individualized approach to inpatient glycemic management is imperative, and cautious use of SSI may be a viable option for certain patients with mild hyperglycemia and admission BG <180 mg/dL. Further observational and randomized studies are needed to confirm the efficacy of SSI therapy in T2D patients with mild hyperglycemia. By identifying which subset of patients can be safely managed with SSI alone, we can better understand which patients will require escalation of therapy with intensive glucose management.
Sliding scale insulin (SSI) for inpatient glycemic control was first proposed by Elliott P Joslin in 1934 when he recommended titration of insulin based on urine glucose levels.1 As bedside glucose meters became widely available, physicians transitioned to dosing SSI based on capillary blood glucose (BG) levels,2,3 and SSI became widely used for the management of inpatient hyperglycemia.1 However, during the past decade, there has been strong opposition to the use of SSI in hospitals. Many authors oppose its use, highlighting the retrospective rather than prospective nature of SSI therapy and concerns about inadequate glycemic control.4-6 In 2004, the American College of Endocrinology first released a position statement discouraging the use of SSI alone and recommended basal-bolus insulin as the preferred method of glycemic control for inpatients with type 2 diabetes (T2D).7 The American Diabetes Association (ADA) inpatient guidelines in 20058 and the Endocrine Society guidelines in 20129 also opposed SSI monotherapy and reaffirmed that a basal-bolus insulin regimen should be used for most non–critically ill patients with diabetes. Those guidelines remain in place currently.
Several randomized controlled trials (RCTs) and meta-analyses have shown that basal-bolus insulin regimens provide superior glycemic control in non–critical inpatients when compared with SSI alone.10-14 In addition, the RABBIT 2 (Randomized Study of Basal-Bolus Insulin Therapy in the Inpatient Management of Patients With Type 2 Diabetes) trial showed a significant reduction in perioperative complications10 among surgical patients when treated with basal-bolus insulin therapy. Despite these studies and strong recommendations against its use, SSI continues to be widely used in the United States. According to a 2007 survey of 44 US hospitals, 41% of noncritical patients with hyperglycemia were treated with SSI alone.15 In addition, SSI remains one of the most commonly prescribed insulin regimens in many countries around the world.16-19 The persistence of SSI use raises questions as to why clinicians continue to use a therapy that has been strongly criticized. Some authors point to convenience and fear of hypoglycemia with a basal-bolus insulin regimen.20,21 Alternatively, it is possible that SSI usage remains so pervasive because it is effective in a subset of patients. In fact, a 2018 Cochrane review concluded that existing evidence is not sufficiently robust to definitively recommend basal-bolus insulin over SSI for inpatient diabetes management of non–critically ill patients despite existing guidelines.22
Owing to the ongoing controversy and widespread use of SSI, we designed an exploratory analysis to understand the rationale for such therapy by investigating whether a certain subpopulation of hospitalized patients with T2D may achieve target glycemic control with SSI alone. We hypothesized that noncritical patients with mild hyperglycemia and admission BG <180 mg/dL would do well with SSI alone and may not require intensive treatment with basal-bolus insulin regimens. To address this question, we used electronic health records with individual-level patient data to assess inpatient glycemic control of non–critically ill patients with T2D treated with SSI alone.
METHODS
Participants
Data from 25,813 adult noncritical inpatients with T2D, with an index admission between June 1, 2010, and June 30, 2018, were obtained through the Emory Healthcare Clinical Data Warehouse infrastructure program. All patients were admitted to Emory Healthcare hospitals, including Emory University Hospital, Emory University Hospital Midtown, and Emory Saint Joseph’s Hospital, in Atlanta, Georgia. Data were extracted for each patient during the index hospitalization, including demographics, anthropometrics, and admission and inpatient laboratory values. Information was collected on daily point-of-care glucose values, hemoglobin A1c (HbA1c), hypoglycemic events, insulin doses, hospital complications, comorbidities, and hospital setting (medical vs surgical admission). International Classification of Diseases, 9th and 10th Revisions (ICD-9/10) codes were used to determine diagnosis of T2D, comorbidities, and complications.
From our initial dataset, we identified 16,366 patients who were treated with SSI during hospitalization. We excluded patients who were admitted to the intensive care unit (ICU) or placed on intravenous insulin, patients with missing admission BG values, and patients with a length of stay less than 1 day. To prevent inclusion of patients presenting in diabetic ketoacidosis or hyperosmolar hyperglycemic syndrome, we excluded patients with an admission BG >500 mg/dL. We then excluded 6,739 patients who received basal insulin within the first 2 days of hospitalization, as well as 943 patients who were treated with noninsulin (oral or injectable) antidiabetic agents. Our final dataset included 8,095 patients (Appendix Figure).
Patients in the SSI cohort included all patients who were treated with short-acting insulin only (regular insulin or rapid-acting [lispro, aspart, glulisine] insulin analogs) during the first 2 days of hospitalization. Patients who remained on only short-acting insulin during the entire hospitalization were defined as continuous SSI patients. Patients who subsequently received basal insulin after day 2 of hospitalization were defined as patients who transitioned to basal. Patients were stratified according to admission BG levels (first BG available on day of admission) and HbA1c (when available during index admission). We compared the baseline characteristics and clinical outcomes of patients who remained on SSI alone throughout the entirety of hospitalization with those of patients who required transition to basal insulin. The mean hospital BG was calculated by taking the average of all BG measurements during the hospital stay. We defined hypoglycemia as a BG <70 mg/dL and severe hypoglycemia as BG <40 mg/dL. Repeated hypoglycemia values were excluded if they occurred within a period of 2 hours.
Outcome Measures
The primary outcome was the percentage of patients with T2D achieving target glycemic control with SSI therapy, defined as mean hospital BG between 70 and 180 mg/dL without hypoglycemia <70 mg/dL during hospital stay. This threshold was determined based on 2019 ADA recommendations targeting hospital BG <180 mg/dL and avoidance of hypoglycemia.23
Statistical Analysis
Patients were stratified according to continuous SSI versus transitioned to basal treatment. Patients who remained on continuous SSI were further categorized into four categories based on admission BG: <140 mg/dL, 140 to 180 mg/dL, 180 to 250 mg/dL, and ≥250 mg/dL. Clinical characteristics were compared using Wilcoxon rank-sum tests (if continuous) and chi-square tests or Fisher exact tests (if categorical). We then compared the clinical outcomes among continuous SSI patients with different admission BG levels (<140 mg/dL, 140-180 mg/dL, 180-250 mg/dL, and ≥250 mg/dL) and with different HbA1c levels (<7%, 7%-8%, 8%-9%, ≥9%). Within each scenario, logistic regression for the outcome of poor glycemic control, defined as mean hospital BG >180 mg/dL, was performed to evaluate the HbA1c levels and admission BG levels controlling for other factors (age, gender, body mass index [BMI], race, setting [medicine versus surgery] and Charlson Comorbidity Index score). A P value < .05 was regarded as statistically significant. All analyses were performed based on available cases and conducted in SAS version 9.4 (SAS Institute Inc.).
RESULTS
Among 25,813 adult patients with T2D, 8,095 patients (31.4%) were treated with SSI alone during the first 2 days of hospitalization. Of those patients treated with SSI, 6,903 (85%) remained on continuous SSI alone during the entire hospitalization, and 1,192 (15%) were transitioned to basal insulin. The clinical characteristics of these patients on continuous SSI and those who transitioned to basal insulin are shown in Table 1. Patients who transitioned to basal insulin had significantly higher mean (SD) admission BG (191.8 [88.2] mg/dL vs 156.4 [65.4] mg/dL, P < .001) and higher mean (SD) HbA1c (8.1% [2.0%] vs 7.01% [1.5%], P < .001), compared with those who remained on continuous SSI. Patients who transitioned to basal insulin were also younger and more likely to have chronic kidney disease (CKD), but less likely to have congestive heart failure, coronary artery disease, or chronic obstructive pulmonary disease (COPD). The Charlson Comorbidity Index score was significantly higher for patients who transitioned to basal (4.4 [2.5]) than for those who remained on continuous SSI (4.1 [2.5], P < .001). There were no significant differences among sex, BMI, or glomerular filtration rate (GFR) on admission. Of those transitioned to basal insulin, 53% achieved a mean hospitalization BG <180 mg/dL, compared with 82% of those on continuous SSI. The overall rate of hypoglycemia in the continuous SSI group was 8% compared with 18% in those transitioned to basal insulin.
Of the patients who remained on continuous SSI throughout the hospitalization, 3,319 patients (48%) had admission BG <140 mg/dL, 1,671 patients (24%) had admission BG 140 to 180 mg/dL, and 1,913 patients (28%) had admission BG >180 mg/dL. Only 9% of patients who remained on continuous SSI had admission BG ≥250 mg/dL. Patients with admission BG <140 mg/dL were older, had lower BMI and HbA1c, had higher rates of COPD and CKD, and were more likely to be admitted to a surgical service compared with patients with admission BG >140 mg/dL (P < .05 for all; Table 2).
Hospital glycemic control for patients on continuous SSI according to admission BG is displayed in Table 3. Among patients who remained on continuous SSI, 96% of patients with admission BG <140 mg/dL had a mean hospital BG <180 mg/dL; of them, 86% achieved target control without hypoglycemia. Similar rates of target control were achieved in patients with admission BG 140 to 180 mg/dL (83%), in contrast to patients with admission BG ≥250 mg/dL, of whom only 18% achieved target control (P < .001). These findings parallel those seen in patients transitioned to basal insulin. Of patients in the transition group admitted with BG <140 mg/dL and <180 mg/dL, 88.5% and 84.6% had mean hospital BG <180 mg/dL, respectively, while 69.1% and 68.9% had mean BG between 70 and 180 mg/dL without hypoglycemia. The overall frequency of hypoglycemia <70 mg/dL among patients on continuous SSI was 8% and was more common in patients with admission BG <140 mg/dL (10%) compared with patients with higher admission glucose levels (BG 140-180 mg/dL [4%], 180-250 mg/dL [4%], or ≥250 mg/dL [6%], P < .001). There was no difference in rates of severe hypoglycemia <40 mg/dL among groups.
HbA1c data were available for 2,560 of the patients on continuous SSI (Table 3). Mean hospital BG increased significantly with increasing HbA1c values. Patients admitted with HbA1c <7% had lower mean (SD) hospital BG (132.2 [28.2] mg/dL) and were more likely to achieve target glucose control during hospitalization (85%) compared with those with HbA1c 7% to 8% (mean BG, 148.7 [30.8] mg/dL; 80% target control), HbA1c 8% to 9% (mean BG, 169.1 [37.9] mg/dL; 61% target control), or HbA1c ≥9% (mean BG, 194.9 [53.4] mg/dL; 38% target control) (P < .001).
In a logistic regression analysis adjusted for age, gender, BMI, race, setting (medicine vs surgery), and Charlson Comorbidity Index score, the odds of poor glycemic control increased with higher admission BG (admission BG 140-180 mg/dL: odds ratio [OR], 1.8; 95% CI, 1.5-2.2; admission BG 180-250 mg/dL: OR, 3.7; 95% CI, 3.1-4.4; admission BG ≥250 mg/dL: OR, 7.2; 95% CI, 5.8-9.0; reference admission BG <140 mg/dL; Figure). Similarly, the logistic regression analysis showed greater odds of poor in-hospital glycemic control with increasing HbA1c (OR, 6.1; 95% CI, 4.3-8.8 for HbA1c >9% compared with HbA1c <7%).
DISCUSSION
This large retrospective cohort study examined the effectiveness of SSI for glycemic control in noncritical inpatients with T2D. Our results indicate that SSI is still widely used in our hospital system, with 31.4% of our initial cohort managed with SSI alone. We found that 86% of patients with BG <140 mg/dL and 83% of patients with BG 140 to 180 mg/dL achieved glycemic control without hypoglycemia when managed with SSI alone, compared with 53% of those admitted with BG 180 to 250 mg/dL and only 18% of those with admission BG ≥250 mg/dL. This high success rate of achieving optimal BG control with SSI alone is comparable to that seen with transition to basal insulin and may explain the prevalent use of SSI for the management of patients with T2D and mild to moderate hyperglycemia.
Published clinical guideline recommendations promoting the use of basal-bolus insulin treatment algorithms are based on the results of a few RCTs that compared the efficacy of SSI vs a basal-bolus insulin regimen. These studies reported significantly lower mean daily BG concentrations with basal or basal-bolus insulin therapy compared with SSI.10,11,24 Notably, the mean admission BG of patients treated with SSI in these RCTs ranged from 184 to 225 mg/dL, and patients were excluded if admission BG was <140 mg/dL.10,11,24 This is in contrast to our real-world data from non–critically ill settings, in which 48% of patients treated with SSI had admission BG <140 mg/dL and nearly 75% had admission BG <180 mg/dL. This suggests that, by design, most RCTs excluded the very patients who achieve good glycemic control with SSI, which may have contributed to the perception that basal insulin is preferable in all populations.
Our analysis indicates that healthcare professionals should consider admission BG when selecting the type of insulin regimen to manage patients with T2D in the hospital. Our results suggest that SSI may be appropriate for many patients with admission BG <180 mg/dL and should be avoided as monotherapy in patients with admission BG ≥180 mg/dL, as the proportion of patients achieving target control decreased with increasing admission BG. More importantly, if a patient is not controlled with SSI alone, intensification of therapy with the addition of basal insulin is indicated to achieve glycemic control. In addition, admission HbA1c is a useful marker to consider: hospital glycemic control deteriorated with increasing HbA1c values, paralleling the pattern seen with admission BG. The main limitation to widespread use of HbA1c for therapeutic decision-making is access to values at the time of admission; in our population, only 37% of patients had an HbA1c value available during the index hospitalization.
Previous publications have reported that hypoglycemia carries significant safety concerns, especially among hospitalized patients.25-27 As such, we included hypoglycemia as an important metric in our definition of target glycemic control rather than relying solely on mean hospital BG or the number of hyperglycemic events to define treatment effectiveness. We did find a higher rate of hypoglycemia in patients with lower admission BG treated with SSI compared with those with higher admission BG; however, few patients overall experienced clinically significant (<54 mg/dL) or severe (<40 mg/dL) hypoglycemia.
In our population, only 15% of patients started on SSI received additional basal insulin during hospitalization. This finding is similar to data reported in the RABBIT 2 trial, in which 14% of patients failed SSI alone, with a higher failure rate among those with higher BG on admission.10 Given the observational nature of this study, we cannot definitively state why certain patients in our population required additional basal insulin, but we hypothesize that patients admitted with BG ≥180 mg/dL failed SSI more often and experienced greater rates of hyperglycemia, and therefore received intensified insulin therapy as clinically indicated at the discretion of the treating physician. Patients who transitioned from SSI to basal insulin had significantly higher admission BG and HbA1c compared with patients who remained on SSI alone. We noted that the rates of hypoglycemia were higher in the group that transitioned to basal insulin (18% vs 8%) and similar to rates reported in previous RCTs.11,24
This observational study takes advantage of a large, diverse study population and a combination of medicine and surgery patients in a real-world setting. We acknowledge several limitations. Our primary data were observational, and as such, some baseline patient characteristics differed notably between groups, suggesting selection bias in treatment allocation to SSI. We do not know which patients were managed by primary teams versus specialized diabetes consult services, which may also have influenced treatment regimens. We did not have access to information about patients’ at-home diabetes medication regimens or duration of diabetes, both of which have been shown in prior publications to affect an individual’s overall hospital glycemic control. HbA1c values were available for only approximately one-third of patients. In addition, our study did not include patients without a history of diabetes who developed stress-induced hyperglycemia, a population that may benefit from conservative therapy such as SSI.28 A diagnosis of CKD was defined based on ICD-9/10 codes rather than on admission estimated glomerular filtration rate, and more specific data regarding CKD stage or changes in renal function over the hospitalization were not available, which could influence insulin prescribing practice. Finally, we defined the basal group as patients prescribed any form of basal insulin (NPH, glargine, detemir, or degludec), and we do not have information on the use of prandial versus correction doses of rapid-acting insulin in the basal insulin–treated group.
CONCLUSION
In conclusion, our observational study indicates that the use of SSI results in appropriate target glycemic control for most noncritical medicine and surgery patients with admission BG <180 mg/dL. In agreement with previous RCTs, our study confirms that SSI as monotherapy is frequently inadequate in patients with significant hyperglycemia (BG >180 mg/dL).10,11,24,29 We propose that an individualized approach to inpatient glycemic management is imperative, and cautious use of SSI may be a viable option for certain patients with mild hyperglycemia and admission BG <180 mg/dL. Further observational and randomized studies are needed to confirm the efficacy of SSI therapy in patients with T2D and mild hyperglycemia. By identifying which subset of patients can be safely managed with SSI alone, we can better understand which patients will require escalation of therapy with intensive glucose management.
1. Umpierrez GE, Palacio A, Smiley D. Sliding scale insulin use: myth or insanity? Am J Med. 2007;120(7):563-567. https://doi.org/10.1016/j.amjmed.2006.05.070
2. Kitabchi AE, Ayyagari V, Guerra SM. The efficacy of low-dose versus conventional therapy of insulin for treatment of diabetic ketoacidosis. Ann Intern Med. 1976;84(6):633-638. https://doi.org/10.7326/0003-4819-84-6-633
3. Skyler JS, Skyler DL, Seigler DE, O’Sullivan MJ. Algorithms for adjustment of insulin dosage by patients who monitor blood glucose. Diabetes Care. 1981;4(2):311-318. https://doi.org/10.2337/diacare.4.2.311
4. Gearhart JG, Duncan JL 3rd, Replogle WH, Forbes RC, Walley EJ. Efficacy of sliding-scale insulin therapy: a comparison with prospective regimens. Fam Pract Res J. 1994;14(4):313-322.
5. Queale WS, Seidler AJ, Brancati FL. Glycemic control and sliding scale insulin use in medical inpatients with diabetes mellitus. Arch Intern Med. 1997;157(5):545-552.
6. Clement S, Braithwaite SS, Magee MF, et al. Management of diabetes and hyperglycemia in hospitals. Diabetes Care. 2004;27(2):553-591. https://doi.org/10.2337/diacare.27.2.553
7. Garber AJ, Moghissi ES, Bransome ED Jr, et al. American College of Endocrinology position statement on inpatient diabetes and metabolic control. Endocr Pract. 2004;10(1):78-82. https://doi.org/10.4158/EP.10.1.77
8. American Diabetes Association. Standards of medical care in diabetes. Diabetes Care. 2005;28(suppl 1):S4-S36.
9. Umpierrez GE, Hellman R, Korytkowski MT, et al. Management of hyperglycemia in hospitalized patients in non-critical care setting: an Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2012;97(1):16-38. https://doi.org/10.1210/jc.2011-2098
10. Umpierrez GE, Smiley D, Zisman A, et al. Randomized study of basal-bolus insulin therapy in the inpatient management of patients with type 2 diabetes. Diabetes Care. 2007;30(9):2181-2186. https://doi.org/10.2337/dc07-0295
11. Umpierrez GE, Smiley D, Jacobs S, et al. Randomized study of basal-bolus insulin therapy in the inpatient management of patients with type 2 diabetes undergoing general surgery (RABBIT 2 surgery). Diabetes Care. 2011;34(2):256-261. https://doi.org/10.2337/dc10-1407
12. Schroeder JE, Liebergall M, Raz I, Egleston R, Ben Sussan G, Peyser A. Benefits of a simple glycaemic protocol in an orthopaedic surgery ward: a randomized prospective study. Diabetes Metab Res Rev. 2012;28:71-75. https://doi.org/10.1002/dmrr.1217
13. Lee YY, Lin YM, Leu WJ, et al. Sliding-scale insulin used for blood glucose control: a meta-analysis of randomized controlled trials. Metabolism. 2015;64(9):1183-1192. https://doi.org/10.1016/j.metabol.2015.05.011
14. Christensen MB, Gotfredsen A, Nørgaard K. Efficacy of basal-bolus insulin regimens in the inpatient management of non-critically ill patients with type 2 diabetes: a systematic review and meta-analysis. Diabetes Metab Res Rev. 2017;33(5):e2885. https://doi.org/10.1002/dmrr.2885
15. Wexler DJ, Meigs JB, Cagliero E, Nathan DM, Grant RW. Prevalence of hyper- and hypoglycemia among inpatients with diabetes: a national survey of 44 U.S. hospitals. Diabetes Care. 2007;30(2):367-369. https://doi.org/10.2337/dc06-1715
16. Moreira ED Jr, Silveira PCB, Neves RCS, Souza C Jr, Nunes ZO, Almeida MdCC. Glycemic control and diabetes management in hospitalized patients in Brazil. Diabetol Metab Syndr. 2013;5(1):62. https://doi.org/10.1186/1758-5996-5-62
17. Akhtar ST, Mahmood K, Naqvi IH, Vaswani AS. Inpatient management of type 2 diabetes mellitus: does choice of insulin regimen really matter? Pakistan J Med Sci. 2014;30(4):895-898.
18. Gómez Cuervo C, Sánchez Morla A, Pérez-Jacoiste Asín MA, Bisbal Pardo O, Pérez Ordoño L, Vila Santos J. Effective adverse event reduction with bolus-basal versus sliding scale insulin therapy in patients with diabetes during conventional hospitalization: systematic review and meta-analysis. Endocrinol Nutr. 2016;63(4):145-156. https://doi.org/10.1016/j.endonu.2015.11.008
19. Bain A, Hasan SS, Babar ZUD. Interventions to improve insulin prescribing practice for people with diabetes in hospital: a systematic review. Diabet Med. 2019;36(8):948-960. https://doi.org/10.1111/dme.13982
20. Ambrus DB, O’Connor MJ. Things We Do For No Reason: sliding-scale insulin as monotherapy for glycemic control in hospitalized patients. J Hosp Med. 2019;14(2):114-116. https://doi.org/10.12788/jhm.3109
21. Nau KC, Lorenzetti RC, Cucuzzella M, Devine T, Kline J. Glycemic control in hospitalized patients not in intensive care: beyond sliding-scale insulin. Am Fam Physician. 2010;81(9):1130-1135.
22. Colunga-Lozano LE, Gonzalez Torres FJ, Delgado-Figueroa N, et al. Sliding scale insulin for non-critically ill hospitalised adults with diabetes mellitus. Cochrane Database Syst Rev. 2018;11(11):CD011296. https://doi.org/10.1002/14651858.CD011296.pub2
23. American Diabetes Association. Diabetes care in the hospital: Standards of Medical Care in Diabetes—2019. Diabetes Care. 2019;42(suppl 1):S173-S181. https://doi.org/10.2337/dc19-S015
24. Umpierrez GE, Smiley D, Hermayer K, et al. Randomized study comparing a basal-bolus with a basal plus correction management of medical and surgical patients with type 2 diabetes: basal plus trial. Diabetes Care. 2013;36(8):2169-2174. https://doi.org/10.2337/dc12-1988
25. Turchin A, Matheny ME, Shubina M, Scanlon SV, Greenwood B, Pendergrass ML. Hypoglycemia and clinical outcomes in patients with diabetes hospitalized in the general ward. Diabetes Care. 2009;32(7):1153-1157. https://doi.org/10.2337/dc08-2127
26. Garg R, Hurwitz S, Turchin A, Trivedi A. Hypoglycemia, with or without insulin therapy, is associated with increased mortality among hospitalized patients. Diabetes Care. 2013;36(5):1107-1110. https://doi.org/10.2337/dc12-1296
27. Zapatero A, Gómez-Huelgas R, González N, et al. Frequency of hypoglycemia and its impact on length of stay, mortality, and short-term readmission in patients with diabetes hospitalized in internal medicine wards. Endocr Pract. 2014;20(9):870-875. https://doi.org/10.4158/EP14006.OR
28. Umpierrez GE, Isaacs SD, Bazargan N, You X, Thaler LM, Kitabchi AE. Hyperglycemia: an independent marker of in-hospital mortality in patients with undiagnosed diabetes. J Clin Endocrinol Metab. 2002;87(3):978-982. https://doi.org/10.1210/jcem.87.3.8341
29. Dickerson LM, Ye X, Sack JL, Hueston WJ. Glycemic control in medical inpatients with type 2 diabetes mellitus receiving sliding scale insulin regimens versus routine diabetes medications: a multicenter randomized controlled trial. Ann Fam Med. 2003;1(1):29-35. https://doi.org/10.1370/afm.2
Identifying the Sickest During Triage: Using Point-of-Care Severity Scores to Predict Prognosis in Emergency Department Patients With Suspected Sepsis
Sepsis is the leading cause of in-hospital mortality in the United States.1 Sepsis is present on admission in 85% of cases, and each hour delay in antibiotic treatment is associated with 4% to 7% increased odds of mortality.2,3 Prompt identification and treatment of sepsis is essential for reducing morbidity and mortality, but identifying sepsis during triage is challenging.2
Risk stratification scores that rely solely on data readily available at the bedside have been developed to quickly identify those at greatest risk of poor outcomes from sepsis in real time. The quick Sequential Organ Failure Assessment (qSOFA) score, the National Early Warning Score 2 (NEWS2), and the Shock Index are easy-to-calculate measures that use routinely collected clinical data that are not subject to laboratory delay. These scores can be incorporated into electronic health record (EHR)-based alerts and can be calculated longitudinally to track the risk of poor outcomes over time. qSOFA was developed to quantify patient risk at bedside in non-intensive care unit (ICU) settings, but there is no consensus about its ability to predict adverse outcomes such as mortality and ICU admission.4-6 The United Kingdom’s National Health Service uses NEWS2 to identify patients at risk for sepsis.7 NEWS has been shown to have similar or better sensitivity in identifying poorer outcomes in sepsis patients compared with systemic inflammatory response syndrome (SIRS) criteria and qSOFA.4,8-11 However, since the latest update of NEWS2 in 2017, there has been little study of its predictive ability. The Shock Index is a simple bedside score (heart rate divided by systolic blood pressure) that was developed to detect changes in cardiovascular performance before systemic shock onset. Although it was not developed for infection and has not been regularly applied in the sepsis literature, the Shock Index might be useful for identifying patients at increased risk of poor outcomes. Patients with higher and sustained Shock Index scores are more likely to experience morbidity, such as hyperlactatemia, vasopressor use, and organ failure, and also have an increased risk of mortality.12-14
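To illustrate how little computation the Shock Index requires at the bedside, the sketch below implements the ratio described above in Python. The function names and the ≥0.7 positivity threshold are illustrative assumptions; institutions may use a different cut-point.

```python
# Minimal sketch: Shock Index = heart rate / systolic blood pressure.
# The 0.7 positivity threshold is an illustrative assumption, not a value
# taken from this study.
def shock_index(heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
    if systolic_bp_mmhg <= 0:
        raise ValueError("systolic blood pressure must be positive")
    return heart_rate_bpm / systolic_bp_mmhg

def shock_index_positive(heart_rate_bpm: float, systolic_bp_mmhg: float,
                         threshold: float = 0.7) -> bool:
    return shock_index(heart_rate_bpm, systolic_bp_mmhg) >= threshold

# Example: a heart rate of 110 bpm with a systolic pressure of 95 mm Hg
# gives a Shock Index of about 1.16, well above the illustrative threshold.
```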
Although the predictive abilities of these bedside risk stratification scores have been assessed individually using standard binary cut-points, the comparative performance of qSOFA, the Shock Index, and NEWS2 has not been evaluated in patients presenting to an emergency department (ED) with suspected sepsis.
METHODS
Design and Setting
We conducted a retrospective cohort study of ED patients who presented with suspected sepsis to the University of California San Francisco (UCSF) Helen Diller Medical Center at Parnassus Heights between June 1, 2012, and December 31, 2018. Our institution is a 785-bed academic teaching hospital with approximately 30,000 ED encounters per year. The study was approved with a waiver of informed consent by the UCSF Human Research Protection Program.
Participants
We use an Epic-based EHR platform (Epic 2017, Epic Systems Corporation) for clinical care, which was implemented on June 1, 2012. All data elements were obtained from Clarity, the relational database that stores Epic’s inpatient data. The study included encounters for patients age ≥18 years who had blood cultures ordered within 24 hours of ED presentation and administration of intravenous antibiotics within 24 hours. Repeat encounters were treated independently in our analysis.
Outcomes and Measures
We compared the ability of qSOFA, the Shock Index, and NEWS2 to predict in-hospital mortality and admission to the ICU from the ED (ED-to-ICU admission). We used the
We compared demographic and clinical characteristics of patients who were positive for qSOFA, the Shock Index, and NEWS2. Demographic data were extracted from the EHR and included primary language, age, sex, and insurance status. All International Classification of Diseases (ICD)-9/10 diagnosis codes were pulled from Clarity billing tables. We used the Elixhauser comorbidity groupings19 of ICD-9/10 codes present on admission to identify preexisting comorbidities and underlying organ dysfunction. To estimate burden of comorbid illnesses, we calculated the validated van Walraven comorbidity index,20 which provides an estimated risk of in-hospital death based on documented Elixhauser comorbidities. Admission level of care (acute, stepdown, or intensive care) was collected for inpatient admissions to assess initial illness severity.21 We also evaluated discharge disposition and in-hospital mortality. Index blood culture results were collected, and dates and timestamps of mechanical ventilation, fluid, vasopressor, and antibiotic administration were obtained for the duration of the encounter.
UCSF uses an automated, real-time, algorithm-based severe sepsis alert that is triggered when a patient meets ≥2 SIRS criteria and again when the patient meets severe sepsis or septic shock criteria (ie, ≥2 SIRS criteria in addition to end-organ dysfunction and/or fluid nonresponsive hypotension). This sepsis screening alert was in use for the duration of our study.22
Statistical Analysis
We performed a subgroup analysis among those who were diagnosed with sepsis, according to the 2016 Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) criteria.
All statistical analyses were conducted using Stata 14 (StataCorp). We summarized differences in demographic and clinical characteristics among the populations meeting each severity score but elected not to conduct hypothesis testing because patients could be positive for one or more scores. We calculated sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for each score to predict in-hospital mortality and ED-to-ICU admission. To allow comparison with other studies, we also created a composite outcome of either in-hospital mortality or ED-to-ICU admission.
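The test characteristics named above reduce to counts from a two-by-two table of score positivity against the observed outcome. The sketch below shows one way to compute them in Python; the array names are hypothetical and stand in for a binary score indicator (for example, qSOFA ≥2) and a binary outcome (for example, in-hospital death).

```python
# Minimal sketch of sensitivity, specificity, PPV, and NPV for a binary
# score indicator versus a binary outcome. Array names are hypothetical.
import numpy as np

def test_characteristics(score_positive: np.ndarray, outcome: np.ndarray) -> dict:
    score_positive = score_positive.astype(bool)
    outcome = outcome.astype(bool)
    tp = np.sum(score_positive & outcome)    # true positives
    fp = np.sum(score_positive & ~outcome)   # false positives
    fn = np.sum(~score_positive & outcome)   # false negatives
    tn = np.sum(~score_positive & ~outcome)  # true negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# The composite outcome described above can be formed with a logical OR:
# composite = in_hospital_death | ed_to_icu_admission
```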
RESULTS
Within our sample, 23,837 ED patients had blood cultures ordered within 24 hours of ED presentation and were considered to have suspected sepsis. The mean age of the cohort was 60.8 years, and 1,612 (6.8%) had positive blood cultures. A total of 12,928 patients (54.2%) were found to have sepsis. We documented 1,427 in-hospital deaths (6.0%) and 3,149 (13.2%) ED-to-ICU admissions. At ED triage, 1,921 (8.1%) were qSOFA-positive, 4,273 (17.9%) were Shock Index-positive, and 11,832 (49.6%) were NEWS2-positive. At ED triage, blood pressure, heart rate, respiratory rate, and oxygen saturation were documented in >99% of patients, 93.5% had temperature documented, and 28.5% had a Glasgow Coma Scale (GCS) score recorded. When the window of assessment was widened to 1 hour, GCS was documented in only 44.2% of those with suspected sepsis.
Demographic Characteristics and Clinical Course
qSOFA-positive patients received antibiotics more quickly than those who were Shock Index-positive or NEWS2-positive (median 1.5, 1.8, and 2.8 hours after admission, respectively). In addition, those who were qSOFA-positive were more likely to have a positive blood culture (10.9%, 9.4%, and 8.5%, respectively) and to receive an EHR-based diagnosis of sepsis (77.0%, 69.6%, and 60.9%, respectively) than those who were Shock Index- or NEWS2-positive. Those who were qSOFA-positive also were more likely to be mechanically ventilated during their hospital stay (25.4%, 19.2%, and 10.8%, respectively) and to receive vasopressors (33.5%, 22.5%, and 12.2%, respectively). In-hospital mortality also was more common among those who were qSOFA-positive at triage (23.4%, 15.3%, and 9.2%, respectively).
Because both qSOFA and NEWS2 incorporate GCS, we explored baseline characteristics of patients with GCS documented at triage (n = 6,794). Compared with patients without a documented GCS, these patients were older (median age, 63 vs 61 years; P < .0001), more likely to be male (54.9% vs 53.4%; P = .0031), more likely to have renal failure (22.8% vs 20.1%; P < .0001), more likely to have liver disease (14.2% vs 12.8%; P = .006), had a higher van Walraven comorbidity score on presentation (median, 10 vs 8; P < .0001), and were more likely to go directly to the ICU from the ED (20.2% vs 10.6%; P < .0001). However, among the 6,397 GCS scores documented at triage, only 1,579 (24.7%) were abnormal.
Test Characteristics of qSOFA, Shock Index, and NEWS2 for Predicting In-hospital Mortality and ED-to-ICU Admission
Among 23,837 patients with suspected sepsis, NEWS2 had the highest sensitivity for predicting in-hospital mortality (76.0%; 95% CI, 73.7%-78.2%) and ED-to-ICU admission (78.9%; 95% CI, 77.5%-80.4%) but had the lowest specificity for in-hospital mortality (52.0%; 95% CI, 51.4%-52.7%) and for ED-to-ICU admission (54.8%; 95% CI, 54.1%-55.5%) (Table 3). qSOFA had the lowest sensitivity for in-hospital mortality (31.5%; 95% CI, 29.1%-33.9%) and ED-to-ICU admission (29.3%; 95% CI, 27.7%-30.9%) but the highest specificity for in-hospital mortality (93.4%; 95% CI, 93.1%-93.8%) and ED-to-ICU admission (95.2%; 95% CI, 94.9%-95.5%). The Shock Index had a sensitivity that fell between qSOFA and NEWS2 for in-hospital mortality (45.8%; 95% CI, 43.2%-48.5%) and ED-to-ICU admission (49.2%; 95% CI, 47.5%-51.0%). The specificity of the Shock Index also was between qSOFA and NEWS2 for in-hospital mortality (83.9%; 95% CI, 83.4%-84.3%) and ED-to-ICU admission (86.8%; 95% CI, 86.4%-87.3%). All three scores exhibited relatively low PPV, ranging from 9.2% to 23.4% for in-hospital mortality and 21.0% to 48.0% for ED-to-ICU admission. Conversely, all three scores exhibited relatively high NPV, ranging from 95.5% to 97.1% for in-hospital mortality and 89.8% to 94.5% for ED-to-ICU admission.
When considering a binary cutoff, the Shock Index exhibited the highest area under the receiver operating characteristic curve (AUROC) for in-hospital mortality (0.648; 95% CI, 0.635-0.662) and had a significantly higher AUROC than qSOFA (AUROC, 0.625; 95% CI, 0.612-0.637; P = .0005), but there was no difference compared with NEWS2 (AUROC, 0.640; 95% CI, 0.628-0.652; P = .2112). NEWS2 had a significantly higher AUROC than qSOFA for predicting in-hospital mortality (P = .0227). The Shock Index also exhibited the highest AUROC for ED-to-ICU admission (0.680; 95% CI, 0.617-0.689), which was significantly higher than the AUROC for qSOFA (P < .0001) and NEWS2 (P = .0151). NEWS2 had a significantly higher AUROC than qSOFA for predicting ED-to-ICU admission (P < .0001). Similar findings were seen in patients found to have sepsis.
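For a score dichotomized at a single cut-point, the AUROC values reported above can be reproduced with standard library calls; the pairwise comparisons of correlated AUROCs in the study used the DeLong approach,26 which is not shown in this sketch. The data frame and column names below are hypothetical.

```python
# Minimal sketch: AUROC of each binary score indicator against an outcome.
# Assumes a hypothetical pandas DataFrame `df` with 0/1 columns qsofa_pos,
# shock_index_pos, news2_pos and an outcome column such as died.
import pandas as pd
from sklearn.metrics import roc_auc_score

def binary_score_aurocs(df: pd.DataFrame, outcome: str = "died") -> pd.Series:
    scores = ["qsofa_pos", "shock_index_pos", "news2_pos"]
    return pd.Series({s: roc_auc_score(df[outcome], df[s]) for s in scores})
```

For a binary indicator, this AUROC equals the average of sensitivity and specificity, which is why the ordering of the scores here tracks the test characteristics reported above.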
DISCUSSION
In this retrospective cohort study of 23,837 patients who presented to the ED with suspected sepsis, the standard qSOFA threshold was met least frequently, followed by the Shock Index and NEWS2. NEWS2 had the highest sensitivity but the lowest specificity for predicting in-hospital mortality and ED-to-ICU admission, making it a challenging bedside risk stratification scale for identifying patients at risk of poor clinical outcomes. When comparing predictive performance among the three scales, qSOFA had the highest specificity and the Shock Index had the highest AUROC for in-hospital mortality and ED-to-ICU admission in this cohort of patients with suspected sepsis. These trends in sensitivity, specificity, and AUROC were consistent among those who met EHR criteria for a sepsis diagnosis. In the analysis of the three scoring systems using all available cut-points, qSOFA and NEWS2 had the highest AUROCs, followed by the Shock Index.
Considering the rapid progression from organ dysfunction to death in sepsis patients, as well as the difficulty establishing a sepsis diagnosis at triage,23 providers must quickly identify patients at increased risk of poor outcomes when they present to the ED. Sepsis alerts often are built using SIRS criteria,27 including the one used for sepsis surveillance at UCSF since 2012,22 but the white blood cell count criterion is subject to a laboratory lag and could lead to a delay in identification. Implementation of a point-of-care bedside score alert that uses readily available clinical data could allow providers to identify patients at greatest risk of poor outcomes immediately at ED presentation and triage, which motivated us to explore the predictive performance of qSOFA, the Shock Index, and NEWS2.
Our study is the first to provide a head-to-head comparison of the predictive performance of qSOFA, the Shock Index, and NEWS2, three easy-to-calculate bedside risk scores that use EHR data collected among patients with suspected sepsis. The Sepsis-3 guidelines recommend qSOFA to quickly identify non-ICU patients at greatest risk of poor outcomes because the measure exhibited predictive performance similar to the more extensive SOFA score outside the ICU.16,23 Although some studies have confirmed qSOFA’s high predictive performance,28-31 our test characteristics and AUROC findings are in line with other published analyses.4,6,10,17 The UK National Health Service is using NEWS2 to screen for patients at risk of poor outcomes from sepsis. Several analyses that assessed the predictive ability of NEWS have reported estimates in line with our findings.4,10,32 The Shock Index was introduced in 1967 and provided a metric to evaluate hemodynamic stability based on heart rate and systolic blood pressure.33 The Shock Index has been studied in several contexts, including sepsis,34 and studies show that a sustained Shock Index is associated with increased odds of vasopressor administration, higher prevalence of hyperlactatemia, and increased risk of poor outcomes in the ICU.13,14
For our study, we were particularly interested in exploring how the Shock Index would compare with more frequently used severity scores such as qSOFA and NEWS2 among patients with suspected sepsis, given the simplicity of its calculation and the ready availability of the required data. In our cohort of 23,837 patients, only 159 were missing a blood pressure measurement and only 71 were missing a heart rate. In contrast, both qSOFA and NEWS2 include an assessment of level of consciousness, which can be subject to variability in assessment methods and EHR documentation across institutions.11 In our cohort, GCS within 30 minutes of ED presentation was missing in 72% of patients, which could have led to incomplete calculation of qSOFA and NEWS2 if a missing value was not actually within normal limits.
Several investigations relate qSOFA to NEWS but few compare qSOFA with the newer NEWS2, and even fewer evaluate the Shock Index with any of these scores.10,11,18,29,35-37 In general, studies have shown that NEWS exhibits a higher AUROC for predicting mortality, sepsis with organ dysfunction, and ICU admission, often as a composite outcome.4,11,18,37,38 A handful of studies compare the Shock Index to SIRS; however, little has been done to compare the Shock Index to qSOFA or NEWS2, scores that have been used specifically for sepsis and might be more predictive of poor outcomes than SIRS.33 In our study, the Shock Index had a higher AUROC than either qSOFA or NEWS2 for predicting in-hospital mortality and ED-to-ICU admission measured as separate outcomes and as a composite outcome using standard cut-points for these scores.
When selecting a severity score to apply in an institution, it is important to carefully evaluate the score’s test characteristics, in addition to considering the availability of reliable data. Tests with high sensitivity and NPV for the population being studied can be useful to rule out disease or risk of a poor outcome, while tests with high specificity and PPV can be useful to rule in disease or risk of a poor outcome.39 When considering specificity, qSOFA’s performance was superior to the Shock Index and NEWS2 in our study, but only a small percentage of the population was identified using a cut-point of qSOFA ≥2. If we used qSOFA and applied this standard cut-point at our institution, we could be confident that those identified were at increased risk, but we would miss a significant number of patients who would experience a poor outcome. When considering sensitivity, performance of NEWS2 was superior to qSOFA and the Shock Index in our study, but one-half of the population was identified using a cut-point of NEWS2 ≥5. If we were to apply this standard NEWS2 cut-point at our institution, we would assume that one-half of our population was at risk, which might drive resource use toward patients who will not experience a poor outcome. Although none of the scores exhibited a robust AUROC, the Shock Index had the highest AUROC for in-hospital mortality and ED-to-ICU admission when using the standard binary cut-point, and its sensitivity and specificity fall between those of qSOFA and NEWS2, potentially making it a useful score in settings where qSOFA and NEWS2 components, such as altered mentation, are not reliably collected. Finally, our sensitivity analysis varying the binary cut-point of each score within our population demonstrated that the standard cut-points might not be as useful within a specific population and might need to be tailored for implementation, balancing sensitivity, specificity, PPV, and NPV to meet local priorities and ICU capacity. One practical way to do this is to sweep candidate thresholds and inspect the resulting trade-offs, as sketched below.
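The sketch below illustrates such a threshold sweep for a continuous NEWS2 total, reporting sensitivity and specificity at each candidate cut-point; the array names and data layout are hypothetical.

```python
# Minimal sketch: sweep candidate NEWS2 cut-points and report the test
# characteristics at each, so a local threshold can be chosen deliberately.
# Assumes hypothetical arrays news2_total (integer scores) and outcome (0/1).
import numpy as np

def threshold_sweep(news2_total: np.ndarray, outcome: np.ndarray) -> list:
    outcome = outcome.astype(bool)
    rows = []
    for cut in range(int(news2_total.min()), int(news2_total.max()) + 1):
        positive = news2_total >= cut
        tp = np.sum(positive & outcome)
        fp = np.sum(positive & ~outcome)
        fn = np.sum(~positive & outcome)
        tn = np.sum(~positive & ~outcome)
        rows.append({
            "cut_point": cut,
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return rows
```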
Our study has limitations. It is a single-center, retrospective analysis, both of which could limit generalizability; however, it includes a large and diverse patient population spanning several years. Missing GCS data could have affected the predictive ability of qSOFA and NEWS2 in our cohort. We could not reliably impute GCS because of the high rate of missingness; therefore, we assumed missing values were normal, as was done in the Sepsis-3 derivation studies.16 Previous studies that attempted to impute GCS did not observe improved performance of qSOFA for predicting mortality.40 Because manually collected variables such as GCS are less reliably documented in the EHR, there might be limitations in their use for triage risk scores.
Although the current analysis focused on the predictive performance of qSOFA, the Shock Index, and NEWS2 at triage, the performance of these scores could affect the ED team’s treatment decisions before handoff to the hospitalist team and the expected level of care the patient will receive after inpatient admission. These scores also have the advantage of being easy to recalculate at the bedside over time, which could provide an objective, longitudinal assessment of predicted prognosis.
CONCLUSION
Local priorities should drive selection of a screening tool, balancing sensitivity, specificity, PPV, and NPV to achieve the institution’s goals. qSOFA, Shock Index, and NEWS2 are risk stratification tools that can be easily implemented at ED triage using data available at the bedside. Although none of these scores performed strongly when comparing AUROCs, qSOFA was highly specific for identifying patients with poor outcomes, and NEWS2 was the most sensitive for ruling out those at high risk among patients with suspected sepsis. The Shock Index exhibited a sensitivity and specificity that fell between qSOFA and NEWS2 and also might be considered to identify those at increased risk, given its ease of implementation, particularly in settings where altered mentation is unreliably or inconsistently documented.
Acknowledgment
The authors thank the UCSF Division of Hospital Medicine Data Core for their assistance with data acquisition.
1. Jones SL, Ashton CM, Kiehne LB, et al. Outcomes and resource use of sepsis-associated stays by presence on admission, severity, and hospital type. Med Care. 2016;54(3):303-310. https://doi.org/10.1097/MLR.0000000000000481
2. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. https://doi.org/10.1056/NEJMoa1703058
3. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. https://doi.org/10.1097/01.CCM.0000217961.75225.E9
4. Churpek MM, Snyder A, Sokol S, Pettit NN, Edelson DP. Investigating the impact of different suspicion of infection criteria on the accuracy of Quick Sepsis-Related Organ Failure Assessment, Systemic Inflammatory Response Syndrome, and Early Warning Scores. Crit Care Med. 2017;45(11):1805-1812. https://doi.org/10.1097/CCM.0000000000002648
5. Abdullah SMOB, Sørensen RH, Dessau RBC, Sattar SMRU, Wiese L, Nielsen FE. Prognostic accuracy of qSOFA in predicting 28-day mortality among infected patients in an emergency department: a prospective validation study. Emerg Med J. 2019;36(12):722-728. https://doi.org/10.1136/emermed-2019-208456
6. Kim KS, Suh GJ, Kim K, et al. Quick Sepsis-related Organ Failure Assessment score is not sensitive enough to predict 28-day mortality in emergency department patients with sepsis: a retrospective review. Clin Exp Emerg Med. 2019;6(1):77-83. https://doi.org/10.15441/ceem.17.294
7. National Early Warning Score (NEWS) 2: Standardising the assessment of acute-illness severity in the NHS. Royal College of Physicians; 2017.
8. Brink A, Alsma J, Verdonschot RJCG, et al. Predicting mortality in patients with suspected sepsis at the emergency department: a retrospective cohort study comparing qSOFA, SIRS and National Early Warning Score. PLoS One. 2019;14(1):e0211133. https://doi.org/10.1371/journal.pone.0211133
9. Redfern OC, Smith GB, Prytherch DR, Meredith P, Inada-Kim M, Schmidt PE. A comparison of the Quick Sequential (Sepsis-Related) Organ Failure Assessment Score and the National Early Warning Score in non-ICU patients with/without infection. Crit Care Med. 2018;46(12):1923-1933. https://doi.org/10.1097/CCM.0000000000003359
10. Churpek MM, Snyder A, Han X, et al. Quick Sepsis-related Organ Failure Assessment, Systemic Inflammatory Response Syndrome, and Early Warning Scores for detecting clinical deterioration in infected patients outside the intensive care unit. Am J Respir Crit Care Med. 2017;195(7):906-911. https://doi.org/10.1164/rccm.201604-0854OC
11. Goulden R, Hoyle MC, Monis J, et al. qSOFA, SIRS and NEWS for predicting inhospital mortality and ICU admission in emergency admissions treated as sepsis. Emerg Med J. 2018;35(6):345-349. https://doi.org/10.1136/emermed-2017-207120
12. Biney I, Shepherd A, Thomas J, Mehari A. Shock Index and outcomes in patients admitted to the ICU with sepsis. Chest. 2015;148(suppl 4):337A. https://doi.org/10.1378/chest.2281151
13. Wira CR, Francis MW, Bhat S, Ehrman R, Conner D, Siegel M. The shock index as a predictor of vasopressor use in emergency department patients with severe sepsis. West J Emerg Med. 2014;15(1):60-66. https://doi.org/10.5811/westjem.2013.7.18472
14. Berger T, Green J, Horeczko T, et al. Shock index and early recognition of sepsis in the emergency department: pilot study. West J Emerg Med. 2013;14(2):168-174. https://doi.org/10.5811/westjem.2012.8.11546
15. Middleton DJ, Smith TO, Bedford R, Neilly M, Myint PK. Shock Index predicts outcome in patients with suspected sepsis or community-acquired pneumonia: a systematic review. J Clin Med. 2019;8(8):1144. https://doi.org/10.3390/jcm8081144
16. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA. 2016;315(8):762-774. https://doi.org/10.1001/jama.2016.0288
17. Abdullah S, Sørensen RH, Dessau RBC, Sattar S, Wiese L, Nielsen FE. Prognostic accuracy of qSOFA in predicting 28-day mortality among infected patients in an emergency department: a prospective validation study. Emerg Med J. 2019;36(12):722-728. https://doi.org/10.1136/emermed-2019-208456
18. Usman OA, Usman AA, Ward MA. Comparison of SIRS, qSOFA, and NEWS for the early identification of sepsis in the Emergency Department. Am J Emerg Med. 2018;37(8):1490-1497. https://doi.org/10.1016/j.ajem.2018.10.058
19. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27. https://doi.org/10.1097/00005650-199801000-00004
20. van Walraven C, Austin PC, Jennings A, Quan H, Forster AJ. A modification of the Elixhauser comorbidity measures into a point system for hospital death using administrative data. Med Care. 2009;47(6):626-633. https://doi.org/10.1097/MLR.0b013e31819432e5
21. Prin M, Wunsch H. The role of stepdown beds in hospital care. Am J Respir Crit Care Med. 2014;190(11):1210-1216. https://doi.org/10.1164/rccm.201406-1117PP
22. Narayanan N, Gross AK, Pintens M, Fee C, MacDougall C. Effect of an electronic medical record alert for severe sepsis among ED patients. Am J Emerg Med. 2016;34(2):185-188. https://doi.org/10.1016/j.ajem.2015.10.005
23. Singer M, Deutschman CS, Seymour CW, et al. The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA. 2016;315(8):801-810. https://doi.org/10.1001/jama.2016.0287
24. Rhee C, Dantes R, Epstein L, et al. Incidence and trends of sepsis in US hospitals using clinical vs claims data, 2009-2014. JAMA. 2017;318(13):1241-1249. https://doi.org/10.1001/jama.2017.13836
25. Safari S, Baratloo A, Elfil M, Negida A. Evidence based emergency medicine; part 5 receiver operating curve and area under the curve. Emerg (Tehran). 2016;4(2):111-113.
26. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837-845.
27. Kangas C, Iverson L, Pierce D. Sepsis screening: combining Early Warning Scores and SIRS Criteria. Clin Nurs Res. 2021;30(1):42-49. https://doi.org/10.1177/1054773818823334
28. Freund Y, Lemachatti N, Krastinova E, et al. Prognostic accuracy of Sepsis-3 Criteria for in-hospital mortality among patients with suspected infection presenting to the emergency department. JAMA. 2017;317(3):301-308. https://doi.org/10.1001/jama.2016.20329
29. Finkelsztein EJ, Jones DS, Ma KC, et al. Comparison of qSOFA and SIRS for predicting adverse outcomes of patients with suspicion of sepsis outside the intensive care unit. Crit Care. 2017;21(1):73. https://doi.org/10.1186/s13054-017-1658-5
30. Canet E, Taylor DM, Khor R, Krishnan V, Bellomo R. qSOFA as predictor of mortality and prolonged ICU admission in Emergency Department patients with suspected infection. J Crit Care. 2018;48:118-123. https://doi.org/10.1016/j.jcrc.2018.08.022
31. Anand V, Zhang Z, Kadri SS, Klompas M, Rhee C; CDC Prevention Epicenters Program. Epidemiology of Quick Sequential Organ Failure Assessment criteria in undifferentiated patients and association with suspected infection and sepsis. Chest. 2019;156(2):289-297. https://doi.org/10.1016/j.chest.2019.03.032
32. Hamilton F, Arnold D, Baird A, Albur M, Whiting P. Early Warning Scores do not accurately predict mortality in sepsis: A meta-analysis and systematic review of the literature. J Infect. 2018;76(3):241-248. https://doi.org/10.1016/j.jinf.2018.01.002
33. Koch E, Lovett S, Nghiem T, Riggs RA, Rech MA. Shock Index in the emergency department: utility and limitations. Open Access Emerg Med. 2019;11:179-199. https://doi.org/10.2147/OAEM.S178358
34. Yussof SJ, Zakaria MI, Mohamed FL, Bujang MA, Lakshmanan S, Asaari AH. Value of Shock Index in prognosticating the short-term outcome of death for patients presenting with severe sepsis and septic shock in the emergency department. Med J Malaysia. 2012;67(4):406-411.
35. Siddiqui S, Chua M, Kumaresh V, Choo R. A comparison of pre ICU admission SIRS, EWS and q SOFA scores for predicting mortality and length of stay in ICU. J Crit Care. 2017;41:191-193. https://doi.org/10.1016/j.jcrc.2017.05.017
36. Costa RT, Nassar AP, Caruso P. Accuracy of SOFA, qSOFA, and SIRS scores for mortality in cancer patients admitted to an intensive care unit with suspected infection. J Crit Care. 2018;45:52-57. https://doi.org/10.1016/j.jcrc.2017.12.024
37. Mellhammar L, Linder A, Tverring J, et al. NEWS2 is Superior to qSOFA in detecting sepsis with organ dysfunction in the emergency department. J Clin Med. 2019;8(8):1128. https://doi.org/10.3390/jcm8081128
38. Szakmany T, Pugh R, Kopczynska M, et al. Defining sepsis on the wards: results of a multi-centre point-prevalence study comparing two sepsis definitions. Anaesthesia. 2018;73(2):195-204. https://doi.org/10.1111/anae.14062
39. Newman TB, Kohn MA. Evidence-Based Diagnosis: An Introduction to Clinical Epidemiology. Cambridge University Press; 2009.
40. Askim Å, Moser F, Gustad LT, et al. Poor performance of quick-SOFA (qSOFA) score in predicting severe sepsis and mortality - a prospective study of patients admitted with infection to the emergency department. Scand J Trauma Resusc Emerg Med. 2017;25(1):56. https://doi.org/10.1186/s13049-017-0399-4
For our study, we were particularly interested in exploring how the Shock Index would compare with more frequently used severity scores such as qSOFA and NEWS2 among patients with suspected sepsis, given the simplicity of its calculation and the easy availability of required data. In our cohort of 23,837 patients, only 159 people had missing blood pressure and only 71 had omitted heart rate. In contrast, both qSOFA and NEWS2 include an assessment of level of consciousness that can be subject to variability in assessment methods and EHR documentation across institutions.11 In our cohort, GCS within 30 minutes of ED presentation was missing in 72 patients, which could have led to incomplete calculation of qSOFA and NEWS2 if a missing value was not actually within normal limits.
Several investigations relate qSOFA to NEWS but few compare qSOFA with the newer NEWS2, and even fewer evaluate the Shock Index with any of these scores.10,11,18,29,35-37 In general, studies have shown that NEWS exhibits a higher AUROC for predicting mortality, sepsis with organ dysfunction, and ICU admission, often as a composite outcome.4,11,18,37,38 A handful of studies compare the Shock Index to SIRS; however, little has been done to compare the Shock Index to qSOFA or NEWS2, scores that have been used specifically for sepsis and might be more predictive of poor outcomes than SIRS.33 In our study, the Shock Index had a higher AUROC than either qSOFA or NEWS2 for predicting in-hospital mortality and ED-to-ICU admission measured as separate outcomes and as a composite outcome using standard cut-points for these scores.
When selecting a severity score to apply in an institution, it is important to carefully evaluate the score’s test characteristics, in addition to considering the availability of reliable data. Tests with high sensitivity and NPV for the population being studied can be useful to rule out disease or risk of poor outcome, while tests with high specificity and PPV can be useful to rule in disease or risk of poor outcome.39 When considering specificity, qSOFA’s performance was superior to the Shock Index and NEWS2 in our study, but a small percentage of the population was identified using a cut-point of qSOFA ≥2. If we used qSOFA and applied this standard cut-point at our institution, we could be confident that those identified were at increased risk, but we would miss a significant number of patients who would experience a poor outcome. When considering sensitivity, performance of NEWS2 was superior to qSOFA and the Shock Index in our study, but one-half of the population was identified using a cut-point of NEWS2 ≥5. If we were to apply this standard NEWS2 cut-point at our institution, we would assume that one-half of our population was at risk, which might drive resource use towards patients who will not experience a poor outcome. Although none of the scores exhibited a robust AUROC measure, the Shock Index had the highest AUROC for in-hospital mortality and ED-to-ICU admission when using the standard binary cut-point, and its sensitivity and specificity is between that of qSOFA and NEWS2, potentially making it a score to use in settings where qSOFA and NEWS2 score components, such as altered mentation, are not reliably collected. Finally, our sensitivity analysis varying the binary cut-point of each score within our population demonstrated that the standard cut-points might not be as useful within a specific population and might need to be tailored for implementation, balancing sensitivity, specificity, PPV, and NPV to meet local priorities and ICU capacity.
Our study has limitations. It is a single-center, retrospective analysis, factors that could reduce generalizability. However, it does include a large and diverse patient population spanning several years. Missing GCS data could have affected the predictive ability of qSOFA and NEWS2 in our cohort. We could not reliably perform imputation of GCS because of the high missingness and therefore we assumed missing was normal, as was done in the Sepsis-3 derivation studies.16 Previous studies have attempted to impute GCS and have not observed improved performance of qSOFA to predict mortality.40 Because manually collected variables such as GCS are less reliably documented in the EHR, there might be limitations in their use for triage risk scores.
Although the current analysis focused on the predictive performance of qSOFA, the Shock Index, and NEWS2 at triage, performance of these scores could affect the ED team’s treatment decisions before handoff to the hospitalist team and the expected level of care the patient will receive after in-patient admission. These tests also have the advantage of being easy to calculate at the bedside over time, which could provide an objective assessment of longitudinal predicted prognosis.
CONCLUSION
Local priorities should drive selection of a screening tool, balancing sensitivity, specificity, PPV, and NPV to achieve the institution’s goals. qSOFA, Shock Index, and NEWS2 are risk stratification tools that can be easily implemented at ED triage using data available at the bedside. Although none of these scores performed strongly when comparing AUROCs, qSOFA was highly specific for identifying patients with poor outcomes, and NEWS2 was the most sensitive for ruling out those at high risk among patients with suspected sepsis. The Shock Index exhibited a sensitivity and specificity that fell between qSOFA and NEWS2 and also might be considered to identify those at increased risk, given its ease of implementation, particularly in settings where altered mentation is unreliably or inconsistently documented.
Acknowledgment
The authors thank the UCSF Division of Hospital Medicine Data Core for their assistance with data acquisition.
Sepsis is the leading cause of in-hospital mortality in the United States.1 Sepsis is present on admission in 85% of cases, and each hour delay in antibiotic treatment is associated with 4% to 7% increased odds of mortality.2,3 Prompt identification and treatment of sepsis is essential for reducing morbidity and mortality, but identifying sepsis during triage is challenging.2
Risk stratification scores that rely solely on data readily available at the bedside have been developed to quickly identify those at greatest risk of poor outcomes from sepsis in real time. The quick Sequential Organ Failure Assessment (qSOFA) score, the National Early Warning Score 2 (NEWS2), and the Shock Index are easy-to-calculate measures that use routinely collected clinical data that are not subject to laboratory delay. These scores can be incorporated into electronic health record (EHR)-based alerts and can be calculated longitudinally to track the risk of poor outcomes over time. qSOFA was developed to quantify patient risk at bedside in non-intensive care unit (ICU) settings, but there is no consensus about its ability to predict adverse outcomes such as mortality and ICU admission.4-6 The United Kingdom’s National Health Service uses NEWS2 to identify patients at risk for sepsis.7 NEWS has been shown to have similar or better sensitivity in identifying poorer outcomes in sepsis patients compared with systemic inflammatory response syndrome (SIRS) criteria and qSOFA.4,8-11 However, since the latest update of NEWS2 in 2017, there has been little study of its predictive ability. The Shock Index is a simple bedside score (heart rate divided by systolic blood pressure) that was developed to detect changes in cardiovascular performance before systemic shock onset. Although it was not developed for infection and has not been regularly applied in the sepsis literature, the Shock Index might be useful for identifying patients at increased risk of poor outcomes. Patients with higher and sustained Shock Index scores are more likely to experience morbidity, such as hyperlactatemia, vasopressor use, and organ failure, and also have an increased risk of mortality.12-14
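To make the three bedside scores concrete, the sketch below computes the Shock Index from heart rate and systolic blood pressure and flags score positivity at triage. This is an illustrative sketch, not the study’s code: the qSOFA ≥2 and NEWS2 ≥5 thresholds are the standard cut-points referenced later in this article, while the Shock Index threshold of 0.7 is an assumption drawn from the wider literature rather than a value stated here.

```python
def shock_index(heart_rate, systolic_bp):
    """Shock Index = heart rate divided by systolic blood pressure."""
    return heart_rate / systolic_bp

def triage_flags(heart_rate, systolic_bp, qsofa_points, news2_points):
    """Flag positivity at ED triage for the three bedside scores."""
    return {
        "shock_index_positive": shock_index(heart_rate, systolic_bp) >= 0.7,  # assumed threshold
        "qsofa_positive": qsofa_points >= 2,   # standard qSOFA cut-point
        "news2_positive": news2_points >= 5,   # standard NEWS2 cut-point
    }

# Example: HR 110, SBP 100 -> Shock Index 1.1 (positive)
print(triage_flags(heart_rate=110, systolic_bp=100, qsofa_points=1, news2_points=6))
```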
Although the predictive abilities of these bedside risk stratification scores have been assessed individually using standard binary cut-points, the comparative performance of qSOFA, the Shock Index, and NEWS2 has not been evaluated in patients presenting to an emergency department (ED) with suspected sepsis.
METHODS
Design and Setting
We conducted a retrospective cohort study of ED patients who presented with suspected sepsis to the University of California San Francisco (UCSF) Helen Diller Medical Center at Parnassus Heights between June 1, 2012, and December 31, 2018. Our institution is a 785-bed academic teaching hospital with approximately 30,000 ED encounters per year. The study was approved with a waiver of informed consent by the UCSF Human Research Protection Program.
Participants
We use an Epic-based EHR platform (Epic 2017, Epic Systems Corporation) for clinical care, which was implemented on June 1, 2012. All data elements were obtained from Clarity, the relational database that stores Epic’s inpatient data. The study included encounters for patients age ≥18 years who had blood cultures ordered within 24 hours of ED presentation and administration of intravenous antibiotics within 24 hours. Repeat encounters were treated independently in our analysis.
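As a rough illustration of this inclusion logic, the snippet below filters a hypothetical encounter table; the column names are made up for this sketch and do not reflect the Clarity schema.

```python
import pandas as pd

def select_suspected_sepsis(encounters: pd.DataFrame) -> pd.DataFrame:
    """Keep adults with blood cultures ordered and IV antibiotics given
    within 24 hours of ED presentation (hypothetical column names)."""
    culture_delay = encounters["blood_culture_order_time"] - encounters["ed_presentation_time"]
    abx_delay = encounters["iv_antibiotic_time"] - encounters["ed_presentation_time"]
    within_24h = pd.Timedelta(hours=24)
    mask = (
        (encounters["age"] >= 18)
        & culture_delay.between(pd.Timedelta(0), within_24h)
        & abx_delay.between(pd.Timedelta(0), within_24h)
    )
    return encounters[mask]
```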
Outcomes and Measures
We compared the ability of qSOFA, the Shock Index, and NEWS2 to predict in-hospital mortality and admission to the ICU from the ED (ED-to-ICU admission). We used the standard binary cut-points for each score to classify patients as score-positive at ED triage.
We compared demographic and clinical characteristics of patients who were positive for qSOFA, the Shock Index, and NEWS2. Demographic data were extracted from the EHR and included primary language, age, sex, and insurance status. All International Classification of Diseases (ICD)-9/10 diagnosis codes were pulled from Clarity billing tables. We used the Elixhauser comorbidity groupings19 of ICD-9/10 codes present on admission to identify preexisting comorbidities and underlying organ dysfunction. To estimate burden of comorbid illnesses, we calculated the validated van Walraven comorbidity index,20 which provides an estimated risk of in-hospital death based on documented Elixhauser comorbidities. Admission level of care (acute, stepdown, or intensive care) was collected for inpatient admissions to assess initial illness severity.21 We also evaluated discharge disposition and in-hospital mortality. Index blood culture results were collected, and dates and timestamps of mechanical ventilation, fluid, vasopressor, and antibiotic administration were obtained for the duration of the encounter.
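The comorbidity-burden calculation amounts to a weighted sum over Elixhauser comorbidity flags present on admission. The sketch below shows the general shape of such a score; the weights are placeholders for illustration only and are not the published van Walraven weights.

```python
# Placeholder weights for illustration only; NOT the published van Walraven weights.
ILLUSTRATIVE_WEIGHTS = {
    "congestive_heart_failure": 7,
    "renal_failure": 5,
    "liver_disease": 11,
    "metastatic_cancer": 12,
}

def comorbidity_score(flags, weights=ILLUSTRATIVE_WEIGHTS):
    """Weighted sum of comorbidities documented as present on admission."""
    return sum(w for name, w in weights.items() if flags.get(name, False))

print(comorbidity_score({"renal_failure": True, "liver_disease": True}))  # 16 with these toy weights
```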
UCSF uses an automated, real-time, algorithm-based severe sepsis alert that is triggered when a patient meets ≥2 SIRS criteria and again when the patient meets severe sepsis or septic shock criteria (ie, ≥2 SIRS criteria in addition to end-organ dysfunction and/or fluid nonresponsive hypotension). This sepsis screening alert was in use for the duration of our study.22
Statistical Analysis
We performed a subgroup analysis among those who were diagnosed with sepsis, according to the 2016 Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) criteria.
All statistical analyses were conducted using Stata 14 (StataCorp). We summarized differences in demographic and clinical characteristics among the populations meeting each severity score but elected not to conduct hypothesis testing because patients could be positive for one or more scores. We calculated sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for each score to predict in-hospital mortality and ED-to-ICU admission. To allow comparison with other studies, we also created a composite outcome of either in-hospital mortality or ED-to-ICU admission.
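For readers who want to reproduce these test characteristics on their own data, the sketch below computes sensitivity, specificity, PPV, and NPV from paired arrays of score positivity and observed outcomes; it is a generic illustration, not the study’s Stata code.

```python
def test_characteristics(score_positive, outcome):
    """Compute test characteristics from binary score flags and binary outcomes."""
    tp = sum(1 for s, o in zip(score_positive, outcome) if s and o)
    fp = sum(1 for s, o in zip(score_positive, outcome) if s and not o)
    fn = sum(1 for s, o in zip(score_positive, outcome) if not s and o)
    tn = sum(1 for s, o in zip(score_positive, outcome) if not s and not o)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy input, not study data; each characteristic is about 0.67 here.
print(test_characteristics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```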
RESULTS
Within our sample, 23,837 ED patients had blood cultures ordered within 24 hours of ED presentation and were considered to have suspected sepsis. The mean age of the cohort was 60.8 years, and 1,612 (6.8%) had positive blood cultures. A total of 12,928 patients (54.2%) were found to have sepsis. We documented 1,427 in-hospital deaths (6.0%) and 3,149 (13.2%) ED-to-ICU admissions. At ED triage, 1,921 (8.1%) were qSOFA-positive, 4,273 (17.9%) were Shock Index-positive, and 11,832 (49.6%) were NEWS2-positive. At ED triage, blood pressure, heart rate, respiratory rate, and oxygen saturation were documented in >99% of patients, 93.5% had temperature documented, and 28.5% had a Glasgow Coma Scale (GCS) score recorded. If the window of assessment was widened to 1 hour, GCS was documented in only 44.2% of those with suspected sepsis.
Demographic Characteristics and Clinical Course
qSOFA-positive patients received antibiotics more quickly than those who were Shock Index-positive or NEWS2-positive (median 1.5, 1.8, and 2.8 hours after admission, respectively). In addition, those who were qSOFA-positive were more likely to have a positive blood culture (10.9%, 9.4%, and 8.5%, respectively) and to receive an EHR-based diagnosis of sepsis (77.0%, 69.6%, and 60.9%, respectively) than those who were Shock Index- or NEWS2-positive. Those who were qSOFA-positive also were more likely to be mechanically ventilated during their hospital stay (25.4%, 19.2%, and 10.8%, respectively) and to receive vasopressors (33.5%, 22.5%, and 12.2%, respectively). In-hospital mortality also was more common among those who were qSOFA-positive at triage (23.4%, 15.3%, and 9.2%, respectively).
Because both qSOFA and NEWS2 incorporate GCS, we explored baseline characteristics of patients with GCS documented at triage (n = 6,794). Compared with patients without a documented GCS, these patients were older (median age 63 vs 61 years, P < .0001), more likely to be male (54.9% vs 53.4%, P = .0031), more likely to have renal failure (22.8% vs 20.1%, P < .0001), more likely to have liver disease (14.2% vs 12.8%, P = .006), had a higher van Walraven comorbidity score on presentation (median 10 vs 8, P < .0001), and were more likely to go directly to the ICU from the ED (20.2% vs 10.6%, P < .0001). However, among the 6,397 GCS scores documented at triage, only 1,579 (24.7%) were abnormal.
Test Characteristics of qSOFA, Shock Index, and NEWS2 for Predicting In-hospital Mortality and ED-to-ICU Admission
Among 23,837 patients with suspected sepsis, NEWS2 had the highest sensitivity for predicting in-hospital mortality (76.0%; 95% CI, 73.7%-78.2%) and ED-to-ICU admission (78.9%; 95% CI, 77.5%-80.4%) but had the lowest specificity for in-hospital mortality (52.0%; 95% CI, 51.4%-52.7%) and for ED-to-ICU admission (54.8%; 95% CI, 54.1%-55.5%) (Table 3). qSOFA had the lowest sensitivity for in-hospital mortality (31.5%; 95% CI, 29.1%-33.9%) and ED-to-ICU admission (29.3%; 95% CI, 27.7%-30.9%) but the highest specificity for in-hospital mortality (93.4%; 95% CI, 93.1%-93.8%) and ED-to-ICU admission (95.2%; 95% CI, 94.9%-95.5%). The Shock Index had a sensitivity that fell between qSOFA and NEWS2 for in-hospital mortality (45.8%; 95% CI, 43.2%-48.5%) and ED-to-ICU admission (49.2%; 95% CI, 47.5%-51.0%). The specificity of the Shock Index also was between qSOFA and NEWS2 for in-hospital mortality (83.9%; 95% CI, 83.4%-84.3%) and ED-to-ICU admission (86.8%; 95% CI, 86.4%-87.3%). All three scores exhibited relatively low PPV, ranging from 9.2% to 23.4% for in-hospital mortality and 21.0% to 48.0% for ED-to-ICU admission. Conversely, all three scores exhibited relatively high NPV, ranging from 95.5% to 97.1% for in-hospital mortality and 89.8% to 94.5% for ED-to-ICU admission.
When considering a binary cutoff, the Shock Index exhibited the highest AUROC for in-hospital mortality (0.648; 95% CI, 0.635-0.662) and had a significantly higher AUROC than qSOFA (AUROC, 0.625; 95% CI, 0.612-0.637; P = .0005), but there was no difference compared with NEWS2 (AUROC, 0.640; 95% CI, 0.628-0.652; P = .2112). NEWS2 had a significantly higher AUROC than qSOFA for predicting in-hospital mortality (P = .0227). The Shock Index also exhibited the highest AUROC for ED-to-ICU admission (0.680; 95% CI, 0.617-0.689), which was significantly higher than the AUROC for qSOFA (P < .0001) and NEWS2 (P = .0151). NEWS2 had a significantly higher AUROC than qSOFA for predicting ED-to-ICU admission (P < .0001). Similar findings were seen in patients found to have sepsis.
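As a quick arithmetic check, when a score is dichotomized at a single cut-point its ROC curve has only one interior point, so the AUROC reduces to the average of sensitivity and specificity. Applying that identity to the in-hospital mortality values reported above reproduces the AUROCs in this paragraph (assuming, as the phrase “binary cutoff” suggests, that these AUROCs were computed on the dichotomized scores):

```python
# AUROC of a binarized test = (sensitivity + specificity) / 2.
# Values below are the in-hospital mortality estimates reported in this article.
for name, sens, spec in [("qSOFA", 0.315, 0.934),
                         ("Shock Index", 0.458, 0.839),
                         ("NEWS2", 0.760, 0.520)]:
    print(name, (sens + spec) / 2)  # close to the reported 0.625, 0.648, and 0.640
```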
DISCUSSION
In this retrospective cohort study of 23,837 patients who presented to the ED with suspected sepsis, the standard qSOFA threshold was met least frequently, followed by the Shock Index and NEWS2. NEWS2 had the highest sensitivity but the lowest specificity for predicting in-hospital mortality and ED-to-ICU admission, making it a challenging bedside risk stratification scale for identifying patients at risk of poor clinical outcomes. When comparing predictive performance among the three scales, qSOFA had the highest specificity and the Shock Index had the highest AUROC for in-hospital mortality and ED-to-ICU admission in this cohort of patients with suspected sepsis. These trends in sensitivity, specificity, and AUROC were consistent among those who met EHR criteria for a sepsis diagnosis. In the analysis of the three scoring systems using all available cut-points, qSOFA and NEWS2 had the highest AUROCs, followed by the Shock Index.
Considering the rapid progression from organ dysfunction to death in sepsis patients, as well as the difficulty establishing a sepsis diagnosis at triage,23 providers must quickly identify patients at increased risk of poor outcomes when they present to the ED. Sepsis alerts often are built using SIRS criteria,27 including the one used for sepsis surveillance at UCSF since 2012,22 but the white blood cell count criterion is subject to a laboratory lag and could lead to a delay in identification. Implementation of a point-of-care bedside score alert that uses readily available clinical data could allow providers to identify patients at greatest risk of poor outcomes immediately at ED presentation and triage, which motivated us to explore the predictive performance of qSOFA, the Shock Index, and NEWS2.
Our study is the first to provide a head-to-head comparison of the predictive performance of qSOFA, the Shock Index, and NEWS2, three easy-to-calculate bedside risk scores that use EHR data collected among patients with suspected sepsis. The Sepsis-3 guidelines recommend qSOFA to quickly identify non-ICU patients at greatest risk of poor outcomes because the measure exhibited predictive performance similar to the more extensive SOFA score outside the ICU.16,23 Some studies have confirmed qSOFA’s high predictive performance,28-31 whereas our test characteristics and AUROC findings are in line with other published analyses.4,6,10,17 The UK National Health Service is using NEWS2 to screen for patients at risk of poor outcomes from sepsis. Several analyses that assessed the predictive ability of NEWS have reported estimates in line with our findings.4,10,32 The Shock Index was introduced in 1967 as a metric to evaluate hemodynamic stability based on heart rate and systolic blood pressure.33 The Shock Index has been studied in several contexts, including sepsis,34 and studies show that a sustained Shock Index is associated with increased odds of vasopressor administration, higher prevalence of hyperlactatemia, and increased risk of poor outcomes in the ICU.13,14
For our study, we were particularly interested in exploring how the Shock Index would compare with more frequently used severity scores such as qSOFA and NEWS2 among patients with suspected sepsis, given the simplicity of its calculation and the easy availability of the required data. In our cohort of 23,837 patients, only 159 were missing a blood pressure and only 71 were missing a heart rate. In contrast, both qSOFA and NEWS2 include an assessment of level of consciousness that can be subject to variability in assessment methods and EHR documentation across institutions.11 In our cohort, GCS within 30 minutes of ED presentation was missing in 72% of patients, which could have led to incomplete calculation of qSOFA and NEWS2 if a missing value was not actually within normal limits.
Several investigations relate qSOFA to NEWS but few compare qSOFA with the newer NEWS2, and even fewer evaluate the Shock Index with any of these scores.10,11,18,29,35-37 In general, studies have shown that NEWS exhibits a higher AUROC for predicting mortality, sepsis with organ dysfunction, and ICU admission, often as a composite outcome.4,11,18,37,38 A handful of studies compare the Shock Index to SIRS; however, little has been done to compare the Shock Index to qSOFA or NEWS2, scores that have been used specifically for sepsis and might be more predictive of poor outcomes than SIRS.33 In our study, the Shock Index had a higher AUROC than either qSOFA or NEWS2 for predicting in-hospital mortality and ED-to-ICU admission measured as separate outcomes and as a composite outcome using standard cut-points for these scores.
When selecting a severity score to apply in an institution, it is important to carefully evaluate the score’s test characteristics, in addition to considering the availability of reliable data. Tests with high sensitivity and NPV for the population being studied can be useful to rule out disease or risk of poor outcome, while tests with high specificity and PPV can be useful to rule in disease or risk of poor outcome.39 When considering specificity, qSOFA’s performance was superior to the Shock Index and NEWS2 in our study, but a small percentage of the population was identified using a cut-point of qSOFA ≥2. If we used qSOFA and applied this standard cut-point at our institution, we could be confident that those identified were at increased risk, but we would miss a significant number of patients who would experience a poor outcome. When considering sensitivity, performance of NEWS2 was superior to qSOFA and the Shock Index in our study, but one-half of the population was identified using a cut-point of NEWS2 ≥5. If we were to apply this standard NEWS2 cut-point at our institution, we would assume that one-half of our population was at risk, which might drive resource use toward patients who will not experience a poor outcome. Although none of the scores exhibited a robust AUROC measure, the Shock Index had the highest AUROC for in-hospital mortality and ED-to-ICU admission when using the standard binary cut-point, and its sensitivity and specificity are between those of qSOFA and NEWS2, potentially making it a score to use in settings where qSOFA and NEWS2 score components, such as altered mentation, are not reliably collected. Finally, our sensitivity analysis varying the binary cut-point of each score within our population demonstrated that the standard cut-points might not be as useful within a specific population and might need to be tailored for implementation, balancing sensitivity, specificity, PPV, and NPV to meet local priorities and ICU capacity.
Our study has limitations. It is a single-center, retrospective analysis, both factors that could reduce generalizability. However, it does include a large and diverse patient population spanning several years. Missing GCS data could have affected the predictive ability of qSOFA and NEWS2 in our cohort. We could not reliably perform imputation of GCS because of the high rate of missingness, and we therefore assumed missing values were normal, as was done in the Sepsis-3 derivation studies.16 Previous studies that have attempted to impute GCS have not observed improved performance of qSOFA in predicting mortality.40 Because manually collected variables such as GCS are less reliably documented in the EHR, there might be limitations in their use for triage risk scores.
Although the current analysis focused on the predictive performance of qSOFA, the Shock Index, and NEWS2 at triage, performance of these scores could affect the ED team’s treatment decisions before handoff to the hospitalist team and the expected level of care the patient will receive after in-patient admission. These tests also have the advantage of being easy to calculate at the bedside over time, which could provide an objective assessment of longitudinal predicted prognosis.
CONCLUSION
Local priorities should drive selection of a screening tool, balancing sensitivity, specificity, PPV, and NPV to achieve the institution’s goals. qSOFA, Shock Index, and NEWS2 are risk stratification tools that can be easily implemented at ED triage using data available at the bedside. Although none of these scores performed strongly when comparing AUROCs, qSOFA was highly specific for identifying patients with poor outcomes, and NEWS2 was the most sensitive for ruling out those at high risk among patients with suspected sepsis. The Shock Index exhibited a sensitivity and specificity that fell between qSOFA and NEWS2 and also might be considered to identify those at increased risk, given its ease of implementation, particularly in settings where altered mentation is unreliably or inconsistently documented.
Acknowledgment
The authors thank the UCSF Division of Hospital Medicine Data Core for their assistance with data acquisition.
1. Jones SL, Ashton CM, Kiehne LB, et al. Outcomes and resource use of sepsis-associated stays by presence on admission, severity, and hospital type. Med Care. 2016;54(3):303-310. https://doi.org/10.1097/MLR.0000000000000481
2. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. https://doi.org/10.1056/NEJMoa1703058
3. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. https://doi.org/10.1097/01.CCM.0000217961.75225.E9
4. Churpek MM, Snyder A, Sokol S, Pettit NN, Edelson DP. Investigating the impact of different suspicion of infection criteria on the accuracy of Quick Sepsis-Related Organ Failure Assessment, Systemic Inflammatory Response Syndrome, and Early Warning Scores. Crit Care Med. 2017;45(11):1805-1812. https://doi.org/10.1097/CCM.0000000000002648
5. Abdullah SMOB, Sørensen RH, Dessau RBC, Sattar SMRU, Wiese L, Nielsen FE. Prognostic accuracy of qSOFA in predicting 28-day mortality among infected patients in an emergency department: a prospective validation study. Emerg Med J. 2019;36(12):722-728. https://doi.org/10.1136/emermed-2019-208456
6. Kim KS, Suh GJ, Kim K, et al. Quick Sepsis-related Organ Failure Assessment score is not sensitive enough to predict 28-day mortality in emergency department patients with sepsis: a retrospective review. Clin Exp Emerg Med. 2019;6(1):77-83. https://doi.org/10.15441/ceem.17.294
7. National Early Warning Score (NEWS) 2: Standardising the assessment of acute-illness severity in the NHS. Royal College of Physicians; 2017.
8. Brink A, Alsma J, Verdonschot RJCG, et al. Predicting mortality in patients with suspected sepsis at the emergency department: a retrospective cohort study comparing qSOFA, SIRS and National Early Warning Score. PLoS One. 2019;14(1):e0211133. https://doi.org/10.1371/journal.pone.0211133
9. Redfern OC, Smith GB, Prytherch DR, Meredith P, Inada-Kim M, Schmidt PE. A comparison of the Quick Sequential (Sepsis-Related) Organ Failure Assessment Score and the National Early Warning Score in non-ICU patients with/without infection. Crit Care Med. 2018;46(12):1923-1933. https://doi.org/10.1097/CCM.0000000000003359
10. Churpek MM, Snyder A, Han X, et al. Quick Sepsis-related Organ Failure Assessment, Systemic Inflammatory Response Syndrome, and Early Warning Scores for detecting clinical deterioration in infected patients outside the intensive care unit. Am J Respir Crit Care Med. 2017;195(7):906-911. https://doi.org/10.1164/rccm.201604-0854OC
11. Goulden R, Hoyle MC, Monis J, et al. qSOFA, SIRS and NEWS for predicting inhospital mortality and ICU admission in emergency admissions treated as sepsis. Emerg Med J. 2018;35(6):345-349. https://doi.org/10.1136/emermed-2017-207120
12. Biney I, Shepherd A, Thomas J, Mehari A. Shock Index and outcomes in patients admitted to the ICU with sepsis. Chest. 2015;148(suppl 4):337A. https://doi.org/10.1378/chest.2281151
13. Wira CR, Francis MW, Bhat S, Ehrman R, Conner D, Siegel M. The shock index as a predictor of vasopressor use in emergency department patients with severe sepsis. West J Emerg Med. 2014;15(1):60-66. https://doi.org/10.5811/westjem.2013.7.18472
14. Berger T, Green J, Horeczko T, et al. Shock index and early recognition of sepsis in the emergency department: pilot study. West J Emerg Med. 2013;14(2):168-174. https://doi.org/10.5811/westjem.2012.8.11546
15. Middleton DJ, Smith TO, Bedford R, Neilly M, Myint PK. Shock Index predicts outcome in patients with suspected sepsis or community-acquired pneumonia: a systematic review. J Clin Med. 2019;8(8):1144. https://doi.org/10.3390/jcm8081144
16. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA. 2016;315(8):762-774. https://doi.org/10.1001/jama.2016.0288
17. Abdullah S, Sørensen RH, Dessau RBC, Sattar S, Wiese L, Nielsen FE. Prognostic accuracy of qSOFA in predicting 28-day mortality among infected patients in an emergency department: a prospective validation study. Emerg Med J. 2019;36(12):722-728. https://doi.org/10.1136/emermed-2019-208456
18. Usman OA, Usman AA, Ward MA. Comparison of SIRS, qSOFA, and NEWS for the early identification of sepsis in the Emergency Department. Am J Emerg Med. 2018;37(8):1490-1497. https://doi.org/10.1016/j.ajem.2018.10.058
19. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27. https://doi.org/10.1097/00005650-199801000-00004
20. van Walraven C, Austin PC, Jennings A, Quan H, Forster AJ. A modification of the Elixhauser comorbidity measures into a point system for hospital death using administrative data. Med Care. 2009;47(6):626-633. https://doi.org/10.1097/MLR.0b013e31819432e5
21. Prin M, Wunsch H. The role of stepdown beds in hospital care. Am J Respir Crit Care Med. 2014;190(11):1210-1216. https://doi.org/10.1164/rccm.201406-1117PP
22. Narayanan N, Gross AK, Pintens M, Fee C, MacDougall C. Effect of an electronic medical record alert for severe sepsis among ED patients. Am J Emerg Med. 2016;34(2):185-188. https://doi.org/10.1016/j.ajem.2015.10.005
23. Singer M, Deutschman CS, Seymour CW, et al. The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA. 2016;315(8):801-810. https://doi.org/10.1001/jama.2016.0287
24. Rhee C, Dantes R, Epstein L, et al. Incidence and trends of sepsis in US hospitals using clinical vs claims data, 2009-2014. JAMA. 2017;318(13):1241-1249. https://doi.org/10.1001/jama.2017.13836
25. Safari S, Baratloo A, Elfil M, Negida A. Evidence based emergency medicine; part 5 receiver operating curve and area under the curve. Emerg (Tehran). 2016;4(2):111-113.
26. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837-845.
27. Kangas C, Iverson L, Pierce D. Sepsis screening: combining Early Warning Scores and SIRS Criteria. Clin Nurs Res. 2021;30(1):42-49. https://doi.org/10.1177/1054773818823334
28. Freund Y, Lemachatti N, Krastinova E, et al. Prognostic accuracy of Sepsis-3 Criteria for in-hospital mortality among patients with suspected infection presenting to the emergency department. JAMA. 2017;317(3):301-308. https://doi.org/10.1001/jama.2016.20329
29. Finkelsztein EJ, Jones DS, Ma KC, et al. Comparison of qSOFA and SIRS for predicting adverse outcomes of patients with suspicion of sepsis outside the intensive care unit. Crit Care. 2017;21(1):73. https://doi.org/10.1186/s13054-017-1658-5
30. Canet E, Taylor DM, Khor R, Krishnan V, Bellomo R. qSOFA as predictor of mortality and prolonged ICU admission in Emergency Department patients with suspected infection. J Crit Care. 2018;48:118-123. https://doi.org/10.1016/j.jcrc.2018.08.022
31. Anand V, Zhang Z, Kadri SS, Klompas M, Rhee C; CDC Prevention Epicenters Program. Epidemiology of Quick Sequential Organ Failure Assessment criteria in undifferentiated patients and association with suspected infection and sepsis. Chest. 2019;156(2):289-297. https://doi.org/10.1016/j.chest.2019.03.032
32. Hamilton F, Arnold D, Baird A, Albur M, Whiting P. Early Warning Scores do not accurately predict mortality in sepsis: A meta-analysis and systematic review of the literature. J Infect. 2018;76(3):241-248. https://doi.org/10.1016/j.jinf.2018.01.002
33. Koch E, Lovett S, Nghiem T, Riggs RA, Rech MA. Shock Index in the emergency department: utility and limitations. Open Access Emerg Med. 2019;11:179-199. https://doi.org/10.2147/OAEM.S178358
34. Yussof SJ, Zakaria MI, Mohamed FL, Bujang MA, Lakshmanan S, Asaari AH. Value of Shock Index in prognosticating the short-term outcome of death for patients presenting with severe sepsis and septic shock in the emergency department. Med J Malaysia. 2012;67(4):406-411.
35. Siddiqui S, Chua M, Kumaresh V, Choo R. A comparison of pre ICU admission SIRS, EWS and q SOFA scores for predicting mortality and length of stay in ICU. J Crit Care. 2017;41:191-193. https://doi.org/10.1016/j.jcrc.2017.05.017
36. Costa RT, Nassar AP, Caruso P. Accuracy of SOFA, qSOFA, and SIRS scores for mortality in cancer patients admitted to an intensive care unit with suspected infection. J Crit Care. 2018;45:52-57. https://doi.org/10.1016/j.jcrc.2017.12.024
37. Mellhammar L, Linder A, Tverring J, et al. NEWS2 is Superior to qSOFA in detecting sepsis with organ dysfunction in the emergency department. J Clin Med. 2019;8(8):1128. https://doi.org/10.3390/jcm8081128
38. Szakmany T, Pugh R, Kopczynska M, et al. Defining sepsis on the wards: results of a multi-centre point-prevalence study comparing two sepsis definitions. Anaesthesia. 2018;73(2):195-204. https://doi.org/10.1111/anae.14062
39. Newman TB, Kohn MA. Evidence-Based Diagnosis: An Introduction to Clinical Epidemiology. Cambridge University Press; 2009.
40. Askim Å, Moser F, Gustad LT, et al. Poor performance of quick-SOFA (qSOFA) score in predicting severe sepsis and mortality - a prospective study of patients admitted with infection to the emergency department. Scand J Trauma Resusc Emerg Med. 2017;25(1):56. https://doi.org/10.1186/s13049-017-0399-4
Home Modifications for Rural Veterans With Disabilities
The US Department of Veterans Affairs (VA) created the Home Improvements and Structural Alterations (HISA) program to provide veterans with disabilities (VWDs) necessary home modifications (HMs) that facilitate the provision of medical services at home and improve home accessibility and functional independence. The Veterans Health Administration (VHA) has more than 9 million veteran enrollees; of those, 2.7 million are classified as rural or highly rural.1 Rural veterans (RVs) have a higher rate of disability than urban veterans.2-5 RVs also have unequal access to screening for ambulatory care-sensitive conditions (eg, hypertension, diabetes mellitus).6 Furthermore, RVs are at risk of poor medical outcomes because of their distance from health care facilities and specialist care, which can be a barrier to emergency care when issues arise. These barriers, among others, are associated with compromised health-related quality of life and health outcomes for RVs.3,6 The HISA program may be key to decreasing falls and other serious mishaps in the home. Therefore, understanding use of the HISA program by RVs is important. However, to date little information has been available regarding use of HISA benefits by RVs or the characteristics of RVs who receive HISA benefits.
HISA Alterations Program
HISA was initially developed by the VA to improve veterans’ transition from acute medical care to home.7,8 Currently, however, obtaining a HISA grant involves an application process that averages 3 to 6 months.7 Through the HISA program, VWDs can be prescribed HMs including (but not limited to) flooring replacement, permanent ramps, roll-in showers, installation of central air-conditioning systems, improved lighting, kitchen/bathroom modifications, and home inspections. The HMs prescribed depend on an assessment of medical need by health care providers (HCPs).8
As the veteran population has aged, the program has come to focus primarily on ensuring entry into essential areas of the home and safety in the home.5 The amount of a HISA payment is based on whether a veteran’s health condition is related to military service, as defined by the VHA service connection medical evaluation process. Barriers to obtaining a HISA HM can include difficulty navigating the evaluation process and difficulty finding a qualified contractor or builder to do the HM.7
This article aims to: (1) detail the sociodemographic and clinical characteristics of rural HISA users (RHUs); (2) report on HISA usage patterns in the number, types, and cost of HMs; (3) compare use across VA medical centers (VAMCs), their complexity levels, and Veterans Integrated Service Networks (VISNs); and (4) examine the relationship between travel time/distance and HISA utilization. The long-term goal is to provide accurate information to researchers, HM administrators, HCPs, and policy makers on HISA program utilization by rural VWDs, which may help improve its use and bring awareness of its users. This study was approved by the affiliate University of Florida Institutional Review Board and the VA research and development committee at the North Florida/South Georgia Veterans Health System.
Methods
Data were obtained from 3 VA sources: the National Prosthetics Patient Database (NPPD), the VHA Medical Inpatient Dataset, and the VHA Outpatient Dataset.7 The NPPD is a national administrative database that contains information on prosthetic-associated products ordered by HCPs for patients, such as portable ramps, handrails, home oxygen equipment, and orthotic and prosthetic apparatus. Data obtained from the NPPD included cost of HMs, clinical characteristics, VISN, and VAMC. VA facilities are categorized into complexity levels 1a, 1b, 1c, 2, and 3. Complexity level 1a to 1c VAMCs manage cases of greater medical complexity, meaning a larger number of patients who present with concerns requiring medical specialists. Complexity level 2 and 3 facilities have fewer resources, lower patient volumes, and less medically complex patients. Finally, the VHA Medical Inpatient and Outpatient Datasets, administered by the VA Informatics and Computing Infrastructure, consist of in-depth national health services data on inpatient and outpatient encounters and procedures.
The study cohort was divided into those with service-connected conditions (Class 1) and those with conditions not related to military service (Class 2). Veterans identified in both classes were assigned to Class 1. The cost variable was determined by the veteran’s classification: Class 1 veterans receive a lifetime limit of $6800, and Class 2 veterans receive a lifetime limit of $2000; however, a Class 2 veteran with a ≥ 50% disability rating is eligible for a HISA lifetime limit of $6800. Whenever a value exceeded the allowed limit of $6800 or $2000, due to data entry error or other reasons, the study team reassigned the cost value to the maximum allowed value.
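A minimal sketch of the lifetime-limit and cost-cleaning rules described above, assuming hypothetical function names; this illustrates the logic only and is not the study team’s code.

```python
def hisa_lifetime_limit(service_connected: bool, disability_rating_pct: int) -> int:
    """Class 1 (service-connected) -> $6,800 lifetime limit; Class 2 -> $2,000,
    unless the disability rating is >= 50%, which also qualifies for $6,800."""
    if service_connected or disability_rating_pct >= 50:
        return 6800
    return 2000

def clean_cost(recorded_cost: float, limit: int) -> float:
    """Reassign out-of-range values (eg, data-entry errors) to the maximum allowed value."""
    return min(recorded_cost, limit)

print(clean_cost(7200, hisa_lifetime_limit(service_connected=True, disability_rating_pct=0)))  # 6800
```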
Travel distance and time were derived by loading patient zip codes and HISA facility locations into a geographic information system program and using the nearest-facility and find-route tools. These tools used a road network that simulates real-world driving conditions to calculate distance.
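Conceptually, the nearest-facility and find-route steps reduce to shortest-path queries over a road network. The toy sketch below illustrates that idea with the networkx library on a made-up graph; the study itself used ArcGIS tools on a real road network, so node names and distances here are purely illustrative.

```python
import networkx as nx

# Toy road network; edge weights are distances in miles (made up for illustration).
road_network = nx.Graph()
road_network.add_weighted_edges_from([
    ("veteran_home", "junction_a", 12.0),
    ("junction_a", "vamc_1", 30.0),
    ("junction_a", "junction_b", 18.0),
    ("junction_b", "vamc_2", 55.0),
])

def nearest_facility(graph, home, facilities):
    """Return the facility with the shortest road distance from the home node."""
    distances = {f: nx.shortest_path_length(graph, home, f, weight="weight")
                 for f in facilities}
    best = min(distances, key=distances.get)
    return best, distances[best]

print(nearest_facility(road_network, "veteran_home", ["vamc_1", "vamc_2"]))  # ('vamc_1', 42.0)
```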
Study Variables
VWDs of any age, gender, and race/ethnicity who qualified for HISA and received HMs from fiscal year (FY) 2015 through FY 2018 were identified (N = 30,823). Most VWDs were nonrural (n = 19,970), and 43 had no Federal Information Processing System data. The final study cohort consisted of 10,810 rural HISA recipients. The NPPD, inpatient, and outpatient data were merged by scrambled social security numbers to retrieve the following data: age, gender, race, ethnicity, marital status, Class (1 or 2), mean and total number of inpatient days, and type of HMs prescribed.
We also recorded rurality using the VA Rural-Urban Commuting Areas (RUCA) system, combining the rural and highly rural designations.1 Census tracts with a RUCA score of 10.0 are deemed highly rural; tracts with a score of 1.0 or 1.1 are considered urban, and the remainder are considered rural. Travel time and distance from a veteran’s home to the VA facility that provided the HISA prescription were determined from zip codes. The current study focuses on the VAMC prescribing stations (affiliated sites of administrative parent medical facilities) where HISA users obtained the HM, not the parent stations (administrative parent medical facilities).
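A small sketch of the rurality grouping described above; the function is illustrative and assumes the non-rural RUCA tracts (scores 1.0 and 1.1) are treated as urban.

```python
def ruca_group(ruca: float, combine_highly_rural: bool = True) -> str:
    """Group a census tract by RUCA score: 1.0/1.1 urban, 10.0 highly rural, else rural."""
    if ruca in (1.0, 1.1):
        return "urban"
    if ruca == 10.0:
        return "rural" if combine_highly_rural else "highly rural"
    return "rural"

print([ruca_group(r) for r in (1.0, 4.2, 10.0)])  # ['urban', 'rural', 'rural']
```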
HISA Utilization
To characterize HISA utilization geographically and over time, the number of users was mapped by county. Areas where users were increasing (hot spots) or decreasing (cold spots) also were mapped. The maps were created using Environmental Systems Research Institute ArcGIS Pro 2.2.1 software. We used the natural breaks (Jenks) data classification method in a choropleth to symbolize the change-over-time map. We then used the Getis-Ord Gi* optimized hot spot analysis tool in the ArcGIS Pro spatial statistics tool set to generate the hot/cold spot maps. This tool identifies clusters of high values (hot spots) and low values (cold spots), creating a new output layer, RHUs by county, with a Z score, P value, and CI for each county. The Gi Bin field classifies statistically significant hot and cold spots. Counties sorted into the ± 3 bin cluster with neighboring counties at a statistically significant 99% CI; the ± 2 bin indicates a 95% CI; the ± 1 bin reflects a 90% CI; and the 0 bin contains counties with no statistically significant clustering with neighboring counties.
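The Gi Bin categories map a county’s Gi* result to a confidence level. The sketch below shows one simplified way to derive the bin from the reported z score and P value; the ArcGIS optimized hot spot tool may apply additional adjustments, so this is illustrative only.

```python
def gi_bin(z: float, p: float) -> int:
    """Map a Gi* z score and P value to a simplified Gi Bin:
    +/-3 ~ 99% CI, +/-2 ~ 95% CI, +/-1 ~ 90% CI, 0 = not significant."""
    sign = 1 if z > 0 else -1  # hot spot (+) or cold spot (-)
    if p < 0.01:
        return 3 * sign
    if p < 0.05:
        return 2 * sign
    if p < 0.10:
        return 1 * sign
    return 0

print(gi_bin(z=2.7, p=0.007))   # 3  -> hot spot, 99% confidence
print(gi_bin(z=-2.1, p=0.036))  # -2 -> cold spot, 95% confidence
```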
Data Analysis
Data were cleaned and analyzed using SAS 9.4 and R 3.5.3. Descriptive statistics are provided for sociodemographic characteristics, clinical characteristics, and class. ANOVA and t tests were used to compare continuous variables between groups, while χ2 and Fisher exact tests were used for dichotomous and categorical outcome variables. The threshold for statistical significance for these tests was set at α = .001.
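The sketch below mirrors these group comparisons with SciPy on toy data (not study values), applying the α = .001 threshold; ANOVA via stats.f_oneway would be used analogously for comparisons across more than two groups.

```python
from scipy import stats

ALPHA = 0.001  # significance threshold used in the study

# Continuous variable (eg, age) compared between two classes with a t test
_, t_p = stats.ttest_ind([69, 71, 66, 73], [75, 78, 74, 80])

# Categorical variable compared with a chi-square test on a 2x2 table
_, chi_p, _, _ = stats.chi2_contingency([[120, 80], [95, 105]])

# Sparse categorical comparison with Fisher's exact test
_, fisher_p = stats.fisher_exact([[3, 7], [9, 2]])

for name, p in [("t test", t_p), ("chi-square", chi_p), ("Fisher exact", fisher_p)]:
    print(f"{name}: {'significant' if p < ALPHA else 'not significant'} at alpha = .001")
```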
Results
There were 10,810 RHUs from FY 2015 through FY 2018, and HISA utilization increased over the study period (Figure 1). Although some years showed usage decreases relative to the previous fiscal year, the cumulative trends showed an increase relative to FY 2015 for both classes of RVs (Figure 2). There was a 45.4% increase from FY 2015 to FY 2018, with a mean 13.6% yearly increase. Class 1 use increased 21.0% and Class 2 use increased 39.5% from FY 2015 to FY 2016 (Figure 3).
Most RHUs were male, White, and married. Class 1 and Class 2 RHUs differed significantly by age, race, marital status, and disability conditions: Class 1 RHUs were on average 6.6 years younger, with a mean age of 69.1 years compared with 75.7 years for Class 2 users. A plurality of Class 1 RHUs (29.4%) were aged 65 to 69 years, while a plurality of Class 2 users (41.4%) were aged ≥ 80 years. Musculoskeletal conditions were the most commonly identified type of condition for all RHUs (Table 1).
To better understand HISA utilization patterns and the net number of RHUs per county, we mapped RHUs by county and their change over time (Figure 4). We also compared US counties by number of RHUs from FY 2015 to FY 2018 and determined how clusters of high numbers of RHUs (hot spots) and low numbers of RHUs (cold spots) shifted over this period (Figure 5). Although HISA utilization grew over the study period, the net change in RHUs per county ranged from 9 to 20 persons per county. The number of RHUs increased over time in the Southwest, the Southeast, and much of the East/Northeast, whereas in the Central and Midwest regions the number of RHUs appeared to decline, reflecting a decrease in population and/or in use of the program. The cold spots in the Midwest and South Central United States appeared to expand, with statistically significant clustering among neighboring counties with low numbers of RHUs.
There were 11,166 HMs prescribed to RHUs (Table 2). Bathroom HMs were the dominant HM type for all facilities regardless of complexity level (Table 3). The San Antonio, Texas, VAMC showed the largest difference in HISA use between Class 1 and Class 2 (87.7% vs 12.3%). Except for the Des Moines VAMC, all VAMCs showed HISA use > 60% by Class 1 users.
Cost Data
Air-conditioning installation ($5007) was the costliest HM overall (Table 4), closely followed by bathroom ($4978) and kitchen modifications ($4305). Bathroom renovations were the costliest HM type for both Class 1 and Class 2, closely followed by electrical repair and air-conditioning installation for Class 1 and driveway reconstruction and wooden ramp construction for Class 2.
The mean award received for an HM was $4687 (Table 5). Although the number of RHUs increased from FY 2015 to FY 2016, the average cost decreased by $280 overall, by $195 for Class 1, and by $153 for Class 2. Except for a small decline in the number of Class 2 HISA recipients from FY 2017 to FY 2018, the number of RHUs grew continuously from FY 2015 to FY 2018, a net increase of 977 users overall (678 for Class 1 and 299 for Class 2). Despite this gain in the number of RHUs, average costs did not change notably over time. VISN 21 had the highest mean cost, followed by VISNs 17, 6, 22, and 20.
Travel
RHUs traveled a mean of about 95 minutes from their place of residence to access the HISA benefits program. Travel time and distance to the prescribing facility did not differ significantly between Class 1 and Class 2 users (Table 6).
The majority of Class 1 and Class 2 veterans accessed HISA benefits from their nearest facility; however, nearly one-quarter of both Class 1 and Class 2 RHUs (24% each) did not. Among the 2598 users who accessed a facility other than their nearest one, 97 (3.7%) accessed a facility ≤ 40 miles from home, 44% traveled 40 to 100 miles, and another 43.2% traveled 100 to 200 miles to obtain an HM prescription. A small group (1.1%) traveled > 500 miles to access a facility.
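The travel-distance categories reported above amount to binning each user's distance to the prescribing facility. A small sketch, with hypothetical distances and a 200 to 500 mile band that is implied but not reported explicitly in the text:

```python
import pandas as pd

# Hypothetical distances (miles) from home to the prescribing facility for
# users who did not use their nearest facility
distances = pd.Series([25, 62, 88, 130, 175, 240, 510, 95, 150])

bins = [0, 40, 100, 200, 500, float("inf")]
labels = ["<=40", "40-100", "100-200", "200-500", ">500"]
categories = pd.cut(distances, bins=bins, labels=labels)

# Share of users in each travel-distance category (percent)
print(categories.value_counts(normalize=True).sort_index() * 100)
```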
Discussion
Although utilization of the HISA program has steadily increased, participation by subpopulations such as RHUs can still be improved significantly. Veterans aged ≤ 46 years who have disabilities common among HISA recipients have low HISA utilization, and veterans with sensory disabilities also have low use. These subpopulations are among those in greatest need of attention and services.
A study by Lucas and Zelaya, which used 2016 National Health Interview Survey data to measure vision problems, hearing trouble, and dual sensory impairment in male veterans aged ≥ 18 years, found that veterans were more likely than nonveterans to report dual sensory impairment and balance difficulties.9 The number of female veterans is growing, but they had very low representation in this study.10 This emerging VHA population requires information and education on its HM benefits.
Home Modifications
The most common HM prescribed for RHUs was for the bathroom. Given the diversity of HM types the grant covers, further investigation is warranted into why prescription rates are low across most other HM types. There may be a lack of knowledge among providers and VWDs about the range of HMs that can be awarded under the grant, so it is important that HCPs and veterans receive education on HISA HM options.
Semeah and colleagues pointed out the need to assess the HISA HM ordering system to ensure that multiple HM items (eg, kitchen, air-conditioning, fees, driveway, and plumbing) are listed among the forced choices from which clinicians select.7 Poor housing in rural America is widespread: 63% of rural dwellings need renovations and/or repairs to be accessible to individuals with disabilities, and > 6.7 million rural homes have absent or faulty plumbing or kitchens; yet in this study, prescriptions for these HMs accounted for < 1% of the total.11,12
VISN 6 had the most HISA awards (1364), while VISN 21 had the fewest (245). Across all VISNs, Class 1 RHUs received more prescriptions than Class 2 RHUs. Future research may examine whether prescribers are fully aware that Class 2 veterans are eligible for HM prescriptions. VISN 21 ($5354), VISN 17 ($5302), and VISN 6 ($5301) had the highest mean HM expenditures. The national mean costs for HISA HMs were $4978 for bathrooms and $4305 for kitchens; for non-HISA HMs in FY 2017, the mean costs were $6362 and $12,255, respectively. A noteworthy concern is whether the maximum grant award limits are sufficient for more expensive and complex HMs, such as kitchen or major bathroom alterations.13
Facilities categorized as 1a, 1b, or 1c provided
North Florida/South Georgia was the highest-prescribing VAMC, with 39% more HM prescriptions than the second-highest prescribing facility (Durham, North Carolina). The data presented here cannot explain the large difference between the top facilities or the skewed distribution of total RHUs across VAMCs.
Travel-Related Variables
HISA beneficiaries face significant travel-related challenges. Just 3.6% of RHUs could access a facility within 40 miles of their home, and 43.2% traveled 100 to 200 miles from their home to obtain an HM prescription. Further exploration is warranted to understand how travel patterns affect access to and uptake of HISA.
RVs already have problems accessing care because of long travel times.14,15 The choice or necessity to travel to a more distant facility for a HISA prescription is problematic for RVs, especially because transportation is frequently reported in the literature as a barrier to resources for people living in rural communities.15-17 When patients face travel barriers, they wait longer to obtain medical services and often wait for their conditions to worsen before seeking care.15,18 Once an HM is completed, telerehabilitation is an effective method for delivering health care services to people in remote places.18,19 Considering that HISA use has the potential to improve quality of life, afford comfort, and facilitate activities of daily living for RVs, it is important that future studies examine how existing telehealth technologies can be used to improve HISA access.
Future Directions
County-level analyses are warranted in future studies exploring potential variables associated with HISA use, such as county-level rates of primary care physicians and other HCPs. Future research should explore how long-distance travel affects the HISA application process and HM implementation, and should focus on the HISA application structure and process to identify causes of delays. The HISA application process takes a mean of 6 months to complete, yet hospital stays typically last 1 to 3 weeks, making it impossible to connect HISA to hospital discharge, which was the original intent of the program. Future research can examine how telehealth services can expedite obtaining HISA benefits and coordinating the application process, and may study the possible causes of the wide variation in HM prescriptions across facilities. It also is important that educational programs provide information on the array of HM items that veterans can obtain.
Conclusions
In our previous study of the HISA cohort (2011-2017), we documented that increased utilization of the HISA program was warranted, based on the low national budgetary appropriation and the significantly low participation by vulnerable subpopulations, including veterans residing in rural areas or returning from recent conflicts.7 The present study documents national utilization patterns, demographic profiles, and clinical characteristics of RHUs from FY 2015 through FY 2018, data that may be useful to policy makers and HISA administrators in predicting future use and users. The data presented in this article identify trends; they do not establish a gold standard or any targeted utilization goal. Future research could focus on conceptualizing what such a gold standard utilization rate would be and the steps necessary to achieve it.
Acknowledgments
This research was supported by grant 15521 from the US Department of Veterans Affairs, Office of Rural Health. The research also was supported in part by grant K12 HD055929 from the National Institutes of Health.
1. US Department of Veterans Affairs, Veteran Health Administration, Office of Rural Health. Rural veteran health care challenges. Updated February 9, 2021. Accessed June 11, 2021. https://www.ruralhealth.va.gov/aboutus/ruralvets.asp
2. Holder KA. Veterans in rural America, 2011–2015. Published January 2017. Accessed June 11, 2021. https://www.census.gov/content/dam/Census/library/publications/2017/acs/acs-36.pdf
3. Pezzin LE, Bogner HR, Kurichi JE, et al. Preventable hospitalizations, barriers to care, and disability. Medicine (Baltimore). 2018;97(19):e0691. doi:10.1097/MD.0000000000010691
4. Rosenbach ML. Access and satisfaction within the disabled Medicare population. Health Care Financ Rev. 1995;17(2):147-167.
5. Semeah LM, Ganesh SP, Wang X, et al. Home modification and health services utilization in rural and urban veterans with disabilities. Housing Policy Debate. 2021. Published online: March 4, 2021. doi:10.1080/10511482.2020.1858923
6. Spoont M, Greer N, Su J, Fitzgerald P, Rutks I, Wilt TJ. Rural vs urban ambulatory health care: a systematic review. Published May 2011. Accessed June 11, 2021. https://www.hsrd.research.va.gov/publications/esp/ambulatory.pdf
7. Semeah LM, Wang X, Cowper Ripley DC, et al. Improving health through a home modification service for veterans. In: Fiedler BA, ed. Three Facets of Public Health and Paths to Improvements. Academic Press; 2020:381-416.
8. Semeah LM, Ahrentzen S, Jia H, Cowper-Ripley DC, Levy CE, Mann WC. The home improvements and structural alterations benefits program: veterans with disabilities and home accessibility. J Disability Policy Studies. 2017;28(1):43-51. doi:10.1177/1044207317696275
9. Lucas JW, Zelaya CE. Hearing difficulty, vision trouble, and balance problems among male veterans and nonveterans. Published June 12, 2020. Accessed June 11, 2021. https://www.cdc.gov/nchs/data/nhsr/nhsr142-508.pdf
10. US Department of Veterans Affairs, National Center for Veterans Analysis and Statistics. Women veterans report: the past, present, and future of women veterans. Published February 2017. Accessed June 11, 2021. https://www.va.gov/vetdata/docs/SpecialReports/Women_Veterans_2015_Final.pdf
11. US Department of Housing and Urban Development, Office of Policy Development and Research. Housing challenges of rural seniors. Published 2017. Accessed June 11, 2021. https://www.huduser.gov/portal/periodicals/em/summer17/highlight1.html
12. Pendall R, Goodman L, Zhu J, Gold A. The future of rural housing. Published October 2016. Accessed June 11, 2021. https://www.urban.org/sites/default/files/publication/85101/2000972-the-future-of-rural-housing_6.pdf
13. Joint Center for Housing Studies at Harvard University. Improving America’s housing 2019. Published 2019. Accessed June 11, 2021. https://www.jchs.harvard.edu/sites/default/files/reports/files/Harvard_JCHS_Improving_Americas_Housing_2019.pdf
14. Schooley BL, Horan TA, Lee PW, West PA. Rural veteran access to healthcare services: investigating the role of information and communication technologies in overcoming spatial barriers. Perspect Health Inf Manag. 2010;7(Spring):1f. Published 2010 Apr 1.
15. Ripley DC, Kwong PL, Vogel WB, Kurichi JE, Bates BE, Davenport C. How does geographic access affect in-hospital mortality for veterans with acute ischemic stroke?. Med Care. 2015;53(6):501-509. doi:10.1097/MLR.0000000000000366
16. Cowper-Ripley DC, Reker DM, Hayes J, et al. Geographic access to VHA rehabilitation services for traumatically injured veterans. Fed Pract. 2009;26(10):28-39.
17. Smith M, Towne S, Herrera-Venson A, et al. Delivery of fall prevention interventions for at-risk older adults in rural areas: findings from a national dissemination. Int J Environ Res Public Health. 2018;15:2798. doi:10.3390/ijerph15122798
18. Hale-Gallardo JL, Kreider CM, Jia H, et al. Telerehabilitation for rural veterans: a qualitative assessment of barriers and facilitators to implementation. J Multidiscip Healthc. 2020;13:559-570. doi:10.2147/JMDH.S247267
19. Sarfo FS, Akassi J, Kyem G, et al. Long-term outcomes of stroke in a Ghanaian outpatient clinic. J Stroke Cerebrovasc Dis. 2018;27(4):1090-1099. doi:10.1016/j.jstrokecerebrovasdis.2017.11.017
Preoperative Care Assessment of Need Scores Are Associated With Postoperative Mortality and Length of Stay in Veterans Undergoing Knee Replacement
Risk calculators can be of great value in guiding clinical decision making, patient-centered precision medicine, and resource allocation.1 Several perioperative risk prediction models have emerged in recent decades that estimate specific hazards (eg, cardiovascular complications after noncardiac surgery) with varying accuracy and utility. In the perioperative sphere, the time windows are often limited to an index hospitalization or 30 days following surgery or discharge.2-9 Although longer periods are of interest to patients, families, and health systems, few widely used or validated models are designed to look beyond this very narrow window.10,11 In addition, perioperative risk prediction models do not routinely incorporate parameters of a wide variety of health or demographic domains, such as patterns of health care, health care utilization, or medication use.
In 2013, in response to the need for near real-time information to guide delivery of enhanced care management services, the Veterans Health Administration (VHA) Office of Informatics and Analytics developed automated risk prediction models that used detailed electronic health record (EHR) data. These models were used to report Care Assessment Need (CAN) scores each week for all VHA enrollees and include data from a wide array of health domains. These CAN scores predict the risk for hospitalization, death, or either event within 90 days and 1 year.12,13 Each score is reported as both a predicted probability (0-1) and as a percentile in relation to all other VHA enrollees (a value between 1 and 99).13 The data used to calculate CAN scores are listed in Table 1.12
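As an illustration of this dual reporting, the following is a minimal sketch of how a model's predicted probabilities might be converted into 1-99 percentile ranks relative to all scored patients. It is written in Python, is not the VHA implementation, and the exact ranking procedure behind the published CAN percentiles is an assumption here.

```python
# Minimal sketch (not the VHA implementation): convert predicted probabilities
# into 1-99 percentile ranks relative to all scored patients, mirroring the dual
# reporting of CAN scores as a probability and a percentile.
import numpy as np

def probability_to_percentile(probabilities: np.ndarray) -> np.ndarray:
    """Rank each predicted probability against the cohort and map it onto 1-99."""
    ranks = probabilities.argsort().argsort() + 1       # 1-based rank within the cohort
    pct = ranks / len(probabilities) * 100               # rank expressed as a percentage
    return np.clip(np.floor(pct), 1, 99).astype(int)     # bound the percentile at 1 and 99

# Example with five hypothetical 1-year mortality probabilities
probs = np.array([0.02, 0.10, 0.35, 0.63, 0.90])
print(probability_to_percentile(probs))  # [20 40 60 80 99]
```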
The CAN models do not differentiate surgical admissions from nonsurgical admissions or other procedural clinic visits; as such, the effect of undergoing a surgical procedure on the CAN score cannot be isolated from that of other health-related events. At the same time, a short-term increase in system utilization caused by an elective surgical procedure, such as a total knee replacement (TKR), would presumably be reflected in a change in the CAN score, but this has not been studied.
Since their introduction, CAN scores have been routinely accessed by primary care teams and used to facilitate care coordination for thousands of VHA patients. However, these CAN scores are currently not available to VHA surgeons, anesthesiologists, or other perioperative clinicians. In this study, we examine the distributions of preoperative CAN scores and explore the relationships of preoperative CAN 1-year mortality scores with 1-year survival following discharge and length of stay (LOS) during index hospitalization in a cohort of US veterans who underwent TKR, the most common elective operation performed within the VHA system.
Methods
Following approval by the Durham Veterans Affairs Medical Center Institutional Review Board, all necessary data were extracted from the VHA Corporate Data Warehouse (CDW) repository.14 Informed consent was waived because of the minimal-risk nature of the study.
We used Current Procedural Terminology codes (27438, 27446, 27447, 27486, 27487, 27488) and International Classification of Diseases, 9th edition clinical modification procedure codes (81.54, 81.55, 81.59, 00.80-00.84) to identify all veterans who had undergone primary or revision TKR between July 2014 and December 2015 in VHA Veterans Integrated Service Network 1 (Maine, Vermont, New Hampshire, Massachusetts, Connecticut, Rhode Island, New York, Pennsylvania, West Virginia, Virginia, North Carolina). Because we focused on outcomes following hospital discharge, patients who died before discharge were excluded from the analysis. Preoperative CAN 1-year mortality score was chosen as the measure under the assumption that long-term survival may be the most meaningful of the 4 possible CAN score measures.
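To make the cohort definition concrete, the sketch below filters a hypothetical table of procedure records by the listed CPT and ICD-9 procedure codes and the study window. The column names are assumptions; the study's actual extraction ran against the CDW.

```python
# Minimal sketch of cohort identification, assuming a pandas DataFrame of
# procedure records with hypothetical columns (patient_id, procedure_date,
# cpt_code, icd9_proc_code); the actual extraction used the VHA CDW.
import pandas as pd

TKR_CPT = {"27438", "27446", "27447", "27486", "27487", "27488"}
TKR_ICD9 = {"81.54", "81.55", "81.59", "00.80", "00.81", "00.82", "00.83", "00.84"}

def identify_tkr_cohort(procedures: pd.DataFrame) -> pd.DataFrame:
    """Return one row per patient for primary or revision TKR in the study window."""
    in_window = procedures["procedure_date"].between("2014-07-01", "2015-12-31")
    has_code = (procedures["cpt_code"].isin(TKR_CPT)
                | procedures["icd9_proc_code"].isin(TKR_ICD9))
    cohort = procedures[in_window & has_code]
    # Keep the earliest qualifying procedure per patient as the index surgery
    return cohort.sort_values("procedure_date").drop_duplicates("patient_id")
```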
Our primary objective was to determine the distribution of preoperative CAN scores in the study population. Our secondary objective was to examine the relationships between preoperative CAN 1-year mortality scores and both 1-year mortality and hospital LOS.
Study Variables
For each patient, we extracted the date of index surgery. The primary exposure, or independent variable, was the CAN score in the week prior to this date. Because a prior study has shown that CAN score trajectories do not change significantly over time, the date-stamped CAN scores in the week before surgery represent what would have been available to clinicians in a preoperative setting.15 Because CAN scores are refreshed and overwritten every week, we extracted archived scores from the CDW.
For the 1-year survival outcome, the primary dependent variable, we queried the vital status files in the CDW for the date of death, if applicable. We confirmed survival beyond 1 year by verifying vital signs in the CDW for a minimum of 2 independent encounters more than 1 year after the date of discharge. The index LOS, the secondary outcome, was computed as the difference between the date of admission and the date of hospital discharge.
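A minimal sketch of this outcome derivation follows, under the assumption that admission, discharge, and (where applicable) death dates are available as datetime columns with hypothetical names.

```python
# Minimal sketch of the outcome derivation; column names are hypothetical, and
# the actual study confirmed survival using CDW vital status files and encounters.
import pandas as pd

def derive_outcomes(cohort: pd.DataFrame) -> pd.DataFrame:
    df = cohort.copy()
    # Secondary outcome: index length of stay in days
    df["los_days"] = (df["discharge_date"] - df["admission_date"]).dt.days
    # Primary outcome: death within 1 year of hospital discharge
    days_to_death = (df["death_date"] - df["discharge_date"]).dt.days
    df["died_within_1y"] = days_to_death.notna() & days_to_death.le(365)
    return df
```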
Statistical Methods
The parameters and performance of the multivariable logistic regression models developed to compute the various CAN mortality and hospitalization risk scores have been previously described.12 Briefly, Wang and colleagues created parsimonious regression models using backward selection. Model discrimination was evaluated using the C (concordance) statistic. Model calibration was assessed by comparing predicted vs observed event rates by risk deciles and by performing Cox proportional hazards regression.
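The next sketch shows, under stated assumptions, how discrimination (the C-statistic) and calibration by risk deciles can be checked for a binary outcome; it is illustrative only and does not re-derive the published CAN models.

```python
# Minimal sketch of the discrimination and calibration checks described above;
# y_true is a binary outcome and y_pred a predicted probability (both hypothetical).
import pandas as pd
from sklearn.metrics import roc_auc_score

def discrimination_and_calibration(y_true: pd.Series, y_pred: pd.Series) -> pd.DataFrame:
    # C-statistic: for a binary outcome this equals the area under the ROC curve
    print(f"C-statistic: {roc_auc_score(y_true, y_pred):.3f}")
    # Calibration: compare mean predicted vs observed event rates within risk deciles
    deciles = pd.qcut(y_pred, q=10, labels=False, duplicates="drop")
    table = pd.DataFrame({"predicted": y_pred, "observed": y_true, "decile": deciles})
    return table.groupby("decile")[["predicted", "observed"]].mean()
```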
We plotted histograms to display preoperative CAN scores as a simple measure of distribution (Figure 1). We also examined the cumulative proportion of patients at each preoperative CAN 1-year mortality score.
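A short sketch of those two displays, with hypothetical data, is shown below.

```python
# Minimal sketch of the distribution displays (histogram and cumulative proportion
# of patients at each preoperative CAN score); the data here are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

def plot_score_distribution(can_scores: np.ndarray) -> None:
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.hist(can_scores, bins=range(0, 101, 5))
    ax1.set_xlabel("Preoperative CAN 1-year mortality score")
    ax1.set_ylabel("Number of patients")
    sorted_scores = np.sort(can_scores)
    cumulative = np.arange(1, len(sorted_scores) + 1) / len(sorted_scores)
    ax2.plot(sorted_scores, cumulative)
    ax2.set_xlabel("Preoperative CAN 1-year mortality score")
    ax2.set_ylabel("Cumulative proportion of patients")
    plt.tight_layout()
    plt.show()
```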
Using a conventional t test, we compared means of preoperative CAN 1-year mortality scores in patients who survived vs those who died within 1 year. We also constructed a plot of the proportion of patients who had died within 1 year vs preoperative CAN 1-year mortality scores. Kaplan-Meier curves were then constructed examining 1-year survival by CAN 1-year mortality score by terciles.
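The comparisons above can be sketched as follows, again with hypothetical column names and assuming the SciPy and lifelines packages rather than the authors' actual software; "days_to_event" is assumed to be follow-up time in days censored at 365.

```python
# Minimal sketch of the t test and tercile-based Kaplan-Meier curves described above.
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from lifelines import KaplanMeierFitter

def compare_and_plot(df: pd.DataFrame) -> None:
    # Conventional t test of preoperative CAN scores in survivors vs nonsurvivors
    survivors = df.loc[~df["died_within_1y"], "can_1y_mortality"]
    nonsurvivors = df.loc[df["died_within_1y"], "can_1y_mortality"]
    t, p = stats.ttest_ind(survivors, nonsurvivors)
    print(f"t = {t:.2f}, P = {p:.4f}")

    # Kaplan-Meier curves by tercile of the preoperative CAN 1-year mortality score
    df["tercile"] = pd.qcut(df["can_1y_mortality"], q=3, labels=["low", "middle", "high"])
    fig, ax = plt.subplots()
    for name, group in df.groupby("tercile"):
        kmf = KaplanMeierFitter()
        kmf.fit(group["days_to_event"], event_observed=group["died_within_1y"], label=str(name))
        kmf.plot_survival_function(ax=ax)
    ax.set_xlabel("Days since discharge")
    ax.set_ylabel("Survival probability")
    plt.show()
```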
Finally, we examined the relationship between preoperative CAN 1-year mortality scores and index LOS in 2 ways: we plotted LOS across CAN scores, and we constructed LOESS (locally estimated scatterplot smoothing) curves of LOS against CAN scores.
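A sketch of such a LOESS smooth, using the statsmodels lowess routine with hypothetical variable names and an assumed smoothing span, is shown below.

```python
# Minimal sketch of a LOESS (locally weighted) smooth of index LOS against the
# preoperative CAN 1-year mortality score; the smoothing span (frac) is an assumption.
import matplotlib.pyplot as plt
import statsmodels.api as sm

def plot_los_vs_can(can_scores, los_days) -> None:
    smoothed = sm.nonparametric.lowess(los_days, can_scores, frac=0.3)  # sorted (x, y) pairs
    plt.scatter(can_scores, los_days, s=5, alpha=0.3, label="Patients")
    plt.plot(smoothed[:, 0], smoothed[:, 1], color="red", label="LOESS smooth")
    plt.xlabel("Preoperative CAN 1-year mortality score")
    plt.ylabel("Index length of stay (days)")
    plt.legend()
    plt.show()
```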
Results
We identified 8206 patients who had undergone a TKR over the 18-month study period. The overall mean (SD) age was 65 (8.41) years; 93% were male, and 78% were White veterans. Patient demographics are well described in previous publications.16,17
In terms of model parameters for the CAN score models, C-statistics for the 90-day outcome models were as follows: 0.833 for the model predicting hospitalization (95% CI, 0.832-0.834); 0.865 for the model predicting death (95% CI, 0.863-0.876); and 0.811 for the model predicting either event (95% CI, 0.810-0.812). C-statistics for the 1-year outcome models were 0.809 for the model predicting hospitalization (95% CI, 0.808-0.810); 0.851 for the model predicting death (95% CI, 0.849-0.852); and 0.787 for the model predicting either event (95% CI, 0.786-0.787). Models were well calibrated with α = 0 and β = 1, demonstrating strong agreement between observed and predicted event rates.
The distribution of preoperative CAN 1-year mortality scores was close to normal (median, 50; interquartile range, 40; mean [SD], 48 [25.6]) (eTable). The original CAN score models were developed with an equal number of patients in each stratum and, as such, the scores are approximately normally distributed.12 Our cohort showed a similar pattern of distribution. Distributions of the remaining preoperative CAN scores (90-day mortality, 1-year hospitalization, 90-day hospitalization) are shown in Figures 2, 3, and 4. Not surprisingly, histograms for both 90-day and 1-year hospitalization were skewed toward higher scores, indicating that these patients were expected to be hospitalized in the near future.
Overall, 1.4% (110/8096) of patients died within 1 year of surgery. Comparing 1-year mortality CAN scores in survivors vs nonsurvivors, we found statistically significant differences in means (47 vs 66 respectively, P < .001) and medians (45 vs 75 respectively, P < .001) (Table 2). In the plot examining the relationship between preoperative 1-year mortality CAN scores and 1-year mortality, the percentage who died within 1 year increased initially for patients with CAN scores > 60 and again exponentially for patients with CAN scores > 80. Examining Kaplan-Meier curves, we found that survivors and nonsurvivors separated early after surgery, and the differences between the top tercile and the middle/lower terciles were statistically significant (P < .001). Mortality rates were about 0.5% in the lower and middle terciles but about 2% in the upper tercile (Figure 5).
In the plot examining the relationship between CAN scores and index LOS, the LOS rose significantly beyond a CAN score of 60 and dramatically beyond a CAN score of 80 (Figure 6). LOESS curves also showed 2 inflection points suggesting an incremental and sequential rise in the LOS with increasing CAN scores (Figure 7). Mean (SD) LOS in days for the lowest to highest terciles was 2.6 (1.7), 2.8 (2.1), and 3.6 (2.2), respectively.
Discussion
CAN scores are automatically generated each week by EHR-based multivariable risk models. These scores have excellent predictive accuracy for 90-day and 1-year mortality and hospitalization and are routinely used by VHA primary care teams to assist with clinical operations.13 We studied the distribution of CAN 1-year mortality scores in a preoperative context and examined relationships of the preoperative CAN 1-year mortality scores with postoperative mortality and LOS in 8206 veterans who underwent TKR.
There are several noteworthy findings. First, the overall 1-year mortality rate observed following TKR (1.4%) was similar to other published reports.18,19 Not surprisingly, preoperative CAN 1-year mortality scores were significantly higher in veterans who died than in those who survived. The majority of patients who died had a preoperative CAN 1-year mortality score > 75, while most who survived had a preoperative CAN 1-year mortality score < 45 (P < .001). Interestingly, the same scores showed a nonlinear correlation with LOS. Index LOS was about 4 days for patients in the highest tercile of CAN scores vs 2.5 days in the lowest tercile, but the initial increase in LOS was detected at a CAN score of about 55 to 60.
In addition, mortality rates varied widely across segments of the population when grouped according to preoperative CAN scores. One-year mortality in the highest tercile reached 2%, about 4-fold higher than in the lower terciles (0.5%). Examination of the Kaplan-Meier curves showed that this difference in mortality between the highest tercile and the lower 2 groups appears soon after discharge and continues to increase over time, suggesting that the factors contributing to the increased mortality are present at the time of discharge and persist beyond the postoperative period. In summary, although CAN scores were not designed for use in the perioperative context, we found that preoperative CAN 1-year mortality scores predicted both increased hospital LOS following elective TKR and mortality during the year after surgery.
Our findings raise several important questions. The decision to undergo elective surgery is complex. Arguably, individuals who undergo elective knee replacement should be healthy enough to undergo, recover from, and reap the benefits of a procedure that does not extend life. The distribution of preoperative CAN 1-year mortality scores in our study population was similar to that of the general VHA enrollee population, with similar measured mortality rates (≤ 0.5% vs ≥ 1.7% in the low and high terciles, respectively).1 Further study comparing outcomes in matched cohorts who did and did not undergo joint replacement would be of interest. In the meantime, though, the association of high but not extreme CAN scores with increased hospital LOS could be used to guide allocation of resources to this group, mitigating the increased cost and risk to which it is exposed. In addition, the insight afforded by CAN scores may enhance shared decision making by identifying patients at the very highest risk (eg, 1-year mortality CAN score ≥ 90), who conceivably might not survive long enough to recover from and enjoy their reconstructed knee and who might, in the long run, be harmed by undergoing the procedure.
Many total joint arthroplasties are performed in older patients, a population in which frailty is increasingly recognized as a significant risk factor for poor outcomes.20,21 CAN scores reliably identify high-risk patients and have been shown to correlate with frailty in this group.22 Multiple authors have reported improved outcomes and cost reductions after implementation of programs targeting modifiable risk factors in high-risk surgical candidates.23-25 A preoperative assessment that includes the CAN score may be valuable in identifying patients who would benefit most from prehabilitation programs or other interventions designed to blunt the impact of frailty. It is true that many elements used to calculate the CAN score would not be considered modifiable, especially in the short term. However, specific contributors to frailty, such as nutritional status and polypharmacy, might be potential candidates. As with all multivariable risk prediction models, there are multiple paths to a high CAN score, and further research to identify clinically relevant subgroups may help inform efforts to improve perioperative care within this population.
Hospital LOS is of intense interest for many reasons, not least its utility as a surrogate for cost and for the risk of immediate perioperative adverse events, such as multidrug-resistant hospital-acquired infections, the need for postacute facility-based rehabilitation, and deconditioning that increases the risk of falls and fractures in older patients.26-29 In addition, its importance has been magnified by the COVID-19 pandemic, in which the restarting of elective surgery programs has changed the traditional criteria by which patients are scheduled for surgery.
We have shown that elevated CAN scores are able to identify patients at risk for extended hospital stays and, as such, may be useful additional data in allocating scarce operating room time and other resources for optimal patient and health care provider safety.30,31 Individual surgeons and hospital systems would, of course, decide which patients should be triaged to go first, based on local priorities; however, choosing lower risk patients with minimal risk of morbidity and mortality while pursuing prehabilitation for higher risk patients is a reasonable approach.
Limitations
Our study has several limitations. Only a single surgical procedure was included, albeit the most common one performed in the VHA. In addition, no information was available concerning the precise clinical course for these patients, such as the duration of surgery, anesthetic technique, and management of the acute perioperative course. Although we assumed that patients received standard care such that these factors would not affect their mortality or LOS out of proportion to their preoperative clinical status, confounding cannot be excluded. Therefore, further study is necessary to determine whether CAN scores can accurately predict mortality and/or LOS for patients undergoing other procedures. Further, a clinical trial would be required to assess whether systematic provision of the CAN score at the point of surgery would affect care and, more important, outcomes. In addition, we did not perform multivariable analyses including and excluding various components of the CAN score models. Currently, CAN scores could be made available to the surgical and anesthesia communities at minimal or no cost and are updated automatically. Model calibration and discrimination in this particular setting were not validated.
Because our interest is in applying an existing resource to a current clinical and operational problem rather than in creating or validating a new tool, we chose to test the simple bivariate relationship between preoperative CAN scores and outcomes. We chose the preoperative 1-year mortality CAN score from among the 4 options under the assumption that long-term survival is the most meaningful of the 4 candidate outcomes. Finally, while CAN scores are currently calculated only for patients cared for within the VHA, few of the underlying data elements are unavailable to civilian health systems. The most problematic would be documentation of actual prescription filling, but this is a topic of increasing interest to the medical and academic communities, and we hope that access to such information will improve.32-34
Conclusions
Although designed for use by VHA primary care teams, CAN scores may also have value for perioperative clinicians, predicting mortality and prolonged hospital LOS in patients with elevated 1-year mortality scores. The advantages of CAN scores relative to other perioperative risk calculators lie in their ability to predict long-term rather than 30-day survival and in their automatic generation on a near-real-time basis for all patients who receive care in VHA ambulatory clinics. Further study is needed to determine their practical utility in shared decision making, preoperative evaluation and optimization, and perioperative resource allocation.
Acknowledgments
This work was supported by the US Department of Veterans Affairs (VA) National Center for Patient Safety, Field Office 10A4E, through the Patient Safety Center of Inquiry at the Durham VA Medical Center in North Carolina. The study also received support from the Center of Innovation to Accelerate Discovery and Practice Transformation (CIN 13-410) at the Durham VA Health Care System.
1. McNair AGK, MacKichan F, Donovan JL, et al. What surgeons tell patients and what patients want to know before major cancer surgery: a qualitative study. BMC Cancer. 2016;16:258. doi:10.1186/s12885-016-2292-3
2. Grover FL, Hammermeister KE, Burchfiel C. Initial report of the Veterans Administration Preoperative Risk Assessment Study for Cardiac Surgery. Ann Thorac Surg. 1990;50(1):12-26; discussion 27-28. doi:10.1016/0003-4975(90)90073-f
3. Khuri SF, Daley J, Henderson W, et al. The National Veterans Administration Surgical Risk Study: risk adjustment for the comparative assessment of the quality of surgical care. J Am Coll Surg. 1995;180(5):519-531.
4. Glance LG, Lustik SJ, Hannan EL, et al. The Surgical Mortality Probability Model: derivation and validation of a simple risk prediction rule for noncardiac surgery. Ann Surg. 2012;255(4):696-702. doi:10.1097/SLA.0b013e31824b45af
5. Keller DS, Kroll D, Papaconstantinou HT, Ellis CN. Development and validation of a methodology to reduce mortality using the veterans affairs surgical quality improvement program risk calculator. J Am Coll Surg. 2017;224(4):602-607. doi:10.1016/j.jamcollsurg.2016.12.033
6. Bilimoria KY, Liu Y, Paruch JL, et al. Development and evaluation of the universal ACS NSQIP surgical risk calculator: a decision aid and informed consent tool for patients and surgeons. J Am Coll Surg. 2013;217(5):833-842.e831-833. doi:10.1016/j.jamcollsurg.2013.07.385
7. Ford MK, Beattie WS, Wijeysundera DN. Systematic review: prediction of perioperative cardiac complications and mortality by the revised cardiac risk index. Ann Intern Med. 2010;152(1):26-35. doi:10.7326/0003-4819-152-1-201001050-00007
8. Gupta PK, Gupta H, Sundaram A, et al. Development and validation of a risk calculator for prediction of cardiac risk after surgery. Circulation. 2011;124(4):381-387. doi:10.1161/CIRCULATIONAHA.110.015701
9. Lee TH, Marcantonio ER, Mangione CM, et al. Derivation and prospective validation of a simple index for prediction of cardiac risk of major noncardiac surgery. Circulation. 1999;100(10):1043-1049. doi:10.1161/01.cir.100.10.1043
10. Smith T, Li X, Nylander W, Gunnar W. Thirty-day postoperative mortality risk estimates and 1-year survival in Veterans Health Administration surgery patients. JAMA Surg. 2016;151(5):417-422. doi:10.1001/jamasurg.2015.4882
11. Damhuis RA, Wijnhoven BP, Plaisier PW, Kirkels WJ, Kranse R, van Lanschot JJ. Comparison of 30-day, 90-day and in-hospital postoperative mortality for eight different cancer types. Br J Surg. 2012;99(8):1149-1154. doi:10.1002/bjs.8813
12. Wang L, Porter B, Maynard C, et al. Predicting risk of hospitalization or death among patients receiving primary care in the Veterans Health Administration. Med Care. 2013;51(4):368-373. doi:10.1016/j.amjcard.2012.06.038
13. Fihn SD, Francis J, Clancy C, et al. Insights from advanced analytics at the Veterans Health Administration. Health Aff (Millwood). 2014;33(7):1203-1211. doi:10.1377/hlthaff.2014.0054
14. Noël PH, Copeland LA, Perrin RA, et al. VHA Corporate Data Warehouse height and weight data: opportunities and challenges for health services research. J Rehabil Res Dev. 2010;47(8):739-750. doi:10.1682/jrrd.2009.08.0110
15. Wong ES, Yoon J, Piegari RI, Rosland AM, Fihn SD, Chang ET. Identifying latent subgroups of high-risk patients using risk score trajectories. J Gen Intern Med. 2018;33(12):2120-2126. doi:10.1007/s11606-018-4653-x
16. Chen Q, Hsia HL, Overman R, et al. Impact of an opioid safety initiative on patients undergoing total knee arthroplasty: a time series analysis. Anesthesiology. 2019;131(2):369-380. doi:10.1097/ALN.0000000000002771
17. Hsia HL, Takemoto S, van de Ven T, et al. Acute pain is associated with chronic opioid use after total knee arthroplasty. Reg Anesth Pain Med. 2018;43(7):705-711. doi:10.1097/AAP.0000000000000831
18. Inacio MCS, Dillon MT, Miric A, Navarro RA, Paxton EW. Mortality after total knee and total hip arthroplasty in a large integrated health care system. Perm J. 2017;21:16-171. doi:10.7812/TPP/16-171
19. Lee QJ, Mak WP, Wong YC. Mortality following primary total knee replacement in public hospitals in Hong Kong. Hong Kong Med J. 2016;22(3):237-241. doi:10.12809/hkmj154712
20. Lin HS, Watts JN, Peel NM, Hubbard RE. Frailty and post-operative outcomes in older surgical patients: a systematic review. BMC Geriatr. 2016;16(1):157. doi:10.1186/s12877-016-0329-8
21. Shinall MC Jr, Arya S, Youk A, et al. Association of preoperative patient frailty and operative stress with postoperative mortality. JAMA Surg. 2019;155(1):e194620. doi:10.1001/jamasurg.2019.4620
22. Ruiz JG, Priyadarshni S, Rahaman Z, et al. Validation of an automatically generated screening score for frailty: the care assessment need (CAN) score. BMC Geriatr. 2018;18(1):106. doi:10.1186/s12877-018-0802-7
23. Bernstein DN, Liu TC, Winegar AL, et al. Evaluation of a preoperative optimization protocol for primary hip and knee arthroplasty patients. J Arthroplasty. 2018;33(12):3642-3648. doi:10.1016/j.arth.2018.08.018
24. Sodhi N, Anis HK, Coste M, et al. A nationwide analysis of preoperative planning on operative times and postoperative complications in total knee arthroplasty. J Knee Surg. 2019;32(11):1040-1045. doi:10.1055/s-0039-1677790
25. Krause A, Sayeed Z, El-Othmani M, Pallekonda V, Mihalko W, Saleh KJ. Outpatient total knee arthroplasty: are we there yet? (part 1). Orthop Clin North Am. 2018;49(1):1-6. doi:10.1016/j.ocl.2017.08.002
26. Barrasa-Villar JI, Aibar-Remón C, Prieto-Andrés P, Mareca-Doñate R, Moliner-Lahoz J. Impact on morbidity, mortality, and length of stay of hospital-acquired infections by resistant microorganisms. Clin Infect Dis. 2017;65(4):644-652. doi:10.1093/cid/cix411
27. Nikkel LE, Kates SL, Schreck M, Maceroli M, Mahmood B, Elfar JC. Length of hospital stay after hip fracture and risk of early mortality after discharge in New York state: retrospective cohort study. BMJ. 2015;351:h6246. doi:10.1136/bmj.h6246
28. Marfil-Garza BA, Belaunzarán-Zamudio PF, Gulias-Herrero A, et al. Risk factors associated with prolonged hospital length-of-stay: 18-year retrospective study of hospitalizations in a tertiary healthcare center in Mexico. PLoS One. 2018;13(11):e0207203. doi:10.1371/journal.pone.0207203
29. Hirsch CH, Sommers L, Olsen A, Mullen L, Winograd CH. The natural history of functional morbidity in hospitalized older patients. J Am Geriatr Soc. 1990;38(12):1296-1303. doi:10.1111/j.1532-5415.1990.tb03451.x
30. Iyengar KP, Jain VK, Vaish A, Vaishya R, Maini L, Lal H. Post COVID-19: planning strategies to resume orthopaedic surgery - challenges and considerations. J Clin Orthop Trauma. 2020;11(suppl 3):S291-S295. doi:10.1016/j.jcot.2020.04.028
31. O’Connor CM, Anoushiravani AA, DiCaprio MR, Healy WL, Iorio R. Economic recovery after the COVID-19 pandemic: resuming elective orthopedic surgery and total joint arthroplasty. J Arthroplasty. 2020;35(suppl 7):S32-S36. doi:10.1016/j.arth.2020.04.038
32. Mauseth SA, Skurtveit S, Skovlund E, Langhammer A, Spigset O. Medication use and association with urinary incontinence in women: data from the Norwegian Prescription Database and the HUNT study. Neurourol Urodyn. 2018;37(4):1448-1457. doi:10.1002/nau.23473
33. Sultan RS, Correll CU, Schoenbaum M, King M, Walkup JT, Olfson M. National patterns of commonly prescribed psychotropic medications to young people. J Child Adolesc Psychopharmacol. 2018;28(3):158-165. doi:10.1089/cap.2017.0077
34. McCoy RG, Dykhoff HJ, Sangaralingham L, et al. Adoption of new glucose-lowering medications in the U.S.-the case of SGLT2 inhibitors: nationwide cohort study. Diabetes Technol Ther. 2019;21(12):702-712. doi:10.1089/dia.2019.0213
Risk calculators can be of great value in guiding clinical decision making, patient-centered precision medicine, and resource allocation.1 Several perioperative risk prediction models have emerged in recent decades that estimate specific hazards (eg, cardiovascular complications after noncardiac surgery) with varying accuracy and utility. In the perioperative sphere, the time windows are often limited to an index hospitalization or 30 days following surgery or discharge.2-9 Although longer periods are of interest to patients, families, and health systems, few widely used or validated models are designed to look beyond this very narrow window.10,11 In addition, perioperative risk prediction models do not routinely incorporate parameters of a wide variety of health or demographic domains, such as patterns of health care, health care utilization, or medication use.
In 2013, in response to the need for near real-time information to guide delivery of enhanced care management services, the Veterans Health Administration (VHA) Office of Informatics and Analytics developed automated risk prediction models that used detailed electronic health record (EHR) data. These models were used to report Care Assessment Need (CAN) scores each week for all VHA enrollees and include data from a wide array of health domains. These CAN scores predict the risk for hospitalization, death, or either event within 90 days and 1 year.12,13 Each score is reported as both a predicted probability (0-1) and as a percentile in relation to all other VHA enrollees (a value between 1 and 99).13 The data used to calculate CAN scores are listed in Table 1.12
Surgical procedures or admissions would not be differentiated from nonsurgical admissions or other procedural clinic visits, and as such, it is not possible to isolate the effect of undergoing a surgical procedure from another health-related event on the CAN score. At the same time though, a short-term increase in system utilization caused by an elective surgical procedure such as a total knee replacement (TKR) would presumably be reflected in a change in CAN score, but this has not been studied.
Since their introduction, CAN scores have been routinely accessed by primary care teams and used to facilitate care coordination for thousands of VHA patients. However, these CAN scores are currently not available to VHA surgeons, anesthesiologists, or other perioperative clinicians. In this study, we examine the distributions of preoperative CAN scores and explore the relationships of preoperative CAN 1-year mortality scores with 1-year survival following discharge and length of stay (LOS) during index hospitalization in a cohort of US veterans who underwent TKR, the most common elective operation performed within the VHA system.
Methods
Following approval of the Durham Veterans Affairs Medical Center Institutional Review Board, all necessary data were extracted from the VHA Corporate Data Warehouse (CDW) repository.14 Informed consent was waived due to the minimal risk nature of the study.
We used Current Procedural Terminology codes (27438, 27446, 27447, 27486, 27487, 27488) and International Classification of Diseases, 9th edition clinical modification procedure codes (81.54, 81.55, 81.59, 00.80-00.84) to identify all veterans who had undergone primary or revision TKR between July 2014 and December 2015 in VHA Veterans Integrated Service Network 1 (Maine, Vermont, New Hampshire, Massachusetts, Connecticut, Rhode Island, New York, Pennsylvania, West Virginia, Virginia, North Carolina). Because we focused on outcomes following hospital discharge, patients who died before discharge were excluded from the analysis. Preoperative CAN 1-year mortality score was chosen as the measure under the assumption that long-term survival may be the most meaningful of the 4 possible CAN score measures.
Our primary objective was to determine distribution of preoperative CAN scores in the study population. Our secondary was to study relationships among the preoperative CAN 1-year mortality scores and 1-year mortality and hospital LOS.
Study Variables
For each patient, we extracted the date of index surgery. The primary exposure or independent variable was the CAN score in the week prior to this date. Because prior study has shown that CAN scores trajectories do not significantly change over time, the date-stamped CAN scores in the week before surgery represent what would have been available to clinicians in a preoperative setting.15 Since CAN scores are refreshed and overwritten every week, we extracted archived scores from the CDW.
For the 1-year survival outcome, the primary dependent variable, we queried the vital status files in the CDW for the date of death if applicable. We confirmed survival beyond 1 year by examining vital signs in the CDW for a minimum of 2 independent encounters beyond 1 year after the date of discharge. To compute the index LOS, the secondary outcome, we computed the difference between the date of admission and date of hospital discharge.
Statistical Methods
The parameters and performance of the multivariable logistic regression models developed to compute the various CAN mortality and hospitalization risk scores have been previously described.12 Briefly, Wang and colleagues created parsimonious regression models using backward selection. Model discrimination was evaluated using C (concordance)-statistic. Model calibration was assessed by comparing predicted vs observed event rates by risk deciles and performing Cox proportional hazards regression.
We plotted histograms to display preoperative CAN scores as a simple measure of distribution (Figure 1). We also examined the cumulative proportion of patients at each preoperative CAN 1-year mortality score.
Using a conventional t test, we compared means of preoperative CAN 1-year mortality scores in patients who survived vs those who died within 1 year. We also constructed a plot of the proportion of patients who had died within 1 year vs preoperative CAN 1-year mortality scores. Kaplan-Meier curves were then constructed examining 1-year survival by CAN 1-year mortality score by terciles.
Finally, we examined the relationship between preoperative CAN 1-year mortality scores and index LOS in 2 ways: We plotted LOS across CAN scores, and we constructed a
Results
We identified 8206 patients who had undergone a TKR over the 18-month study period. The overall mean (SD) for age was 65 (8.41) years; 93% were male, and 78% were White veterans. Patient demographics are well described in a previous publication.16,17
In terms of model parameters for the CAN score models, C-statistics for the 90-day outcome models were as follows: 0.833 for the model predicting hospitalization (95% CI, 0.832-0.834); 0.865 for the model predicting death (95% CI, 0.863-0.876); and 0.811 for the model predicting either event (95% CI, 0.810-0.812). C-statistics for the 1-year outcome models were 0.809 for the model predicting hospitalization (95% CI, 0.808-0.810); 0.851 for the model predicting death (95% CI, 0.849-0.852); and 0.787 for the model predicting either event (95% CI, 0.786-0.787). Models were well calibrated with α = 0 and β = 1, demonstrating strong agreement between observed and predicted event rates.
The distribution of preoperative CAN 1-year mortality scores was close to normal (median, 50; interquartile range, 40; mean [SD] 48 [25.6]) (eTable). The original CAN score models were developed having an equal number of patients in each strata and as such, are normally distributed.12 Our cohort was similar in pattern of distribution. Distributions of the remaining preoperative CAN scores (90-day mortality, 1-year hospitalization, 90-day hospitalization) are shown in Figures 2, 3, and 4. Not surprisingly, histograms for both 90-day and 1-year hospitalization were skewed toward higher scores, indicating that these patients were expected to be hospitalized in the near future.
Overall, 1.4% (110/8096) of patients died within 1 year of surgery. Comparing 1-year mortality CAN scores in survivors vs nonsurvivors, we found statistically significant differences in means (47 vs 66 respectively, P < .001) and medians (45 vs 75 respectively, P < .001) (Table 2). In the plot examining the relationship between preoperative 1-year mortality CAN scores and 1-year mortality, the percentage who died within 1 year increased initially for patients with CAN scores > 60 and again exponentially for patients with CAN scores > 80. Examining Kaplan-Meier curves, we found that survivors and nonsurvivors separated early after surgery, and the differences between the top tercile and the middle/lower terciles were statistically significant (P < .001). Mortality rates were about 0.5% in the lower and middle terciles but about 2% in the upper tercile (Figure 5).
In the plot examining the relationship between CAN scores and index LOS, the LOS rose significantly beyond a CAN score of 60 and dramatically beyond a CAN score of 80 (Figure 6). LOESS curves also showed 2 inflection points suggesting an incremental and sequential rise in the LOS with increasing CAN scores (Figure 7). Mean (SD) LOS in days for the lowest to highest terciles was 2.6 (1.7), 2.8 (2.1), and 3.6 (2.2), respectively.
Discussion
CAN scores are automatically generated each week by EHR-based multivariable risk models. These scores have excellent predictive accuracy for 90-day and 1-year mortality and hospitalization and are routinely used by VHA primary care teams to assist with clinical operations.13 We studied the distribution of CAN 1-year mortality scores in a preoperative context and examined relationships of the preoperative CAN 1-year mortality scores with postoperative mortality and LOS in 8206 veterans who underwent TKR.
There are several noteworthy findings. First, the overall 1-year mortality rate observed following TKR (1.4%) was similar to other published reports.18,19 Not surprisingly, preoperative CAN 1-year mortality scores were significantly higher in veterans who died compared with those of survivors. The majority of patients who died had a preoperative CAN 1-year mortality score > 75 while most who survived had a preoperative CAN 1-year mortality score < 45 (P < .001). Interestingly, the same scores showed a nonlinear correlation with LOS. Index LOS was about 4 days in patients in the highest tercile of CAN scores vs 2.5 days in the lowest tercile, but the initial increase in LOS was detected at a CAN score of about 55 to 60.
In addition, mortality rate varied widely in different segments of the population when grouped according to preoperative CAN scores. One-year mortality rates in the highest tercile reached 2%, about 4-fold higher than that of lower terciles (0.5%). Examination of the Kaplan-Meier curves showed that this difference in mortality between the highest tercile and the lower 2 groups appears soon after discharge and continues to increase over time, suggesting that the factors contributing to the increased mortality are present at the time of discharge and persist beyond the postoperative period. In summary, although CAN scores were not designed for use in the perioperative context, we found that preoperative CAN 1-year mortality scores are broadly predictive of mortality, but especially for increases in LOS following elective TKA, both increases in hospital LOS following elective TKA and mortality over the year after TKA.
Our findings raise several important questions. The decision to undergo elective surgery is complex. Arguably, individuals who undergo elective knee replacement should be healthy enough to undergo, recover, and reap the benefits from a procedure that does not extend life. The distribution of preoperative CAN 1-year mortality scores for our study population was similar to that of the general VHA enrollee population with similar measured mortality rates (≤ 0.5% vs ≥ 1.7% in the low and high terciles, respectively).1 Further study comparing outcomes in matched cohorts who did and did not undergo joint replacement would be of interest. In lieu of this, though, the association of high but not extreme CAN scores with increased hospital LOS may potentially be used to guide allocation of resources to this group, obviating the increased cost and risk to which this group is exposed. And the additional insight afforded by CAN scores may enhance shared decision-making models by identifying patients at the very highest risk (eg, 1-year mortality CAN score ≥ 90), patients who conceivably might not survive long enough to recover from and enjoy their reconstructed knee, who might in the long run be harmed by undergoing the procedure.
Many total joint arthroplasties are performed in older patients, a population in which frailty is increasingly recognized as a significant risk factor for poor outcomes.20,21 CAN scores reliably identify high-risk patients and have been shown to correlate with frailty in this group.22 Multiple authors have reported improved outcomes with cost reductions after implementation of programs targeting modifiable risk factors in high-risk surgical candidates.23-25 A preoperative assessment that includes the CAN score may be valuable in identifying patients who would benefit most from prehabilitation programs or other interventions designed to blunt the impact of frailty. It is true that many elements used to calculate the CAN score would not be considered modifiable, especially in the short term. However, specific contributors to frailty, such as nutritional status and polypharmacy might be potential candidates. As with all multivariable risk prediction models, there are multiple paths to a high CAN score, and further research to identify clinically relevant subgroups may help inform efforts to improve perioperative care within this population.
Hospital LOS is of intense interest for many reasons, not least its utility as a surrogate for cost and increased risk for immediate perioperative adverse events, such as multidrug-resistant hospital acquired infections, need for postacute facility-based rehabilitation, and deconditioning that increase risks of falls and fractures in the older population.26-29 In addition, its importance is magnified due to the COVID-19 pandemic context in which restarting elective surgery programs has changed traditional criteria by which patients are scheduled for surgery.
We have shown that elevated CAN scores are able to identify patients at risk for extended hospital stays and, as such, may be useful additional data in allocating scarce operating room time and other resources for optimal patient and health care provider safety.30,31 Individual surgeons and hospital systems would, of course, decide which patients should be triaged to go first, based on local priorities; however, choosing lower risk patients with minimal risk of morbidity and mortality while pursuing prehabilitation for higher risk patients is a reasonable approach.
Limitations
Our study has several limitations. Only a single surgical procedure was included, albeit the most common one performed in the VHA. In addition, no information was available concerning the precise clinical course for these patients, such as the duration of surgery, anesthetic technique, and management of acute, perioperative course. Although the assumption was made that patients received standard care in a manner such that these factors would not significantly affect either their mortality or their LOS out of proportion to their preoperative clinical status, confounding cannot be excluded. Therefore, further study is necessary to determine whether CAN scores can accurately predict mortality and/or LOS for patients undergoing other procedures. Further, a clinical trial is required to assess whether systematic provision of the CAN score at the point of surgery would impact care and, more important, impact outcomes. In addition, multivariable analyses were not performed, including and excluding various components of the CAN score models. Currently, CAN scores could be made available to the surgical/anesthesia communities at minimal or no cost and are updated automatically. Model calibration and discrimination in this particular setting were not validated.
Because our interest is in leveraging an existing resource to a current clinical and operational problem rather than in creating or validating a new tool, we chose to test the simple bivariate relationship between preoperative CAN scores and outcomes. We chose the preoperative 1-year mortality CAN score from among the 4 options under the assumption that long-term survival is the most meaningful of the 4 candidate outcomes. Finally, while the CAN scores are currently only calculated and generated for patients cared for within the VHA, few data elements are unavailable to civilian health systems. The most problematic would be documentation of actual prescription filling, but this is a topic of increasing interest to the medical and academic communities and access to such information we hope will improve.32-34
Conclusions
Although designed for use by VHA primary care teams, CAN scores also may have value for perioperative clinicians, predicting mortality and prolonged hospital LOS in those with elevated 1-year mortality scores. Advantages of CAN scores relative to other perioperative risk calculators lies in their ability to predict long-term rather than 30-day survival and that they are automatically generated on a near-real-time basis for all patients who receive care in VHA ambulatory clinics. Further study is needed to determine practical utility in shared decision making, preoperative evaluation and optimization, and perioperative resource allocation.
Acknowledgments
This work was supported by the US Department of Veterans Affairs (VA) National Center for Patient Safety, Field Office 10A4E, through the Patient Safety Center of Inquiry at the Durham VA Medical Center in North Carolina. The study also received support from the Center of Innovation to Accelerate Discovery and Practice Transformation (CIN 13-410) at the Durham VA Health Care System.
Risk calculators can be of great value in guiding clinical decision making, patient-centered precision medicine, and resource allocation.1 Several perioperative risk prediction models have emerged in recent decades that estimate specific hazards (eg, cardiovascular complications after noncardiac surgery) with varying accuracy and utility. In the perioperative sphere, the time windows are often limited to an index hospitalization or 30 days following surgery or discharge.2-9 Although longer periods are of interest to patients, families, and health systems, few widely used or validated models are designed to look beyond this very narrow window.10,11 In addition, perioperative risk prediction models do not routinely incorporate parameters of a wide variety of health or demographic domains, such as patterns of health care, health care utilization, or medication use.
In 2013, in response to the need for near real-time information to guide delivery of enhanced care management services, the Veterans Health Administration (VHA) Office of Informatics and Analytics developed automated risk prediction models that used detailed electronic health record (EHR) data. These models were used to report Care Assessment Need (CAN) scores each week for all VHA enrollees and include data from a wide array of health domains. These CAN scores predict the risk for hospitalization, death, or either event within 90 days and 1 year.12,13 Each score is reported as both a predicted probability (0-1) and as a percentile in relation to all other VHA enrollees (a value between 1 and 99).13 The data used to calculate CAN scores are listed in Table 1.12
Surgical procedures or admissions would not be differentiated from nonsurgical admissions or other procedural clinic visits, and as such, it is not possible to isolate the effect of undergoing a surgical procedure from another health-related event on the CAN score. At the same time though, a short-term increase in system utilization caused by an elective surgical procedure such as a total knee replacement (TKR) would presumably be reflected in a change in CAN score, but this has not been studied.
Since their introduction, CAN scores have been routinely accessed by primary care teams and used to facilitate care coordination for thousands of VHA patients. However, these CAN scores are currently not available to VHA surgeons, anesthesiologists, or other perioperative clinicians. In this study, we examine the distributions of preoperative CAN scores and explore the relationships of preoperative CAN 1-year mortality scores with 1-year survival following discharge and length of stay (LOS) during index hospitalization in a cohort of US veterans who underwent TKR, the most common elective operation performed within the VHA system.
Methods
Following approval of the Durham Veterans Affairs Medical Center Institutional Review Board, all necessary data were extracted from the VHA Corporate Data Warehouse (CDW) repository.14 Informed consent was waived due to the minimal risk nature of the study.
We used Current Procedural Terminology codes (27438, 27446, 27447, 27486, 27487, 27488) and International Classification of Diseases, 9th edition clinical modification procedure codes (81.54, 81.55, 81.59, 00.80-00.84) to identify all veterans who had undergone primary or revision TKR between July 2014 and December 2015 in VHA Veterans Integrated Service Network 1 (Maine, Vermont, New Hampshire, Massachusetts, Connecticut, Rhode Island, New York, Pennsylvania, West Virginia, Virginia, North Carolina). Because we focused on outcomes following hospital discharge, patients who died before discharge were excluded from the analysis. Preoperative CAN 1-year mortality score was chosen as the measure under the assumption that long-term survival may be the most meaningful of the 4 possible CAN score measures.
Our primary objective was to determine distribution of preoperative CAN scores in the study population. Our secondary was to study relationships among the preoperative CAN 1-year mortality scores and 1-year mortality and hospital LOS.
Study Variables
For each patient, we extracted the date of index surgery. The primary exposure or independent variable was the CAN score in the week prior to this date. Because prior study has shown that CAN scores trajectories do not significantly change over time, the date-stamped CAN scores in the week before surgery represent what would have been available to clinicians in a preoperative setting.15 Since CAN scores are refreshed and overwritten every week, we extracted archived scores from the CDW.
For the 1-year survival outcome, the primary dependent variable, we queried the vital status files in the CDW for the date of death if applicable. We confirmed survival beyond 1 year by examining vital signs in the CDW for a minimum of 2 independent encounters beyond 1 year after the date of discharge. To compute the index LOS, the secondary outcome, we computed the difference between the date of admission and date of hospital discharge.
Statistical Methods
The parameters and performance of the multivariable logistic regression models developed to compute the various CAN mortality and hospitalization risk scores have been previously described.12 Briefly, Wang and colleagues created parsimonious regression models using backward selection. Model discrimination was evaluated using C (concordance)-statistic. Model calibration was assessed by comparing predicted vs observed event rates by risk deciles and performing Cox proportional hazards regression.
We plotted histograms to display preoperative CAN scores as a simple measure of distribution (Figure 1). We also examined the cumulative proportion of patients at each preoperative CAN 1-year mortality score.
Using a conventional t test, we compared means of preoperative CAN 1-year mortality scores in patients who survived vs those who died within 1 year. We also constructed a plot of the proportion of patients who had died within 1 year vs preoperative CAN 1-year mortality scores. Kaplan-Meier curves were then constructed examining 1-year survival by CAN 1-year mortality score by terciles.
Finally, we examined the relationship between preoperative CAN 1-year mortality scores and index LOS in 2 ways: We plotted LOS across CAN scores, and we constructed a
Results
We identified 8206 patients who had undergone a TKR over the 18-month study period. The overall mean (SD) for age was 65 (8.41) years; 93% were male, and 78% were White veterans. Patient demographics are well described in a previous publication.16,17
In terms of model parameters for the CAN score models, C-statistics for the 90-day outcome models were as follows: 0.833 for the model predicting hospitalization (95% CI, 0.832-0.834); 0.865 for the model predicting death (95% CI, 0.863-0.876); and 0.811 for the model predicting either event (95% CI, 0.810-0.812). C-statistics for the 1-year outcome models were 0.809 for the model predicting hospitalization (95% CI, 0.808-0.810); 0.851 for the model predicting death (95% CI, 0.849-0.852); and 0.787 for the model predicting either event (95% CI, 0.786-0.787). Models were well calibrated with α = 0 and β = 1, demonstrating strong agreement between observed and predicted event rates.
The distribution of preoperative CAN 1-year mortality scores was close to normal (median, 50; interquartile range, 40; mean [SD] 48 [25.6]) (eTable). The original CAN score models were developed having an equal number of patients in each strata and as such, are normally distributed.12 Our cohort was similar in pattern of distribution. Distributions of the remaining preoperative CAN scores (90-day mortality, 1-year hospitalization, 90-day hospitalization) are shown in Figures 2, 3, and 4. Not surprisingly, histograms for both 90-day and 1-year hospitalization were skewed toward higher scores, indicating that these patients were expected to be hospitalized in the near future.
Overall, 1.4% (110/8096) of patients died within 1 year of surgery. Comparing 1-year mortality CAN scores in survivors vs nonsurvivors, we found statistically significant differences in means (47 vs 66 respectively, P < .001) and medians (45 vs 75 respectively, P < .001) (Table 2). In the plot examining the relationship between preoperative 1-year mortality CAN scores and 1-year mortality, the percentage who died within 1 year increased initially for patients with CAN scores > 60 and again exponentially for patients with CAN scores > 80. Examining Kaplan-Meier curves, we found that survivors and nonsurvivors separated early after surgery, and the differences between the top tercile and the middle/lower terciles were statistically significant (P < .001). Mortality rates were about 0.5% in the lower and middle terciles but about 2% in the upper tercile (Figure 5).
In the plot examining the relationship between CAN scores and index LOS, the LOS rose significantly beyond a CAN score of 60 and dramatically beyond a CAN score of 80 (Figure 6). LOESS curves also showed 2 inflection points suggesting an incremental and sequential rise in the LOS with increasing CAN scores (Figure 7). Mean (SD) LOS in days for the lowest to highest terciles was 2.6 (1.7), 2.8 (2.1), and 3.6 (2.2), respectively.
Discussion
CAN scores are automatically generated each week by EHR-based multivariable risk models. These scores have excellent predictive accuracy for 90-day and 1-year mortality and hospitalization and are routinely used by VHA primary care teams to assist with clinical operations.13 We studied the distribution of CAN 1-year mortality scores in a preoperative context and examined relationships of the preoperative CAN 1-year mortality scores with postoperative mortality and LOS in 8206 veterans who underwent TKR.
There are several noteworthy findings. First, the overall 1-year mortality rate observed following TKR (1.4%) was similar to other published reports.18,19 Not surprisingly, preoperative CAN 1-year mortality scores were significantly higher in veterans who died compared with those of survivors. The majority of patients who died had a preoperative CAN 1-year mortality score > 75 while most who survived had a preoperative CAN 1-year mortality score < 45 (P < .001). Interestingly, the same scores showed a nonlinear correlation with LOS. Index LOS was about 4 days in patients in the highest tercile of CAN scores vs 2.5 days in the lowest tercile, but the initial increase in LOS was detected at a CAN score of about 55 to 60.
In addition, mortality rates varied widely across segments of the population when grouped according to preoperative CAN scores. One-year mortality in the highest tercile reached 2%, about 4-fold higher than in the lower terciles (0.5%). Examination of the Kaplan-Meier curves showed that this difference in mortality between the highest tercile and the lower 2 groups appears soon after discharge and continues to increase over time, suggesting that the factors contributing to the increased mortality are present at the time of discharge and persist beyond the postoperative period. In summary, although CAN scores were not designed for use in the perioperative context, we found that preoperative CAN 1-year mortality scores predict both increases in hospital LOS following elective TKA and mortality over the year after TKA.
Our findings raise several important questions. The decision to undergo elective surgery is complex. Arguably, individuals who undergo elective knee replacement should be healthy enough to tolerate, recover from, and reap the benefits of a procedure that does not extend life. The distribution of preoperative CAN 1-year mortality scores in our study population was similar to that of the general VHA enrollee population, with similar measured mortality rates (≤ 0.5% vs ≥ 1.7% in the low and high terciles, respectively).1 Further study comparing outcomes in matched cohorts who did and did not undergo joint replacement would be of interest. In the meantime, the association of high but not extreme CAN scores with increased hospital LOS could be used to guide allocation of resources to this group and to mitigate the increased cost and risk to which it is exposed. The additional insight afforded by CAN scores also may enhance shared decision making by identifying patients at the very highest risk (eg, 1-year mortality CAN score ≥ 90), who conceivably might not survive long enough to recover from and enjoy their reconstructed knee and who might, in the long run, be harmed by undergoing the procedure.
Many total joint arthroplasties are performed in older patients, a population in which frailty is increasingly recognized as a significant risk factor for poor outcomes.20,21 CAN scores reliably identify high-risk patients and have been shown to correlate with frailty in this group.22 Multiple authors have reported improved outcomes and cost reductions after implementation of programs targeting modifiable risk factors in high-risk surgical candidates.23-25 A preoperative assessment that includes the CAN score may be valuable in identifying patients who would benefit most from prehabilitation programs or other interventions designed to blunt the impact of frailty. It is true that many elements used to calculate the CAN score would not be considered modifiable, especially in the short term; however, specific contributors to frailty, such as nutritional status and polypharmacy, might be potential candidates. As with all multivariable risk prediction models, there are multiple paths to a high CAN score, and further research to identify clinically relevant subgroups may help inform efforts to improve perioperative care in this population.
Hospital LOS is of intense interest for many reasons, not least its utility as a surrogate for cost and for the risk of immediate perioperative adverse events, such as multidrug-resistant hospital-acquired infections, need for postacute facility-based rehabilitation, and deconditioning that increases the risk of falls and fractures in older patients.26-29 Its importance is further magnified by the COVID-19 pandemic, in which restarting elective surgery programs has changed the traditional criteria by which patients are scheduled for surgery.
We have shown that elevated CAN scores are able to identify patients at risk for extended hospital stays and, as such, may be useful additional data in allocating scarce operating room time and other resources for optimal patient and health care provider safety.30,31 Individual surgeons and hospital systems would, of course, decide which patients should be triaged to go first, based on local priorities; however, choosing lower risk patients with minimal risk of morbidity and mortality while pursuing prehabilitation for higher risk patients is a reasonable approach.
Limitations
Our study has several limitations. Only a single surgical procedure was included, albeit the most common one performed in the VHA. In addition, no information was available concerning the precise clinical course of these patients, such as the duration of surgery, anesthetic technique, and management of the acute perioperative course. Although we assumed that patients received standard care such that these factors would not affect either mortality or LOS out of proportion to their preoperative clinical status, confounding cannot be excluded. Therefore, further study is necessary to determine whether CAN scores can accurately predict mortality and/or LOS for patients undergoing other procedures. Further, a clinical trial is required to assess whether systematic provision of the CAN score at the point of surgery would affect care and, more important, outcomes. In addition, multivariable analyses including and excluding various components of the CAN score models were not performed. Currently, CAN scores could be made available to the surgical/anesthesia communities at minimal or no cost and are updated automatically. Model calibration and discrimination in this particular setting were not validated.
Because our interest is in applying an existing resource to a current clinical and operational problem rather than in creating or validating a new tool, we chose to test the simple bivariate relationship between preoperative CAN scores and outcomes. We chose the preoperative 1-year mortality CAN score from among the 4 options under the assumption that long-term survival is the most meaningful of the 4 candidate outcomes. Finally, while CAN scores are currently calculated only for patients cared for within the VHA, few of the underlying data elements are unavailable to civilian health systems. The most problematic would be documentation of actual prescription filling, but this is a topic of increasing interest to the medical and academic communities, and we hope access to such information will improve.32-34
Conclusions
Although designed for use by VHA primary care teams, CAN scores also may have value for perioperative clinicians, predicting mortality and prolonged hospital LOS in patients with elevated 1-year mortality scores. The advantages of CAN scores relative to other perioperative risk calculators lie in their ability to predict long-term rather than 30-day survival and in their automatic generation on a near-real-time basis for all patients who receive care in VHA ambulatory clinics. Further study is needed to determine their practical utility in shared decision making, preoperative evaluation and optimization, and perioperative resource allocation.
Acknowledgments
This work was supported by the US Department of Veterans Affairs (VA) National Center for Patient Safety, Field Office 10A4E, through the Patient Safety Center of Inquiry at the Durham VA Medical Center in North Carolina. The study also received support from the Center of Innovation to Accelerate Discovery and Practice Transformation (CIN 13-410) at the Durham VA Health Care System.
1. McNair AGK, MacKichan F, Donovan JL, et al. What surgeons tell patients and what patients want to know before major cancer surgery: a qualitative study. BMC Cancer. 2016;16:258. doi:10.1186/s12885-016-2292-3
2. Grover FL, Hammermeister KE, Burchfiel C. Initial report of the Veterans Administration Preoperative Risk Assessment Study for Cardiac Surgery. Ann Thorac Surg. 1990;50(1):12-26; discussion 27-18. doi:10.1016/0003-4975(90)90073-f
3. Khuri SF, Daley J, Henderson W, et al. The National Veterans Administration Surgical Risk Study: risk adjustment for the comparative assessment of the quality of surgical care. J Am Coll Surg. 1995;180(5):519-531.
4. Glance LG, Lustik SJ, Hannan EL, et al. The Surgical Mortality Probability Model: derivation and validation of a simple risk prediction rule for noncardiac surgery. Ann Surg. 2012;255(4):696-702. doi:10.1097/SLA.0b013e31824b45af
5. Keller DS, Kroll D, Papaconstantinou HT, Ellis CN. Development and validation of a methodology to reduce mortality using the veterans affairs surgical quality improvement program risk calculator. J Am Coll Surg. 2017;224(4):602-607. doi:10.1016/j.jamcollsurg.2016.12.033
6. Bilimoria KY, Liu Y, Paruch JL, et al. Development and evaluation of the universal ACS NSQIP surgical risk calculator: a decision aid and informed consent tool for patients and surgeons. J Am Coll Surg. 2013;217(5):833-842.e831-833. doi:10.1016/j.jamcollsurg.2013.07.385
7. Ford MK, Beattie WS, Wijeysundera DN. Systematic review: prediction of perioperative cardiac complications and mortality by the revised cardiac risk index. Ann Intern Med. 2010;152(1):26-35. doi:10.7326/0003-4819-152-1-201001050-00007
8. Gupta PK, Gupta H, Sundaram A, et al. Development and validation of a risk calculator for prediction of cardiac risk after surgery. Circulation. 2011;124(4):381-387. doi:10.1161/CIRCULATIONAHA.110.015701
9. Lee TH, Marcantonio ER, Mangione CM, et al. Derivation and prospective validation of a simple index for prediction of cardiac risk of major noncardiac surgery. Circulation. 1999;100(10):1043-1049. doi:10.1161/01.cir.100.10.1043
10. Smith T, Li X, Nylander W, Gunnar W. Thirty-day postoperative mortality risk estimates and 1-year survival in Veterans Health Administration surgery patients. JAMA Surg. 2016;151(5):417-422. doi:10.1001/jamasurg.2015.4882
11. Damhuis RA, Wijnhoven BP, Plaisier PW, Kirkels WJ, Kranse R, van Lanschot JJ. Comparison of 30-day, 90-day and in-hospital postoperative mortality for eight different cancer types. Br J Surg. 2012;99(8):1149-1154. doi:10.1002/bjs.8813
12. Wang L, Porter B, Maynard C, et al. Predicting risk of hospitalization or death among patients receiving primary care in the Veterans Health Administration. Med Care. 2013;51(4):368-373. doi:10.1016/j.amjcard.2012.06.038
13. Fihn SD, Francis J, Clancy C, et al. Insights from advanced analytics at the Veterans Health Administration. Health Aff (Millwood). 2014;33(7):1203-1211. doi:10.1377/hlthaff.2014.0054
14. Noël PH, Copeland LA, Perrin RA, et al. VHA Corporate Data Warehouse height and weight data: opportunities and challenges for health services research. J Rehabil Res Dev. 2010;47(8):739-750. doi:10.1682/jrrd.2009.08.0110
15. Wong ES, Yoon J, Piegari RI, Rosland AM, Fihn SD, Chang ET. Identifying latent subgroups of high-risk patients using risk score trajectories. J Gen Intern Med. 2018;33(12):2120-2126. doi:10.1007/s11606-018-4653-x
16. Chen Q, Hsia HL, Overman R, et al. Impact of an opioid safety initiative on patients undergoing total knee arthroplasty: a time series analysis. Anesthesiology. 2019;131(2):369-380. doi:10.1097/ALN.0000000000002771
17. Hsia HL, Takemoto S, van de Ven T, et al. Acute pain is associated with chronic opioid use after total knee arthroplasty. Reg Anesth Pain Med. 2018;43(7):705-711. doi:10.1097/AAP.0000000000000831
18. Inacio MCS, Dillon MT, Miric A, Navarro RA, Paxton EW. Mortality after total knee and total hip arthroplasty in a large integrated health care system. Perm J. 2017;21:16-171. doi:10.7812/TPP/16-171
19. Lee QJ, Mak WP, Wong YC. Mortality following primary total knee replacement in public hospitals in Hong Kong. Hong Kong Med J. 2016;22(3):237-241. doi:10.12809/hkmj154712
20. Lin HS, Watts JN, Peel NM, Hubbard RE. Frailty and post-operative outcomes in older surgical patients: a systematic review. BMC Geriatr. 2016;16(1):157. doi:10.1186/s12877-016-0329-8
21. Shinall MC Jr, Arya S, Youk A, et al. Association of preoperative patient frailty and operative stress with postoperative mortality. JAMA Surg. 2019;155(1):e194620. doi:10.1001/jamasurg.2019.4620
22. Ruiz JG, Priyadarshni S, Rahaman Z, et al. Validation of an automatically generated screening score for frailty: the care assessment need (CAN) score. BMC Geriatr. 2018;18(1):106. doi:10.1186/s12877-018-0802-7
23. Bernstein DN, Liu TC, Winegar AL, et al. Evaluation of a preoperative optimization protocol for primary hip and knee arthroplasty patients. J Arthroplasty. 2018;33(12):3642-3648. doi:10.1016/j.arth.2018.08.018
24. Sodhi N, Anis HK, Coste M, et al. A nationwide analysis of preoperative planning on operative times and postoperative complications in total knee arthroplasty. J Knee Surg. 2019;32(11):1040-1045. doi:10.1055/s-0039-1677790
25. Krause A, Sayeed Z, El-Othmani M, Pallekonda V, Mihalko W, Saleh KJ. Outpatient total knee arthroplasty: are we there yet? (part 1). Orthop Clin North Am. 2018;49(1):1-6. doi:10.1016/j.ocl.2017.08.002
26. Barrasa-Villar JI, Aibar-Remón C, Prieto-Andrés P, Mareca-Doñate R, Moliner-Lahoz J. Impact on morbidity, mortality, and length of stay of hospital-acquired infections by resistant microorganisms. Clin Infect Dis. 2017;65(4):644-652. doi:10.1093/cid/cix411
27. Nikkel LE, Kates SL, Schreck M, Maceroli M, Mahmood B, Elfar JC. Length of hospital stay after hip fracture and risk of early mortality after discharge in New York state: retrospective cohort study. BMJ. 2015;351:h6246. doi:10.1136/bmj.h6246
28. Marfil-Garza BA, Belaunzarán-Zamudio PF, Gulias-Herrero A, et al. Risk factors associated with prolonged hospital length-of-stay: 18-year retrospective study of hospitalizations in a tertiary healthcare center in Mexico. PLoS One. 2018;13(11):e0207203. doi:10.1371/journal.pone.0207203
29. Hirsch CH, Sommers L, Olsen A, Mullen L, Winograd CH. The natural history of functional morbidity in hospitalized older patients. J Am Geriatr Soc. 1990;38(12):1296-1303. doi:10.1111/j.1532-5415.1990.tb03451.x
30. Iyengar KP, Jain VK, Vaish A, Vaishya R, Maini L, Lal H. Post COVID-19: planning strategies to resume orthopaedic surgery - challenges and considerations. J Clin Orthop Trauma. 2020;11(suppl 3):S291-S295. doi:10.1016/j.jcot.2020.04.028
31. O’Connor CM, Anoushiravani AA, DiCaprio MR, Healy WL, Iorio R. Economic recovery after the COVID-19 pandemic: resuming elective orthopedic surgery and total joint arthroplasty. J Arthroplasty. 2020;35(suppl 7):S32-S36. doi:10.1016/j.arth.2020.04.038
32. Mauseth SA, Skurtveit S, Skovlund E, Langhammer A, Spigset O. Medication use and association with urinary incontinence in women: data from the Norwegian Prescription Database and the HUNT study. Neurourol Urodyn. 2018;37(4):1448-1457. doi:10.1002/nau.23473
33. Sultan RS, Correll CU, Schoenbaum M, King M, Walkup JT, Olfson M. National patterns of commonly prescribed psychotropic medications to young people. J Child Adolesc Psychopharmacol. 2018;28(3):158-165. doi:10.1089/cap.2017.0077
34. McCoy RG, Dykhoff HJ, Sangaralingham L, et al. Adoption of new glucose-lowering medications in the U.S.-the case of SGLT2 inhibitors: nationwide cohort study. Diabetes Technol Ther. 2019;21(12):702-712. doi:10.1089/dia.2019.0213
Twenty Years of Breast Reduction Surgery at a Veterans Affairs Medical Center
Women make up an estimated 10% of the veteran population.1 Based on 2015 data, the US Department of Veterans Affairs (VA) projected an increase of 18,000 women veterans per year for 10 years. The number of women veterans enrolled in VA health care increased from 397,024 to 729,989 (83.9%) between 2005 and 2015.2 This rise in the number of enrolled women veterans also increased the demand for female-specific health care services, such as breast reduction surgery, a reconstructive procedure provided at the Malcom Randall VA Medical Center (MRVAMC), a federal teaching hospital in Gainesville, Florida.
Patients with symptomatic macromastia typically report a history of neck and shoulder pain, shoulder grooving from bra straps, inframammary intertrigo, difficulty finding clothes that fit, and discomfort participating in sports. Patients report a high satisfaction rate after breast reduction surgery for symptomatic macromastia.3-5 Unfortunately, complications from the surgery can significantly disrupt a woman’s life due to unplanned hospital admissions, clinic appointments, wound care, time off work, and poor aesthetic outcomes. Awareness among faculty of a large number of complications after breast reduction surgery prompted the MRVAMC Plastic Surgery Service to establish a stricter surgical screening protocol, based on body mass index (BMI) values and negative nicotine status, to reduce patients’ risk before offering surgery. A medical literature search did not find an existing study on breast reduction surgery specific to veterans.
Methods
The University of Florida and North Florida/South Georgia Veterans Health System Institutional Review Board approved a retrospective chart review of all breast reduction surgeries performed at MRVAMC over a 20-year period (July 1, 2000-June 30, 2020). Electronic health records were queried for all primary bilateral breast reduction surgeries performed for symptomatic macromastia using Current Procedural Terminology code 19318. Potentially modifiable or predictable risk factors for wound complications were recorded: nicotine status, BMI, diabetes mellitus (DM) status, skin incision pattern, and pedicle location. Skin incision patterns were either vertical (periareolar plus a vertical scar from the areola to the inframammary fold) or traditional Wise pattern (also known as anchor pattern: periareolar scar, vertical scar to inframammary fold, plus a horizontal scar along the inframammary fold) as seen in Figures 1 and 2. The pedicle is the source of blood supply to the nipple, which was documented as either from the inferior aspect or the superior or superior/medial aspect.
For this study, blood supply from the superior and superior/medial aspects was logged in the same category. Records were reviewed 3 months after surgery for documentation of local wound complications, such as hematoma, infection, wound breakdown, skin necrosis, and nipple necrosis. Major complications were defined as those requiring an unplanned hospital admission or urgent return to the operating room. A χ2 test, with statistical significance defined as P < .05, was used to assess the association between the incidence of wound complications and each of the identified variables.
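As a simple illustration of the analysis just described, the sketch below runs a χ2 test of association between one binary risk factor and wound complications. The counts are hypothetical, chosen only to total 115 patients and 48 complications; they are not the actual study data.

```python
# Illustrative sketch only: chi-square test of association between a binary
# risk factor and wound complications. Counts are hypothetical, not study data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: risk factor absent / present; columns: no complication / complication.
table = np.array([[50, 20],
                  [17, 28]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}, dof = {dof}")
```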
Results
One hundred fifteen bilateral breast reduction surgeries were performed at MRVAMC over a 20-year period. Patient median age was 43 years. Median combined specimen weight was 1272 g. Forty-eight (41.7%) wound complications were documented, including 8 (7%) major complications. Most complications were identified in the first 7 years of the study before the new protocol and consult template became active. The new template resulted in the local complication rate dropping from 62% (July 2000-June 2007) to 26% (July 2007-June 2020). BMI > 32 (P = .03) and active nicotine use (P = .004) were found to be statistically significant independent risk factors for wound complications. Median BMI for all patients was 30. DM status (P = .22), skin incision pattern (P = .25), and pedicle location (P = .13) were not found to be predictors of wound complications (Table). There was no significant change in the incidence of major complications before and after the new protocols were enforced.
Discussion
Breast reduction surgery is an elective reconstructive option to treat symptomatic macromastia. There are several accepted techniques for the procedure; the blood supply (pedicle) to the nipple can vary, and the visible scars can follow a horizontal, vertical, or Wise pattern. Technique is usually based on surgeon training, comfort, and preference. Known complications specific to this operation include asymmetry, changes in nipple sensation, unattractive scars, diminished ability to breastfeed, and wound complications.5-7 Wound complications include seroma, hematoma, dehiscence, infection, wound breakdown, skin necrosis, and nipple necrosis.
This study focused on wound complications with the objective of identifying and modifying risk factors. Two known risk factors documented in the literature, nicotine use and obesity, already had been addressed by our service, and results were known anecdotally but had not been previously verified. This study also looked at other potential risk factors, including the pedicle location, skin incision, and DM status.
Residents or fellows participated in all the surgeries. An outcome analysis from The American College of Surgeons National Surgical Quality Improvement Program database from 2005 to 2011 found that resident participation was associated with morbidity, including wound complications.8 This study was performed at a federal hospital with a complexity level 1a rating, which is designated based on the highest level of patient volume, risk, teaching, research, intensive care unit beds, and specialty services.9 The hospital is closely affiliated with a level 1 trauma center and teaching hospital; therefore, resident and fellow participation is not a modifiable risk factor.
This study did not find an increased risk of wound complications in patients with DM, which had been identified as an independent risk factor in a prior study.10 DM was indicated in only 3 histories, and all 3 patients had perioperative hemoglobin A1c levels < 8%. Perioperative antibiotics were documented in 99 of 116 surgical records; however, we did not include this variable in the analysis because the operative reports from the first year of the study were incomplete.
Smoking is a known risk factor for local wound complications in breast reduction surgery.10-15 The VA has a smoking cessation program through its mental health service that provides counseling and medication treatment options, including nicotine replacement, bupropion, and varenicline. We require patients to be at least 4 weeks nicotine free before surgery, which has been previously recommended in the literature.16
Existing studies comparing the traditional Wise pattern/inferior pedicle with the vertical pattern/superior-medial pedicle did not find an increased risk of wound complications.17-19 Our study analyzed incision and pedicle separately because surgical technique varied among the surgeons in the study; sometimes the traditional Wise pattern was combined with the less traditional superior-medial pedicle. We did not find a statistically significant difference when comparing incisions and pedicle locations, which suggests that incision type and the source of blood supply to the nipple are not the determining factors for wound complications in the early postoperative period.
Obesity is a known risk factor for local wound complications.12,13,15,20-22 Studies have shown that patients who are obese benefit from breast reduction surgery, and authors have argued against restricting surgery in these higher-risk patients.4,23-25 Patients usually report decades of macromastia symptoms at consultation, so we believe delaying the surgical procedure to bring patients to a safer risk profile is in their best interest. We chose a BMI cutoff of 32 as a realistic value rather than 30, the conventional definition of obesity. Patients at MRVAMC have access to MOVE!, a weight loss management program through primary care. We believe in being reasonable; if a patient makes a significant improvement in her health but falls short of the required cutoff, we will still consider offering the surgical procedure after a full explanation of the surgical risks.
Wound complications, especially those that require admission or frequent appointments, can seriously disrupt a patient’s life, creating unnecessary hardship and expense from time lost from work, travel, and child care. MRVAMC has a catchment area the size of North Carolina, so many of our patients travel hours for their appointments. The added scars and deformity from wound dehiscence and debridement can lead to asymmetry, widened scars, and future revision operations. Multiple clinic appointments for wound care not only affect the individual patient but also limit access for all patients in a health care environment with high patient volume and limited providers, operating room time, and clinic appointments. As a result, minimizing predictable wound complications benefits the entire system.
Limitations and Strengths
This retrospective review involved multiple surgeons, including faculty and trainees, in the consultation, surgery, and postoperative care of patients over a 20-year period; therefore, consistency in documentation is lacking. In addition, we were limited to the information available in the charts. For example, wound size and laterality were not consistently documented. The MRVAMC complication rate was consistent with the current literature (range, 14%-52%).12,18,20,24
The major strength of the study is that the veterans tend to stay within the VA, which makes complications easier to identify and follow. Patients who do not present initially to their surgeon due to travel limitations will typically contact their primary care provider or present to their local VA urgent care or emergency department provider, who will route the patient back to the surgical specialty service through the electronic health record.
Conclusions
Breast reduction surgery has a high wound complication rate, which can be predicted and improved on so that patients can receive their indicated surgical procedure with minimal inconvenience and downtime. This review confirms that preoperative weight loss and nicotine cessation were the appropriate focus of the MRVAMC plastic surgery service’s efforts to achieve a safer surgical experience. We will continue to enforce our protocol and encourage patients who are interested in breast reduction surgery and fall outside the requirements to work with their primary care provider on smoking cessation and weight loss through better nutrition and physical activity.
Acknowledgment
This manuscript is the result of work supported with resources and the use of facilities at the North Florida/South Georgia Veterans Health System in Gainesville, Florida.
1. US Department of Veterans Affairs. Statistics at a glance. Published February 2020. Accessed June 18, 2021. https://www.va.gov/vetdata/docs/Quickfacts/Homepage_slideshow_4_6_20.PDF
2. US Department of Veterans Affairs, National Center for Veterans Analysis and Statistics. Women veterans report: the past, present, and future of women veterans. Published February 2017. Accessed June 18, 2020. https://www.va.gov/vetdata/docs/specialreports/women_veterans_2015_final.pdf
3. Crittenden TA, Watson DI, Ratcliffe J, Griffin PA, Dean NR. Outcomes of breast reduction surgery using the breast-q: a prospective study and comparison with normative data. Plast Reconstr Surg. 2019;144(5):1034-1044. doi:10.1097/PRS.0000000000006114
4. Thoma A, Sprague S, Veltri K, Duku E, Furlong W. A prospective study of patients undergoing breast reduction surgery: health-related quality of life and clinical outcomes. Plast Reconstr Surg. 2007;120(1):13-26. doi:10.1097/01.prs.0000263370.94191.90
5. Nuzzi LC, Firriolo JM, Pike CM, DiVasta AD, Labow BI. Complications and quality of life following reduction mammaplasty in adolescents and young women. Plast Reconstr Surg. 2019;144(3):572-581. doi:10.1097/PRS.0000000000005907
6. Hall-Findlay EJ, Shestak KC. Breast reduction. Plast Reconstr Surg. 2015;136(4):531e-544e. doi:10.1097/PRS.0000000000001622
7. Kraut RY, Brown E, Korownyk C, et al. The impact of breast reduction surgery on breastfeeding: systematic review of observational studies. PLoS One. 2017;12(10):e0186591. doi:10.1371/journal.pone.0186591
8. Fischer JP, Wes AM, Kovach SJ. The impact of surgical resident participation in breast reduction surgery--outcome analysis from the 2005-2011 ACS-NSQIP datasets. J Plast Surg Hand Surg. 2014;48(5):315-321. doi:10.3109/2000656X.2014.882345
9. Site Facility Name and Complexity Summary of VHA Facility. Accessed June 18, 2021. https://www.vendorportal.ecms.va.gov/FBODocumentServer/DocumentServer.aspx?DocumentId=2793591&FileName=VA118-16-R-1059-A00002002.docx
10. Lewin R, Göransson M, Elander A, Thorarinsson A, Lundberg J, Lidén M. Risk factors for complications after breast reduction surgery. J Plast Surg Hand Surg. 2014;48(1):10-14. doi:10.3109/2000656X.2013.791625
11. Cunningham BL, Gear AJ, Kerrigan CL, Collins ED. Analysis of breast reduction complications derived from the BRAVO study. Plast Reconstr Surg. 2005;115(6):1597-1604. doi:10.1097/01.prs.0000160695.33457.db
12. Karamanos E, Wei B, Siddiqui A, Rubinfeld I. Tobacco use and body mass index as predictors of outcomes in patients undergoing breast reduction mammoplasty. Ann Plast Surg. 2015;75(4):383-387. doi:10.1097/SAP.0000000000000192
13. Manahan MA, Buretta KJ, Chang D, Mithani SK, Mallalieu J, Shermak MA. An outcomes analysis of 2142 breast reduction procedures. Ann Plast Surg. 2015;74(3):289-292. doi:10.1097/SAP.0b013e31829d2261
14. Hillam JS, Borsting EA, Chim JH, Thaller SR. Smoking as a risk factor for breast reduction: an analysis of 13,503 cases. J Plast Reconstr Aesthet Surg. 2017;70(6):734-740. doi:10.1016/j.bjps.2016.12.012
15. Zhang MX, Chen CY, Fang QQ, et al. Risk factors for complications after reduction mammoplasty: a meta-analysis. PLoS One. 2016;11(12):e0167746. doi:10.1371/journal.pone.0167746
16. Sørensen LT. Wound healing and infection in surgery: the pathophysiological impact of smoking, smoking cessation, and nicotine replacement therapy: a systematic review. Ann Surg. 2012;255(6):1069-1079. doi:10.1097/SLA.0b013e31824f632d
17. Antony AK, Yegiyants SS, Danielson KK, et al. A matched cohort study of superomedial pedicle vertical scar breast reduction (100 breasts) and traditional inferior pedicle Wise-pattern reduction (100 breasts): an outcomes study over 3 years. Plast Reconstr Surg. 2013;132(5):1068-1076. doi:10.1097/PRS.0b013e3182a48b2d
18. Hunter-Smith DJ, Smoll NR, Marne B, Maung H, Findlay MW. Comparing breast-reduction techniques: time-to-event analysis and recommendations. Aesthetic Plast Surg. 2012;36(3):600-606. doi:10.1007/s00266-011-9860-3
19. Ogunleye AA, Leroux O, Morrison N, Preminger AB. Complications after reduction mammaplasty: a comparison of wise pattern/inferior pedicle and vertical scar/superomedial pedicle. Ann Plast Surg. 2017;79(1):13-16. doi:10.1097/SAP.0000000000001059
20. Bauermeister AJ, Gill K, Zuriarrain A, Earle SA, Newman MI. Reduction mammaplasty with superomedial pedicle technique: a literature review and retrospective analysis of 938 consecutive breast reductions. J Plast Reconstr Aesthet Surg. 2019;72(3):410-418. doi:10.1016/j.bjps.2018.12.004
21. Nelson JA, Fischer JP, Chung CU, et al. Obesity and early complications following reduction mammaplasty: an analysis of 4545 patients from the 2005-2011 NSQIP datasets. J Plast Surg Hand Surg. 2014;48(5):334-339. doi:10.3109/2000656X.2014.886582
22. Kreithen J, Caffee H, Rosenberg J, et al. A comparison of the LeJour and Wise pattern methods of breast reduction. Ann Plast Surg. 2005;54(3):236-241. doi:10.3109/2000656X.2014.886582
23. Güemes A, Pérez E, Sousa R, et al. Quality of life and alleviation of symptoms after breast reduction for macromastia in obese patients: is surgery worth it? Aesthetic Plast Surg. 2016;40(1):62-70. doi:10.1007/s00266-015-0601-x
24. Setälä L, Papp A, Joukainen S, et al. Obesity and complications in breast reduction surgery: are restrictions justified? J Plast Reconstr Aesthet Surg. 2009;62(2):195-199. doi:10.1016/j.bjps.2007.10.043
25. Shah R, Al-Ajam Y, Stott D, Kang N. Obesity in mammaplasty: a study of complications following breast reduction. J Plast Reconstr Aesthet Surg. 2011;64(4):508-514. doi:10.1016/j.bjps.2007.10.043
Women make up an estimated 10% of the veteran population.1 The US Department of Veterans Affairs (VA) projected that there would be an increase of 18,000 female veterans per year for 10 years based on 2015 data. The number of women veterans enrolled in the VA health care increased from 397,024 to 729,989 (83.9%) between 2005 and 2015.2 This rise in the number of enrolled women veterans also increased the demand for female-specific health care services, such as breast reduction surgery, a reconstructive procedure provided at the Malcom Randall VA Medical Center (MRVAMC) federal teaching hospital in Gainesville, Florida.
Patients who experience symptomatic macromastia will report a history of neck and shoulder pain, shoulder grooving from bra straps, inframammary intertrigo, difficulty finding clothes that fit, and discomfort participating in sports. For the treatment of symptomatic macromastia, patients report a high satisfaction rate after breast reduction surgery.3-5 Unfortunately, the complications from the surgery can significantly disrupt a woman’s life due to previously unplanned hospital admissions, clinic appointments, wound care, time off work, and poor aesthetic outcome. Faculty awareness of a large number of complications for patients after breast reduction surgery prompted the MRVAMC Plastic Surgery Service to establish a stricter surgical screening protocol using body mass index (BMI) values and negative nicotine status to help patients be healthier and reduce the potential risk before offering surgery. A medical literature search did not find an existing study on veteran-specific breast reduction surgery.
Methods
The University of Florida and North Florida/South Georgia Veterans Health System Institutional Review Board approved a retrospective chart review of all breast reduction surgeries performed at MRVAMC over a 20-year period (July 1, 2000-June 30, 2020). Electronic health records were queried for all primary bilateral breast reduction surgeries performed for symptomatic macromastia using Current Procedural Terminology code 19318. Potentially modifiable or predictable risk factors for wound complications were recorded: nicotine status, BMI, diabetes mellitus (DM) status, skin incision pattern, and pedicle location. Skin incision patterns were either vertical (periareolar plus a vertical scar from the areola to the inframammary fold) or traditional Wise pattern (also known as anchor pattern: periareolar scar, vertical scar to inframammary fold, plus a horizontal scar along the inframammary fold) as seen in Figures 1 and 2. The pedicle is the source of blood supply to the nipple, which was documented as either from the inferior aspect or the superior or superior/medial aspect.
For this study, the blood supply from the superior and superior/medial was logged in the same category. Records were reviewed 3 months after surgery for documentation of local wound complications, such as hematoma, infection, wound breakdown, skin necrosis, and nipple necrosis. Major complications were defined as requiring an unplanned hospital admission or urgent return to the operating room. A χ2 test using a P value of < .05 was used to determine statistical significance between the incidence of wound complications and the individually identifiable variables.
Results
One hundred fifteen bilateral breast reduction surgeries were performed at MRVAMC over a 20-year period. Patient median age was 43 years. Median combined specimen weight was 1272 g. Forty-eight (41.7%) wound complications were documented, including 8 (7%) major complications. Most complications were identified in the first 7 years of the study before the new protocol and consult template became active. The new template resulted in the local complication rate dropping from 62% (July 2000-June 2007) to 26% (July 2007-June 2020). BMI > 32 (P = .03) and active nicotine use (P = .004) were found to be statistically significant independent risk factors for wound complications. Median BMI for all patients was 30. DM status (P = .22), skin incision pattern (P = .25), and pedicle location (P = .13) were not found to be predictors of wound complications (Table). There was no significant change in the incidence of major complications before and after the new protocols were enforced.
Discussion
Breast reduction surgery is an elective reconstructive option to treat symptomatic macromastia. There are several accepted ways to do the reduction surgical procedure where the blood supply (pedicle) to the nipple can vary and the visible scars can be in a horizontal, vertical, or Wise pattern. Technique is usually based on surgeon training, comfort, and preference. There are several known complications specific to this operation that include asymmetry, changes in nipple sensation, unattractive scars, diminished ability to breastfeed, and wound complications.5-7 Wound complications include seroma, hematoma, dehiscence, infection, wound breakdown, skin necrosis, and nipple necrosis.
This study focused on wound complications with the objective of identifying and modifying risk factors. Two known risk factors documented in the literature, nicotine use and obesity, already had been addressed by our service, and results were known anecdotally but had not been previously verified. This study also looked at other potential risk factors, including the pedicle location, skin incision, and DM status.
Residents or fellows participated in all the surgeries. An outcome analysis from The American College of Surgeons National Surgical Quality Improvement Program database from 2005 to 2011 found that resident participation was associated with morbidity, including wound complications.8 This study was performed at a federal hospital with a complexity level 1a rating, which is designated based on the highest level of patient volume, risk, teaching, research, intensive care unit beds, and specialty services.9 The hospital is closely affiliated with a level 1 trauma center and teaching hospital; therefore, resident and fellow participation is not a modifiable risk factor.
This study did not find an increased risk of wound complications in patients with DM, which had been found to be an independent risk factor in a prior study.10 DM status was indicated in only 3 histories, and all 3 patients had perioperative hemoglobin A1c levels < 8%. Perioperative antibiotic administration was documented in 99 of 116 surgical records; however, we did not include this in the analysis because the operative reports from the first year of the study were incomplete.
Smoking is a known risk factor for local wound complications in breast reduction surgery.10-15 The VA has a smoking cessation program through its mental health service that provides counseling and medication treatment options, including nicotine replacement, bupropion, and varenicline. We require patients to be at least 4 weeks nicotine free before surgery, which has been previously recommended in the literature.16
Existing studies comparing the traditional Wise pattern/inferior pedicle with the vertical pattern/superior medial pedicle did not find an increased risk of wound complications.17-19 Our study analyzed incision pattern and pedicle location separately because surgical technique varied among the surgeons in the study; the traditional Wise pattern was sometimes combined with the less traditional superior-medial pedicle. We did not find a statistical difference for either incision pattern or pedicle location, which suggests that incision type and the source of blood supply to the nipple are not the determining factors for wound complications in the early postoperative period.
Obesity is a known risk factor for local wound complications.12,13,15,20-22 Studies have shown that patients who are obese benefit from breast reduction surgery, and authors have argued against restricting surgery in these higher risk patients.4,23-25 Patients usually report decades of macromastia symptoms at consultation, so we believe delaying the surgical procedure until patients reach a safer risk profile is in their best interest. We chose a cutoff BMI of 32 as a realistic value rather than 30, the conventional threshold for obesity. Patients at MRVAMC have access to MOVE!, a weight loss management program through primary care. We believe in being reasonable: if a patient makes a significant improvement in her health but falls short of the required cutoff, we will still consider offering the surgical procedure after a full explanation of the surgical risks.
Wound complications, especially those that require admission or frequent appointments, can seriously disrupt a patient’s life, creating unnecessary hardship and expense in time lost from work, travel, and child care. MRVAMC has a catchment area the size of North Carolina, so many of our patients travel hours for their appointments. The added scars and deformity from wound dehiscence and debridement can lead to asymmetry, widened scars, and future revision operations. Multiple clinic appointments for wound care not only affect the individual patient but also limit access for all patients in a health care environment with high patient volume and limited providers, operating room time, and clinic appointments. As a result, minimizing predictable wound complications benefits the entire system.
Limitations and Strengths
This retrospective review spanned 20 years and involved multiple surgeons, including faculty and trainees, in the consultation, surgery, and postoperative care of patients; therefore, consistency in documentation is lacking. In addition, we were limited to the information available in the charts. For example, wound size and laterality were not consistently documented. The MRVAMC complication rate was consistent with the current literature (range, 14-52%).12,18,20,24
The major strength of the study is that the veterans tend to stay within the VA, which makes complications easier to identify and follow. Patients who do not present initially to their surgeon due to travel limitations will typically contact their primary care provider or present to their local VA urgent care or emergency department provider, who will route the patient back to the surgical specialty service through the electronic health record.
Conclusions
Breast reduction surgery has a high wound complication rate, which can be predicted and improved on so that patients can receive their indicated surgical procedure with minimal inconvenience and downtime. This review confirms that preoperative weight loss and nicotine cessation were the appropriate focus of the MRVAMC plastic surgery service’s efforts to achieve a safer surgical experience. We will continue to enforce our protocol and encourage patients who are interested in breast reduction surgery and fall outside the requirements to work with their primary care provider on smoking cessation and weight loss through better nutrition and physical activity.
Acknowledgment
This manuscript is the result of work supported with resources and the use of facilities at the North Florida/South Georgia Veterans Health System in Gainesville, Florida.
1. US Department of Veterans Affairs. Statistics at a glance. Published February 2020. Accessed June 18, 2021. https://www.va.gov/vetdata/docs/Quickfacts/Homepage_slideshow_4_6_20.PDF
2. US Department of Veterans Affairs, National Center for Veterans Analysis and Statistics. Women veterans report: the past, present, and future of women veterans. Published February 2017. Accessed June 18, 2020. https://www.va.gov/vetdata/docs/specialreports/women_veterans_2015_final.pdf
3. Crittenden TA, Watson DI, Ratcliffe J, Griffin PA, Dean NR. Outcomes of breast reduction surgery using the breast-q: a prospective study and comparison with normative data. Plast Reconstr Surg. 2019;144(5):1034-1044. doi:10.1097/PRS.0000000000006114
4. Thoma A, Sprague S, Veltri K, Duku E, Furlong W. A prospective study of patients undergoing breast reduction surgery: health-related quality of life and clinical outcomes. Plast Reconstr Surg. 2007;120(1):13-26. doi:10.1097/01.prs.0000263370.94191.90
5. Nuzzi LC, Firriolo JM, Pike CM, DiVasta AD, Labow BI. Complications and quality of life following reduction mammaplasty in adolescents and young women. Plast Reconstr Surg. 2019;144(3):572-581. doi:10.1097/PRS.0000000000005907
6. Hall-Findlay EJ, Shestak KC. Breast reduction. Plast Reconstr Surg. 2015;136(4):531e-544e. doi:10.1097/PRS.0000000000001622
7. Kraut RY, Brown E, Korownyk C, et al. The impact of breast reduction surgery on breastfeeding: systematic review of observational studies. PLoS One. 2017;12(10):e0186591. doi:10.1371/journal.pone.0186591
8. Fischer JP, Wes AM, Kovach SJ. The impact of surgical resident participation in breast reduction surgery--outcome analysis from the 2005-2011 ACS-NSQIP datasets. J Plast Surg Hand Surg. 2014;48(5):315-321. doi:10.3109/2000656X.2014.882345
9. Site Facility Name and Complexity Summary of VHA Facility. Accessed June 18, 2021. https://www.vendorportal.ecms.va.gov/FBODocumentServer/DocumentServer.aspx?DocumentId=2793591&FileName=VA118-16-R-1059-A00002002.docx
10. Lewin R, Göransson M, Elander A, Thorarinsson A, Lundberg J, Lidén M. Risk factors for complications after breast reduction surgery. J Plast Surg Hand Surg. 2014;48(1):10-14. doi:10.3109/2000656X.2013.791625
11. Cunningham BL, Gear AJ, Kerrigan CL, Collins ED. Analysis of breast reduction complications derived from the BRAVO study. Plast Reconstr Surg. 2005;115(6):1597-1604. doi:10.1097/01.prs.0000160695.33457.db
12. Karamanos E, Wei B, Siddiqui A, Rubinfeld I. Tobacco use and body mass index as predictors of outcomes in patients undergoing breast reduction mammoplasty. Ann Plast Surg. 2015;75(4):383-387. doi:10.1097/SAP.0000000000000192
13. Manahan MA, Buretta KJ, Chang D, Mithani SK, Mallalieu J, Shermak MA. An outcomes analysis of 2142 breast reduction procedures. Ann Plast Surg. 2015;74(3):289-292. doi:10.1097/SAP.0b013e31829d2261
14. Hillam JS, Borsting EA, Chim JH, Thaller SR. Smoking as a risk factor for breast reduction: an analysis of 13,503 cases. J Plast Reconstr Aesthet Surg. 2017;70(6):734-740. doi:10.1016/j.bjps.2016.12.012
15. Zhang MX, Chen CY, Fang QQ, et al. Risk factors for complications after reduction mammoplasty: a meta-analysis. PLoS One. 2016;11(12):e0167746. doi:10.1371/journal.pone.0167746
16. Sørensen LT. Wound healing and infection in surgery: the pathophysiological impact of smoking, smoking cessation, and nicotine replacement therapy: a systematic review. Ann Surg. 2012;255(6):1069-1079. doi:10.1097/SLA.0b013e31824f632d
17. Antony AK, Yegiyants SS, Danielson KK, et al. A matched cohort study of superomedial pedicle vertical scar breast reduction (100 breasts) and traditional inferior pedicle Wise-pattern reduction (100 breasts): an outcomes study over 3 years. Plast Reconstr Surg. 2013;132(5):1068-1076. doi:10.1097/PRS.0b013e3182a48b2d
18. Hunter-Smith DJ, Smoll NR, Marne B, Maung H, Findlay MW. Comparing breast-reduction techniques: time-to-event analysis and recommendations. Aesthetic Plast Surg. 2012;36(3):600-606. doi:10.1007/s00266-011-9860-3
19. Ogunleye AA, Leroux O, Morrison N, Preminger AB. Complications after reduction mammaplasty: a comparison of wise pattern/inferior pedicle and vertical scar/superomedial pedicle. Ann Plast Surg. 2017;79(1):13-16. doi:10.1097/SAP.0000000000001059
20. Bauermeister AJ, Gill K, Zuriarrain A, Earle SA, Newman MI. Reduction mammaplasty with superomedial pedicle technique: a literature review and retrospective analysis of 938 consecutive breast reductions. J Plast Reconstr Aesthet Surg. 2019;72(3):410-418. doi:10.1016/j.bjps.2018.12.004
21. Nelson JA, Fischer JP, Chung CU, et al. Obesity and early complications following reduction mammaplasty: an analysis of 4545 patients from the 2005-2011 NSQIP datasets. J Plast Surg Hand Surg. 2014;48(5):334-339. doi:10.3109/2000656X.2014.886582
22. Kreithen J, Caffee H, Rosenberg J, et al. A comparison of the LeJour and Wise pattern methods of breast reduction. Ann Plast Surg. 2005;54(3):236-241. doi:10.3109/2000656X.2014.886582
23. Güemes A, Pérez E, Sousa R, et al. Quality of life and alleviation of symptoms after breast reduction for macromastia in obese patients: is surgery worth it? Aesthetic Plast Surg. 2016;40(1):62-70. doi:10.1007/s00266-015-0601-x
24. Setälä L, Papp A, Joukainen S, et al. Obesity and complications in breast reduction surgery: are restrictions justified? J Plast Reconstr Aesthet Surg. 2009;62(2):195-199. doi:10.1016/j.bjps.2007.10.043
25. Shah R, Al-Ajam Y, Stott D, Kang N. Obesity in mammaplasty: a study of complications following breast reduction. J Plast Reconstr Aesthet Surg. 2011;64(4):508-514. doi:10.1016/j.bjps.2007.10.043
Assessment of a Medication Deprescribing Tool on Polypharmacy and Cost Avoidance
According to the Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), the use of prescription drugs has increased over the past half century. Although prescription drugs have played an important role in preventing, controlling, and delaying the onset or progression of disease, their growing use also has posed many risks.1 One ramification of this growth is polypharmacy, which has no universal, clear definition. In general, it can be described as the concurrent use of multiple medications by a single patient to treat one or more medical ailments. Five or more medications taken simultaneously is the most common definition to date, but this is just one of many accepted definitions, which vary from one health care facility to another.1,2
Regardless of the cutoffs established to indicate polypharmacy, its incidence can result in poor and potentially harmful health outcomes. Polypharmacy increases the risk of experiencing adverse drug events (ADEs), drug-drug interactions (DDIs), geriatric-related syndromes, falls, hospitalization, and mortality. Issues with adherence may begin to unfold secondary to increased pill burden. Both the patient and the health care system may encounter financial strain, as polypharmacy can lead to unnecessary and essentially preventable costs of care. When evaluating the likelihood of polypharmacy based on age group, NCHS found that 47.5% of patients taking ≥ 5 medications were aged ≥ 65 years.1-5 This indicates that polypharmacy is of great concern in the geriatric population, which also represents a large proportion of individuals accessing Veterans Health Administration (VHA) care.
Deprescribing
Deprescribing is the act of withdrawing or discontinuing potentially inappropriate medications (PIM): medications used by older patients for which the risk of ADEs generally outweighs the clinical benefit. Deprescribing is an effective tool for managing or reducing polypharmacy, and a variety of tools have been created to simplify it. Some tools explicitly identify PIM and are widely familiar in medical practice, such as the Beers Criteria developed in 1991 and the Screening Tool to Alert to Right Treatment/Screening Tool of Older Persons’ Prescriptions (START/STOPP) criteria created in 2003. Other tools that are less commonplace but equally useful are MedStopper and Deprescribing.org. The former, launched in 2015, is a Canadian online system that provides risk assessments for medications with guidance for tapering or stopping a drug when continuation presents more risk than benefit.5-7 The latter is a website developed by a physician, a pharmacist, and their research teams that serves as an exchange hub for deprescribing information.
In 2016, the VIONE (Vital, Important, Optional, Not indicated/treatment complete, and Every medication has an indication) deprescribing tool was developed by Saraswathy Battar, MD, at Central Arkansas Veterans Healthcare System (CAVHS) in Little Rock, as a system that could go beyond medication reconciliation (Table 1). Health care providers (HCPs) and pharmacists evaluate each medication that a patient has been prescribed and place it in a VIONE category. Prescribers may then take the opportunity to deprescribe or discontinue medications if deemed appropriate based on their clinical assessments and shared decision making.8 Traditionally, medication reconciliation involves obtaining a complete and accurate list of medications as reported by a patient or caregiver to an HCP. VIONE encourages HCPs and pharmacists not only to ensure that medication lists are accurate but also that each medication reported is appropriate for continued use. In other words, VIONE is meant to help implement deprescribing at opportune times. More than 14,000 medications have been deprescribed using the VIONE method, resulting in more than $2,000,000 of annualized cost avoidance after just 1 year of implementation at CAVHS.9
VIONE consists of 2 major components in the Computerized Patient Record System (CPRS): a note template and a dropdown discontinuation menu. The template captures patient allergies, pertinent laboratory data, the patient’s active problem list and applicable diagnoses, and the active medication list. Patient aligned care team (PACT) pharmacists use the information captured in the template to conduct medication reconciliations and polypharmacy reviews, and each medication is categorized in VIONE using data collected during these reviews. The dropdown menu delineates the reasons for discontinuation: optional, dose decrease, no diagnosis, not indicated/treatment complete, discontinue alternate medication prescribed, and patient reported no longer taking. The discontinuation menu allows PACT pharmacists and physicians to choose 1 VIONE option per medication to clarify the reason for discontinuation, and VIONE-based discontinuations are recorded in CPRS and identified as deprescribed.
At the time of this project, > 30 US Department of Veterans Affairs (VA) facilities had adopted VIONE. Use of VIONE at VA Southern Nevada Healthcare System (VASNHS) in North Las Vegas has been incorporated in the everyday practices of home-based primary care pharmacists and physicians but has yet to be implemented in other areas of the facility. The purpose of this project was to determine the impact of the VIONE tool on polypharmacy and cost avoidance at VASNHS when used by primary care physicians (PCPs) and PACT primary care clinics.
Methods
Veterans receiving care at VASNHS aged ≥ 65 years with ≥ 10 active medications noted in CPRS were included in this project. PACT pharmacists and physicians were educated on the proper use of the VIONE tool prior to its implementation. Education included a 15-minute slide presentation followed by dissemination of a 1-page VIONE tool handout during a PACT all-staff clinic meeting.
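As a purely illustrative sketch of the cohort selection step (the actual cohort was identified from CPRS data; the field names below are hypothetical), a simple Python filter applying the two inclusion criteria might look like this:

def meets_inclusion_criteria(patient):
    # Hypothetical record fields: age in years and count of active medications.
    return patient["age"] >= 65 and patient["active_medication_count"] >= 10

patients = [
    {"id": 1, "age": 72, "active_medication_count": 12},  # included
    {"id": 2, "age": 58, "active_medication_count": 14},  # excluded: age < 65
    {"id": 3, "age": 80, "active_medication_count": 7},   # excluded: < 10 medications
]
cohort = [p for p in patients if meets_inclusion_criteria(p)]
print(len(cohort))  # -> 1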
Data were collected for 3 months before and after the intervention. Data were made available for assessment by the Automated Data Processing Application Coordinator (ADPAC) at VASNHS, who created and generated an Excel spreadsheet report listing all medications deprescribed using the VIONE method. The primary endpoint was the total number of medications discontinued using the VIONE template and/or discontinuation menu. For the purpose of this project, appropriate discontinuation was defined as any prescription (excluding medical supplies) deprescribed by pharmacists or PCPs who had received VIONE education.
The secondary endpoint was the estimated annualized cost avoidance for the facility (Figure). The calculation does not include medications discontinued due to the prescription of an alternative medication or a dose decrease, since these VIONE selections imply that a new prescription or order was placed and the original prescription was not deprescribed. Annualized cost avoidance was determined with the VIONE dashboard, a database that retrospectively gathers information regarding patients at risk of polypharmacy, polypharmacy-related ADEs, and cost. The author manually adjusted various parameters on the Veterans Integrated Service Network 15 VIONE dashboard to obtain data specific to this project; these parameters allowed selection of service sections, specific staff members, and the option to include or exclude chronic or nonchronic medications. The annualized cost avoidance figure was then compared with raw data pulled by a VIONE dashboard correspondent to ensure the manual calculation was accurate. Finally, the 5 most common classes of medications deprescribed were identified to provide a clearer picture of the types of medications being discontinued using the VIONE method.
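The exact annualization formula is shown in the Figure and was computed through the VIONE dashboard; the Python sketch below is only one plausible, hypothetical way to annualize cost avoidance while applying the exclusions used in this project (dose decreases and alternative-medication switches, as described above, plus the short-term prescriptions and antimicrobial agents excluded from the cost figure in the Results). The record fields and costs are illustrative assumptions, not the dashboard’s actual data model.

# Assumed annualization: 30-day fill cost x 12 for each qualifying chronic medication.
EXCLUDED_REASONS = {"dose decrease", "discontinue alternate medication prescribed"}

def annualized_cost_avoidance(deprescribed_records):
    total = 0.0
    for rec in deprescribed_records:
        if rec["reason"] in EXCLUDED_REASONS:
            continue  # a replacement order or dose change offsets the savings
        if rec["is_short_term"] or rec["is_antimicrobial"]:
            continue  # excluded per the project's cost calculation
        total += rec["cost_per_30_day_fill"] * 12
    return total

records = [  # hypothetical examples
    {"reason": "no diagnosis", "is_short_term": False, "is_antimicrobial": False,
     "cost_per_30_day_fill": 4.50},
    {"reason": "dose decrease", "is_short_term": False, "is_antimicrobial": False,
     "cost_per_30_day_fill": 12.00},
]
print(f"${annualized_cost_avoidance(records):,.2f}")  # -> $54.00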
Results
A total of 2442 veterans met inclusion criteria, and the VIONE method was applied to 598 of them between late October 2018 and January 2019. Each of the 13 PACT pharmacists contacted at least 10 randomly selected veterans by telephone (at least 130 in total) to perform polypharmacy reviews using the VIONE note template; the discontinuation menu was used when a medication qualified to be deprescribed. After 3 months, 1986 prescriptions had been deprescribed using VIONE, of which 1060 were considered appropriately deprescribed (Table 2). The 13 PACT pharmacists deprescribed 361 medications, and the 29 PACT physicians deprescribed 699. These prescriptions were then separated into medication categories to determine the most commonly discontinued classes. Vitamins and supplements were deprescribed most frequently (19.4%), followed by pain medications (15.5%), antimicrobial agents (9.6%), antihypertensive medications (9.2%), and diabetes medications (6.4%) (Table 3). These top 5 medication categories accounted for 60% of all medications appropriately deprescribed.
The estimated annualized cost avoidance for all medications deprescribed in the 3-month project period was $84,030.46. To provide the most appropriate and accurate calculation, acute or short-term prescriptions and antimicrobial agents were excluded from this figure. Medications prescribed short term typically are not suitable to continue for an extended period, and antimicrobial agents are normally associated with higher costs and could have inflated the facility’s cost avoidance estimate.
Discussion
The outcomes for the primary and secondary endpoints of this project illustrate that using VIONE in PACT primary care clinics had a notable impact on polypharmacy and cost avoidance over a short period. This outcome can be attributed to 2 significant effects of the deprescribing tool. First, VIONE’s simplicity allowed clinicians to incorporate daily use of the tool with minimal effort; a brief education session was all that was required to enable clinicians to work together successfully and exercise collaborative practice to promote deprescribing. Second, VIONE elicited a cascade of favorable effects that can improve patient safety and health outcomes. The tool aided in identification of PIM, which helped reduce polypharmacy and medication burden. The risk for DDIs and ADEs may decrease; therefore, the incidence of falls and the need for emergency department visits or inpatient care related to polypharmacy may decline. In theory, less complex medication regimens may alleviate adherence issues, avoid the various consequences of polypharmacy, and potentially improve disease management and quality of life for patients. Further studies are needed to substantiate the true effect of deprescribing on patient adherence and health outcomes.10
Reducing polypharmacy can lead to cost savings. Based on the results of this 3-month study, we expect that VASNHS would save more than $84,000 by reducing polypharmacy among its patients. Those savings can be funneled back into the health care system, and allotted to necessary patient care, prescriptions, and health care facility needs.
Limitations
There are some important limitations to this study. Definitions of polypharmacy vary from one health care facility to another; because the cutoffs differ, the measured prevalence of polypharmacy and the potential cost savings will also vary. Use of VIONE may be inconsistent among users who have not been educated or properly trained. For instance, VIONE selections are listed in the same menu as the standard CPRS discontinuation options, which may lead to discontinuation of medical supplies or laboratory orders instead of prescriptions.
The method of data analysis and project design may also have been subject to error. For example, the list of PCPs may have been inaccurate or outdated, which would result in over- or underrepresentation of those who contributed to data collection. Furthermore, there is some volatility in calculating total cost avoidance. For example, medications for chronic conditions that were taken only on an as-needed basis may have inflated the estimated savings, and either under- or overestimation could occur when dashboard parameters are adjusted without appropriate guidance. Because the VIONE dashboard parameters can be adjusted manually, calculations may differ between users.
Conclusions
The VIONE tool may be useful in improving patient safety through deprescribing and discontinuing PIM. Decreasing the number of medications being taken concomitantly by a patient and continuing only those that are imperative in their medical treatment is the first step to reducing the incidence of polypharmacy. Consequently, chances of ADEs or DDIs are lessened, especially among older individuals who are considered high risk for experiencing the detrimental effects that may ensue. These effects include geriatric-related syndromes, increased risk of fall, hospital visits or admissions, or death. Use of VIONE easily promotes collaboration among clinicians to evaluate medications eligible for discontinuation more regularly. If this deprescribing tool is continuously used, costs avoided can likely be maximized within VA health care systems.
The results of this project should serve as an incentive to improve prescribing practices and increase deprescribing efforts. They should prompt reevaluation of regimens and discontinuation of prescriptions that are not considered vital to continue. Finally, the results should substantiate the positive impact a deprescribing tool can have in averting the issues commonly associated with polypharmacy.
1. Centers for Disease Control and Prevention, National Center for Health Statistics. Health, United States, 2013: with special feature on prescription drugs. Published May 2014. Accessed May 13, 2021. https://www.cdc.gov/nchs/data/hus/hus13.pdf
2. Masnoon N, Shakib S, Kalisch-Ellett L, Caughey GE. What is polypharmacy? A systematic review of definitions. BMC Geriatr. 2017;17(1):230. Published 2017 Oct 10. doi:10.1186/s12877-017-0621-2
3. Parulekar MS, Rogers CK. Polypharmacy and mobility. In: Cifu DX, Lew HL, Oh-Park M., eds Geriatric Rehabilitation. Elsevier; 2018. doi:10.1016/B978-0-323-54454-2.12001-1
4. Rieckert A, Trampisch US, Klaaßen-Mielke R, et al. Polypharmacy in older patients with chronic diseases: a cross-sectional analysis of factors associated with excessive polypharmacy. BMC Fam Pract. 2018;19(1):113. Published 2018 Jul 18. doi:10.1186/s12875-018-0795-5
5. Thompson CA. New medication review method cuts veterans’ Rx load, saves millions. Am J Health Syst Pharm. 2018;75(8):502-503. doi:10.2146/news180023
6. Reeve E. Deprescribing tools: a review of the types of tools available to aid deprescribing in clinical practice. J Pharm Pract Res. 2020;50(1):98-107. doi:10.1002/jppr.1626
7. Fried TR, Niehoff KM, Street RL, et al. Effect of the Tool to Reduce Inappropriate Medications on Medication Communication and Deprescribing. J Am Geriatr Soc. 2017;65(10):2265-2271. doi:10.1111/jgs.15042
8. Battar S, Dickerson KR, Sedgwick C, et al. Understanding principles of high reliability organizations through the eyes of VIONE, a clinical program to improve patient safety by deprescribing potentially inappropriate medications and reducing polypharmacy. Fed Pract. 2019;36(12):564-568.
9. Battar S, Cmelik T, Dickerson K, Scott M. Experience better health with VIONE: a safe medication deprescribing tool [nonpublic source, not verified].
10. Ulley J, Harrop D, Ali A, et al. Deprescribing interventions and their impact on medication adherence in community-dwelling older adults with polypharmacy: a systematic review. BMC Geriatr. 2019;19(15):1-13.
According to the Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), the use of prescription drugs has increased in the past half century. Although prescription drugs have played an important role in preventing, controlling, and delaying onset or progression of disease, their growth in use also has posed many risks.1 One ramification of this growth is the occurrence of polypharmacy, which does not have a universal, clear definition. In general, it can be described as the concurrent use of multiple medications by a single patient to treat one or more medical ailments. Five or more medications taken simultaneously is the most common definition to date, but this is just one of many acceptable definitions and that varies from one health care facility to another.1,2
Regardless of the cutoffs established to indicate polypharmacy, its incidence can result in poor and potentially harmful health outcomes. Polypharmacy increases the risk of experiencing adverse drug events (ADEs), drug-drug interactions (DDIs), geriatric-related syndromes, falls, hospitalization, and mortality. Issues with adherence may begin to unfold secondary to increased pill burden. Both the patient and the health care system may encounter financial strain, as polypharmacy can lead to unnecessary and essentially preventable costs of care. When evaluating the likelihood of polypharmacy based on age group, NCHS found that 47.5% of patients taking ≥ 5 medications were aged ≥ 65 years.1-5 This indicates that polypharmacy is of great concern in the geriatric population, which also represents a large proportion of individuals accessing Veterans Health Administration (VHA) care.
Deprescibing
Deprescribing is the act of withdrawing or discontinuing potentially inappropriate medications (PIM), or medications used by older patients harboring ADEs that generally outweigh the clinical benefits of the drug. Deprescribing is an effective tool for managing or reducing polypharmacy. A variety of tools have been created whose sole purpose is to simplify deprescribing. Some tools explicitly identify PIM and are widely familiar in medical practice. Examples are the Beers Criteria developed in 1991 or Screening Tool to Alert Right Treatment/Screening Tool of Older Persons Prescriptions (START/STOPP) criteria created in 2003. Other tools that are less commonplace but equally as resourceful are MedStopper and Deprescribing.org. The former was launched in 2015 and is a Canadian online system that provides risk assessments for medications with guidance for tapering or stopping medications if continuation of the drug presents higher risk than benefit.5-7 The latter is a full-fledged website developed by a physician, a pharmacist, and their research teams that serves as an exchange hub for deprescribing information.
In 2016, the VIONE (Vital, Important, Optional, Not indicated/treatment complete, and Every medication has an indication) deprescribing tool was developed by Saraswathy Battar, MD, at Central Arkansas Veterans Healthcare System (CAVHS) in Little Rock, as a system that could go beyond medication reconciliation (Table 1). Health care providers (HCPs) and pharmacists evaluate each medication that a patient has been prescribed and places each medication in a VIONE category. Prescribers may then take the opportunity to deprescribe or discontinue medications if deemed appropriate based on their clinical assessments and shared decision making.8 Traditionally, medication reconciliation involves the process of obtaining a complete and accurate list of medications as reported by a patient or caregiver to a HCP. VIONE encourages HCPs and pharmacists not only to ensure medication lists are accurate, but also that each medication reported is appropriate for continued use. In other words, VIONE is meant to help implement deprescribing at opportune times. More than 14,000 medications have been deprescribed using the VIONE method, resulting in more than $2,000,000 of annualized cost avoidance after just 1 year of implementation at CAVHS.9
VIONE consists of 2 major components in the Computerized Patient Record System (CPRS): a template and a dropdown discontinuation menu. The template captured patient allergies, pertinent laboratory data, the patient’s active problem list and applicable diagnoses, and active medication list. Patient aligned care team (PACT) pharmacists used the information captured in the template to conduct medication reconciliations and polypharmacy reviews. Each medication is categorized in VIONE using data collected during reviews. A menu delineates reasons for discontinuation: optional, dose decrease, no diagnosis, not indicated/treatment complete, discontinue alternate medication prescribed, and patient reported no longer taking. The discontinuation menu allowed PACT pharmacists and physicians to choose 1 VIONE option per medication to clarify the reason for discontinuation. VIONE-based discontinuations are recorded in CPRS and identified as deprescribed.
At the time of this project, > 30 US Department of Veterans Affairs (VA) facilities had adopted VIONE. Use of VIONE at VA Southern Nevada Healthcare System (VASNHS) in North Las Vegas has been incorporated in the everyday practices of home-based primary care pharmacists and physicians but has yet to be implemented in other areas of the facility. The purpose of this project was to determine the impact of the VIONE tool on polypharmacy and cost avoidance at VASNHS when used by primary care physicians (PCPs) and PACT primary care clinics.
Methods
Veterans receiving care at VASNHS aged ≥ 65 years with ≥ 10 active medications noted in CPRS were included in this project. PACT pharmacists and physicians were educated on the proper use of the VIONE tool prior to its implementation. Education included a 15-minute slide presentation followed by dissemination of a 1-page VIONE tool handout during a PACT all-staff clinic meeting.
Data were collected for 3 months before and after the intervention. Data were made available for assessment by the Automated Data Processing Application Coordinator (ADPAC) at VASNHS. The ADPAC created and generated an Excel spreadsheet report, which listed all medications deprescribed using the VIONE method. The primary endpoint was the total number of medications discontinued using the VIONE template and/or discontinuation menu. For the purpose of this project, appropriate discontinuation was considered any prescription deprescribed, excluding medical supplies, by pharmacists and PCPs who received VIONE education.
The secondary endpoint was the estimated annualized cost avoidance for the facility (Figure). The calculation does not include medications discontinued due to the prescription of an alternative medication or dose decreases since these VIONE selections imply that a new prescription or order was placed and the original prescription was not deprescribed. Annualized cost avoidance was determined with use of the VIONE dashboard, a database that retrospectively gathers information regarding patients at risk of polypharmacy, polypharmacy-related ADEs, and cost. Manual adjustments were made to various parameters on the Veterans Integrated Service Network 15 VIONE dashboard by the author in order to obtain data specific to this project. These parameters allowed selection of service sections, specific staff members or the option to include or exclude chronic or nonchronic medications. The annualized cost avoidance figure was then compared to raw data pulled by a VIONE dashboard correspondent to ensure the manual calculation was accurate. Finally, the 5 most common classes of medications deprescribed were identified for information purposes and to provide a better postulation on the types of medications being discontinued using the VIONE method.
Results
A total of 2,442 veterans met inclusion criteria, and the VIONE method was applied to 598 between late October 2018 and January 2019. The 13 PACT pharmacists contacted at least 10 veterans each, thus at least 130 were randomly selected for telephone calls to perform polypharmacy reviews using the VIONE note template. The discontinuation menu was used if a medication qualified to be deprescribed. After 3 months, 1986 prescriptions were deprescribed using VIONE; however, 1060 prescriptions were considered appropriately deprescribed (Table 2). The 13 PACT pharmacists deprescribed 361 medications, and the 29 PACT physicians deprescribed 699 medications. These prescriptions were then separated into medication categories to determine the most common discontinued classes. Vitamins and supplements were the medication class most frequently deprescribed (19.4%), followed by pain medications (15.5%), antimicrobial agents (9.6%), antihypertensive medications (9.2%), and diabetes medications (6.4%) (Table 3). The top 5 medication categories accounted for 60% of all medications appropriately deprescribed.
The estimated annualized cost avoidance for all medications deprescribed in the 3-month project period was $84,030.46. To provide the most appropriate and accurate calculation, medication classes excluded from this figure were acute or short-term prescriptions and antimicrobial agents. Medications prescribed short-term typically are not suitable to continue for an extended period, and antimicrobial agents were excluded since they are normally associated with higher costs, and may overestimate the cost avoidance calculation for the facility.
Discussion
The outcomes for the primary and secondary endpoints of this project illustrate that using VIONE in PACT primary care clinics had a notable impact on polypharmacy and cost avoidance over a short period. This outcome can be attributed to 2 significant effects of using the deprescribing tool. VIONE’s simplicity in application allowed clinicians to incorporate daily use of the tool with minimal effort. Education was all that was required to fully enable clinicians to work together successfully and exercise collaborative practice to promote deprescribing. VIONE also elicited a cascade of favorable effects that improve patient safety and health outcomes. The tool aided in identification of PIM, which helped reduce polypharmacy and medication burden. The risk for DDIs and ADEs may decrease; therefore, the incidence of falls, need for emergency department visits or inpatient care related to polypharmacy may decline. Less complex medication regimens may alleviate issues with adherence and avoid the various consequences of polypharmacy in theory. Simplified regimens can potentially improve disease management and quality of life for patients. Further studies are needed to substantiate deprescribing and its true effect on patient adherence and better health outcomes at this time.10
Reducing polypharmacy can lead to cost savings. Based on the results of this 3-month study, we expect that VASNHS would save more than $84,000 by reducing polypharmacy among its patients. Those savings can be funneled back into the health care system, and allotted to necessary patient care, prescriptions, and health care facility needs.
Limitations
There are some important limitations to this study. Definitions of polypharmacy may vary from one health care facility to another. The cutoffs for polypharmacy may differ, causing the prevalence of polypharmacy and potential costs savings to vary. Use of VIONE may be inconsistent among users if not previously educated or properly trained. For instance, VIONE selections are listed in the same menu as the standard CPRS discontinuation options, which may lead to discontinuation of medical supplies or laboratory orders instead of prescriptions.
The method of data analysis and project design used in this study may have been subject to error. For example, the list of PCPs may have been inaccurate or outdated, which would result in an over- or underrepresentation of those who contributed to data collection. Furthermore, there is some volatility in calculating the total cost avoidance. For example, medications for chronic conditions that were only taken on an as needed basis may have overestimated savings. Either under- or overestimations could occur when parameters are adjusted on the VIONE discontinuation dashboard without appropriate guidance. With the ability to manually adjust the dashboard parameters, dissimilarities in calculations may follow.
Conclusions
The VIONE tool may be useful in improving patient safety through deprescribing and discontinuing PIM. Decreasing the number of medications being taken concomitantly by a patient and continuing only those that are imperative in their medical treatment is the first step to reducing the incidence of polypharmacy. Consequently, chances of ADEs or DDIs are lessened, especially among older individuals who are considered high risk for experiencing the detrimental effects that may ensue. These effects include geriatric-related syndromes, increased risk of fall, hospital visits or admissions, or death. Use of VIONE easily promotes collaboration among clinicians to evaluate medications eligible for discontinuation more regularly. If this deprescribing tool is continuously used, costs avoided can likely be maximized within VA health care systems.
The results of this project should serve as an incentive to push for better prescribing practices and increase deprescribing efforts. It should provoke the need for change in regimens and the subsequent discontinuation of prescriptions that are not considered vital to continue. Finally, the result of this project should substantiate the positive impact a deprescribing tool can possess to avert the issues commonly associated with polypharmacy.
According to the Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), the use of prescription drugs has increased in the past half century. Although prescription drugs have played an important role in preventing, controlling, and delaying onset or progression of disease, their growth in use also has posed many risks.1 One ramification of this growth is the occurrence of polypharmacy, which does not have a universal, clear definition. In general, it can be described as the concurrent use of multiple medications by a single patient to treat one or more medical ailments. Five or more medications taken simultaneously is the most common definition to date, but this is just one of many acceptable definitions and that varies from one health care facility to another.1,2
Regardless of the cutoffs established to indicate polypharmacy, its incidence can result in poor and potentially harmful health outcomes. Polypharmacy increases the risk of experiencing adverse drug events (ADEs), drug-drug interactions (DDIs), geriatric-related syndromes, falls, hospitalization, and mortality. Issues with adherence may begin to unfold secondary to increased pill burden. Both the patient and the health care system may encounter financial strain, as polypharmacy can lead to unnecessary and essentially preventable costs of care. When evaluating the likelihood of polypharmacy based on age group, NCHS found that 47.5% of patients taking ≥ 5 medications were aged ≥ 65 years.1-5 This indicates that polypharmacy is of great concern in the geriatric population, which also represents a large proportion of individuals accessing Veterans Health Administration (VHA) care.
Deprescibing
Deprescribing is the act of withdrawing or discontinuing potentially inappropriate medications (PIM), or medications used by older patients harboring ADEs that generally outweigh the clinical benefits of the drug. Deprescribing is an effective tool for managing or reducing polypharmacy. A variety of tools have been created whose sole purpose is to simplify deprescribing. Some tools explicitly identify PIM and are widely familiar in medical practice. Examples are the Beers Criteria developed in 1991 or Screening Tool to Alert Right Treatment/Screening Tool of Older Persons Prescriptions (START/STOPP) criteria created in 2003. Other tools that are less commonplace but equally as resourceful are MedStopper and Deprescribing.org. The former was launched in 2015 and is a Canadian online system that provides risk assessments for medications with guidance for tapering or stopping medications if continuation of the drug presents higher risk than benefit.5-7 The latter is a full-fledged website developed by a physician, a pharmacist, and their research teams that serves as an exchange hub for deprescribing information.
In 2016, the VIONE (Vital, Important, Optional, Not indicated/treatment complete, and Every medication has an indication) deprescribing tool was developed by Saraswathy Battar, MD, at Central Arkansas Veterans Healthcare System (CAVHS) in Little Rock, as a system that could go beyond medication reconciliation (Table 1). Health care providers (HCPs) and pharmacists evaluate each medication that a patient has been prescribed and places each medication in a VIONE category. Prescribers may then take the opportunity to deprescribe or discontinue medications if deemed appropriate based on their clinical assessments and shared decision making.8 Traditionally, medication reconciliation involves the process of obtaining a complete and accurate list of medications as reported by a patient or caregiver to a HCP. VIONE encourages HCPs and pharmacists not only to ensure medication lists are accurate, but also that each medication reported is appropriate for continued use. In other words, VIONE is meant to help implement deprescribing at opportune times. More than 14,000 medications have been deprescribed using the VIONE method, resulting in more than $2,000,000 of annualized cost avoidance after just 1 year of implementation at CAVHS.9
VIONE consists of 2 major components in the Computerized Patient Record System (CPRS): a template and a dropdown discontinuation menu. The template captured patient allergies, pertinent laboratory data, the patient’s active problem list and applicable diagnoses, and active medication list. Patient aligned care team (PACT) pharmacists used the information captured in the template to conduct medication reconciliations and polypharmacy reviews. Each medication is categorized in VIONE using data collected during reviews. A menu delineates reasons for discontinuation: optional, dose decrease, no diagnosis, not indicated/treatment complete, discontinue alternate medication prescribed, and patient reported no longer taking. The discontinuation menu allowed PACT pharmacists and physicians to choose 1 VIONE option per medication to clarify the reason for discontinuation. VIONE-based discontinuations are recorded in CPRS and identified as deprescribed.
At the time of this project, > 30 US Department of Veterans Affairs (VA) facilities had adopted VIONE. Use of VIONE at VA Southern Nevada Healthcare System (VASNHS) in North Las Vegas has been incorporated in the everyday practices of home-based primary care pharmacists and physicians but has yet to be implemented in other areas of the facility. The purpose of this project was to determine the impact of the VIONE tool on polypharmacy and cost avoidance at VASNHS when used by primary care physicians (PCPs) and PACT primary care clinics.
Methods
Veterans receiving care at VASNHS aged ≥ 65 years with ≥ 10 active medications noted in CPRS were included in this project. PACT pharmacists and physicians were educated on the proper use of the VIONE tool prior to its implementation. Education included a 15-minute slide presentation followed by dissemination of a 1-page VIONE tool handout during a PACT all-staff clinic meeting.
Data were collected for 3 months before and after the intervention. Data were made available for assessment by the Automated Data Processing Application Coordinator (ADPAC) at VASNHS. The ADPAC created and generated an Excel spreadsheet report, which listed all medications deprescribed using the VIONE method. The primary endpoint was the total number of medications discontinued using the VIONE template and/or discontinuation menu. For the purpose of this project, appropriate discontinuation was considered any prescription deprescribed, excluding medical supplies, by pharmacists and PCPs who received VIONE education.
The secondary endpoint was the estimated annualized cost avoidance for the facility (Figure). The calculation does not include medications discontinued due to the prescription of an alternative medication or dose decreases since these VIONE selections imply that a new prescription or order was placed and the original prescription was not deprescribed. Annualized cost avoidance was determined with use of the VIONE dashboard, a database that retrospectively gathers information regarding patients at risk of polypharmacy, polypharmacy-related ADEs, and cost. Manual adjustments were made to various parameters on the Veterans Integrated Service Network 15 VIONE dashboard by the author in order to obtain data specific to this project. These parameters allowed selection of service sections, specific staff members or the option to include or exclude chronic or nonchronic medications. The annualized cost avoidance figure was then compared to raw data pulled by a VIONE dashboard correspondent to ensure the manual calculation was accurate. Finally, the 5 most common classes of medications deprescribed were identified for information purposes and to provide a better postulation on the types of medications being discontinued using the VIONE method.
Results
A total of 2,442 veterans met inclusion criteria, and the VIONE method was applied to 598 between late October 2018 and January 2019. The 13 PACT pharmacists contacted at least 10 veterans each, thus at least 130 were randomly selected for telephone calls to perform polypharmacy reviews using the VIONE note template. The discontinuation menu was used if a medication qualified to be deprescribed. After 3 months, 1986 prescriptions were deprescribed using VIONE; however, 1060 prescriptions were considered appropriately deprescribed (Table 2). The 13 PACT pharmacists deprescribed 361 medications, and the 29 PACT physicians deprescribed 699 medications. These prescriptions were then separated into medication categories to determine the most common discontinued classes. Vitamins and supplements were the medication class most frequently deprescribed (19.4%), followed by pain medications (15.5%), antimicrobial agents (9.6%), antihypertensive medications (9.2%), and diabetes medications (6.4%) (Table 3). The top 5 medication categories accounted for 60% of all medications appropriately deprescribed.
The estimated annualized cost avoidance for all medications deprescribed in the 3-month project period was $84,030.46. To provide the most appropriate and accurate calculation, medication classes excluded from this figure were acute or short-term prescriptions and antimicrobial agents. Medications prescribed short-term typically are not suitable to continue for an extended period, and antimicrobial agents were excluded since they are normally associated with higher costs, and may overestimate the cost avoidance calculation for the facility.
Discussion
The outcomes for the primary and secondary endpoints of this project illustrate that using VIONE in PACT primary care clinics had a notable impact on polypharmacy and cost avoidance over a short period. This outcome can be attributed to 2 significant effects of using the deprescribing tool. VIONE’s simplicity in application allowed clinicians to incorporate daily use of the tool with minimal effort. Education was all that was required to fully enable clinicians to work together successfully and exercise collaborative practice to promote deprescribing. VIONE also elicited a cascade of favorable effects that improve patient safety and health outcomes. The tool aided in identification of PIM, which helped reduce polypharmacy and medication burden. The risk for DDIs and ADEs may decrease; therefore, the incidence of falls, need for emergency department visits or inpatient care related to polypharmacy may decline. Less complex medication regimens may alleviate issues with adherence and avoid the various consequences of polypharmacy in theory. Simplified regimens can potentially improve disease management and quality of life for patients. Further studies are needed to substantiate deprescribing and its true effect on patient adherence and better health outcomes at this time.10
Reducing polypharmacy can lead to cost savings. Based on the results of this 3-month project, we expect that VASNHS would avoid more than $84,000 in annual medication costs by reducing polypharmacy among its patients. Those savings can be funneled back into the health care system and allocated to necessary patient care, prescriptions, and health care facility needs.
Limitations
There are some important limitations to this project. Definitions of polypharmacy vary among health care facilities; differing cutoffs would change both the measured prevalence of polypharmacy and the potential cost savings. Use of VIONE also may be inconsistent among users who have not been educated or properly trained. For instance, VIONE selections are listed in the same menu as the standard CPRS discontinuation options, which may lead to discontinuation of medical supplies or laboratory orders instead of prescriptions.
The method of data analysis and project design also may have been subject to error. For example, the list of PCPs may have been inaccurate or outdated, which would over- or underrepresent those who contributed to data collection. Furthermore, the total cost avoidance calculation has some volatility: medications for chronic conditions that were taken only on an as-needed basis may have inflated the estimated savings, and under- or overestimation could occur when parameters on the VIONE discontinuation dashboard are adjusted without appropriate guidance. Because dashboard parameters can be adjusted manually, calculations may differ from one user to another.
Conclusions
The VIONE tool may be useful in improving patient safety through deprescribing and discontinuing PIM. Decreasing the number of medications a patient takes concomitantly, and continuing only those that are essential to treatment, is the first step in reducing the incidence of polypharmacy. Consequently, the chances of ADEs or DDIs are lessened, especially among older individuals, who are at high risk for the detrimental effects that may ensue, including geriatric syndromes, falls, hospital visits or admissions, and death. VIONE readily promotes collaboration among clinicians in regularly evaluating medications eligible for discontinuation. If the deprescribing tool is used continuously, the costs avoided within VA health care systems can likely be maximized.
The results of this project should serve as an incentive to improve prescribing practices and increase deprescribing efforts, prompting clinicians to reassess regimens and discontinue prescriptions that are no longer vital. Finally, these results should substantiate the positive impact a deprescribing tool can have in averting the problems commonly associated with polypharmacy.
1. Centers for Disease Control and Prevention, National Center for Health Statistics. Health, United States, 2013: with special feature on prescription drugs. Published May 2014. Accessed May 13, 2021. https://www.cdc.gov/nchs/data/hus/hus13.pdf
2. Masnoon N, Shakib S, Kalisch-Ellett L, Caughey GE. What is polypharmacy? A systematic review of definitions. BMC Geriatr. 2017;17(1):230. Published 2017 Oct 10. doi:10.1186/s12877-017-0621-2
3. Parulekar MS, Rogers CK. Polypharmacy and mobility. In: Cifu DX, Lew HL, Oh-Park M, eds. Geriatric Rehabilitation. Elsevier; 2018. doi:10.1016/B978-0-323-54454-2.12001-1
4. Rieckert A, Trampisch US, Klaaßen-Mielke R, et al. Polypharmacy in older patients with chronic diseases: a cross-sectional analysis of factors associated with excessive polypharmacy. BMC Fam Pract. 2018;19(1):113. Published 2018 Jul 18. doi:10.1186/s12875-018-0795-5
5. Thompson CA. New medication review method cuts veterans’ Rx load, saves millions. Am J Health Syst Pharm. 2018;75(8):502-503. doi:10.2146/news180023
6. Reeve E. Deprescribing tools: a review of the types of tools available to aid deprescribing in clinical practice. J Pharm Pract Res. 2020;50(1):98-107. doi:10.1002/jppr.1626
7. Fried TR, Niehoff KM, Street RL, et al. Effect of the Tool to Reduce Inappropriate Medications on Medication Communication and Deprescribing. J Am Geriatr Soc. 2017;65(10):2265-2271. doi:10.1111/jgs.15042
8. Battar S, Dickerson KR, Sedgwick C, et al. Understanding principles of high reliability organizations through the eyes of VIONE, a clinical program to improve patient safety by deprescribing potentially inappropriate medications and reducing polypharmacy. Fed Pract. 2019;36(12):564-568.
9. Battar S, Cmelik T, Dickerson K, Scott M. Experience better health with VIONE: a safe medication deprescribing tool [Nonpublic source, not verified]
10. Ulley J, Harrop D, Ali A, et al. Deprescribing interventions and their impact on medication adherence in community-dwelling older adults with polypharmacy: a systematic review. BMC Geriatr. 2019;19(15):1-13.
Ocular Manifestations of Patients With Cutaneous Rosacea With and Without Demodex Infection
Acne rosacea is a chronic inflammatory disease that may affect the facial skin, eyes, and eyelids.1 It is characterized by transient or persistent flushing, facial erythema, and telangiectases, generally located on the central portion of the face, and may progress to papules and pustules.2,3 At the late stage of the disease, dermal edema or fibroplasia and sebaceous gland hypertrophy may cause phymatous alterations in the skin. In 2004, the National Rosacea Society Expert Committee developed a classification system for rosacea to standardize subtypes and variants that has since been widely accepted and continues to aid in research and epidemiologic studies.4 The committee defined 4 subtypes based on clinical characteristics: erythematotelangiectatic (ETR), papulopustular (PPR), phymatous, and ocular rosacea.2,3
Ocular rosacea may accompany mild, moderate, and severe dermatologic disease or may occur in the absence of diagnostic skin disease.5 Ocular signs include eyelid margin telangiectasia, spade-shaped infiltrates in the cornea, scleritis, and sclerokeratitis. Common symptoms include burning, stinging, light sensitivity, and foreign-body sensation. Ocular signs commonly seen in rosacea are meibomian gland dysfunction characterized by inspissation and inflammation of the meibomian glands (chalazia), conjunctivitis, honey crust and cylindrical collarette accumulation at the base of the eyelashes, irregularity of the eyelid margin architecture, and evaporative tear dysfunction.5,6
The physiopathology of rosacea is still unknown. Potential factors include genetic predisposition, abnormal inflammation, vascular dysfunction, and involvement of several microbial agents, such as commensal Demodex mites. The number of Demodex mites on normal skin flora is less than 5/cm²; however, the increased vascular dilation and capillary permeability associated with rosacea that result from sunlight and heat exposure increase the density of Demodex folliculorum.7 Elevated Demodex mite density has been observed in the lumens of the sebaceous follicles in patients with rosacea. However, because the severity of the clinical manifestations of the disease is not directly associated with the density of D folliculorum, it generally is accepted that D folliculorum is not a pathogenetic but rather an exacerbating factor.8 It has been reported that this species of mite is mostly found on the face and around the eyelashes and scalp of patients and that it can cause ocular surface inflammation.8
Most studies have researched ocular manifestations of rosacea but not ocular involvement in rosacea patients with and without Demodex mite infestation. In our study, we sought to compare the ocular surface, meibomian gland characteristics, and tear film abnormalities among patients with cutaneous rosacea with and without Demodex infestation.
Materials and Methods
We conducted a retrospective study of 60 patients with cutaneous rosacea. This study was approved by the ethics committee of the local hospital (2018/002-003), and all patients provided verbal and written informed consent before participating in the study. The study was carried out according to the guidelines of the Declaration of Helsinki.
Patient Selection and Evaluation
Patients diagnosed with rosacea by a dermatologist within the previous 6 months were included in the study. The diagnosis was made after a detailed anamnesis and dermatologic examination. Rosacea was diagnosed if patients had an itching sensation, erythema and/or erythema attacks, and papules and pustules, and fulfilled the diagnostic criteria of the National Rosacea Society. The skin disease was classified by subtype as ETR, PPR, phymatous rosacea, or ocular rosacea.
The standard skin surface biopsy method was used in all 60 patients to determine Demodex density; a result of more than 5 mites per square centimeter was recorded as positive. Thirty consecutive, newly diagnosed patients with cutaneous acne rosacea and Demodex infestation and 30 consecutive, newly diagnosed, sex- and age-matched patients with acne rosacea without Demodex infestation admitted to the dermatology outpatient clinic were included in this study. Only patients with no other known dermatologic, systemic, or ocular diseases were included. Patients who met any of the following criteria were excluded: prior anti-inflammatory topical and/or systemic treatment for rosacea during the previous 3 months, contact lens wear, eyelid surgery, or autoimmune disease requiring treatment.
Microscopic Demodex Examination
Demodex count was determined using a standardized skin surface biopsy, which is a noninvasive method. Every patient gave samples from the cheeks. This biopsy was repeated from the same site. A drop of cyanoacrylate was placed on a clean slide, pressed against a skin lesion, held in place for 1 minute, and removed. The obtained samples were evaluated under a light microscope (Nikon E200) with oil immersion. When more than 5 mites were detected per square centimeter, the result was recorded as positive.
Ophthalmologic Examination
A complete ophthalmologic examination including visual acuity assessment, standardized slit lamp examination, and fundus examination was performed for all patients. Ocular rosacea was diagnosed on detection of 1 or more of the following: watery or bloodshot appearance, foreign-body sensation, burning or stinging, dryness, itching, light sensitivity, blurred vision, telangiectases of the conjunctiva and eyelid margin, eyelid and periocular erythema, anterior blepharitis, meibomian gland dysfunction, or irregularity of the eyelid margins. All patients were screened for the signs and symptoms of ocular rosacea and underwent additional ophthalmologic examinations, including tear function tests. Tear function was evaluated with the Schirmer test without anesthesia and fluorescein tear breakup time (TBUT). TBUT was assessed after instillation of 2% fluorescein under a cobalt blue filter; the interval between the last complete blink and the appearance of the first dry spot was recorded, and the mean of 3 consecutive measurements was used. The Schirmer test was performed without topical anesthesia using a standardized filter strip (Bio-Tech Vision Care), and the amount of wetting was measured after 5 minutes. Meibomian gland expressibility was assessed by applying digital pressure to the eyelid margin.
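For readers who wish to reproduce the tear-film summary measures, the minimal sketch below averages the 3 consecutive TBUT readings and flags dry eye using the cutoffs applied later in this article (TBUT <10 seconds, Schirmer <10 mm). The function name and data layout are illustrative assumptions, not part of the study protocol, and combining the two cutoffs with a logical "or" is likewise an assumption, since the article reports them separately.

```python
from statistics import mean

def tear_film_summary(tbut_readings_s, schirmer_mm):
    """Average 3 consecutive TBUT readings (seconds) and flag dry eye
    using TBUT <10 s or Schirmer <10 mm (assumed combination of the
    cutoffs reported in this article)."""
    if len(tbut_readings_s) != 3:
        raise ValueError("expected 3 consecutive TBUT measurements")
    mean_tbut = mean(tbut_readings_s)
    return {
        "mean_tbut_s": round(mean_tbut, 1),
        "dry_eye": mean_tbut < 10 or schirmer_mm < 10,
    }

# Hypothetical eye: TBUT readings of 6, 7, and 8 seconds; Schirmer 12 mm
print(tear_film_summary([6, 7, 8], 12))  # {'mean_tbut_s': 7.0, 'dry_eye': True}
```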
Statistical Analysis
Statistical analysis of the study was performed with SPSS Statistics Version 22.0 (SPSS Inc). Continuous variables were reported as mean (SD), and categorical variables were reported as percentages and counts. Descriptive statistics for numerical variables were created. An independent sample t test was used for normally distributed continuous variables. The Kolmogorov-Smirnov test was used to determine normality. The Schirmer test without anesthesia and TBUT values among groups were compared using one-way analysis of variance. The differences were calculated using the multiple comparison Tukey test. P<.05 was considered statistically significant.
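As a minimal sketch of the analysis pipeline described above (not the authors' actual SPSS code), the following shows how the normality check, independent-samples t test, and one-way ANOVA with Tukey post hoc comparison could be run in Python; scipy and statsmodels stand in for SPSS, and the Schirmer values are invented purely for illustration.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical Schirmer scores (mm), for illustration only
group_neg = np.array([14, 12, 18, 9, 16, 11, 15, 13])   # Demodex-negative
group_pos = np.array([10, 8, 13, 7, 12, 9, 11, 10])     # Demodex-positive

# Kolmogorov-Smirnov test against a fitted normal distribution
ks_stat, ks_p = stats.kstest(group_neg, "norm",
                             args=(group_neg.mean(), group_neg.std(ddof=1)))

# Independent-samples t test for normally distributed continuous variables
t_stat, t_p = stats.ttest_ind(group_neg, group_pos)

# One-way ANOVA followed by Tukey's multiple-comparison test
f_stat, anova_p = stats.f_oneway(group_neg, group_pos)
values = np.concatenate([group_neg, group_pos])
labels = ["neg"] * len(group_neg) + ["pos"] * len(group_pos)
tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)

print(f"KS p={ks_p:.3f}, t-test p={t_p:.3f}, ANOVA p={anova_p:.3f}")
print(tukey.summary())
```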
Results
Demographic Characteristics of Rosacea Patients
Sixty eyes of 30 newly diagnosed patients with acne rosacea with Demodex infestation and 60 eyes of 30 newly diagnosed patients with acne rosacea without Demodex infestation were enrolled in this study. The mean age (SD) of the 60 patients was 37.63 (10.01) years. The mean TBUT (SD) of the 120 eyes was 6.65 (3.44) seconds, and the mean Schirmer score (SD) was 12.59 (6.71) mm (Table 1).
Meibomian Gland Dysfunction vs Subgroup of Rosacea Patients
Thirty-four (57%) patients had blepharitis, and 18 (30%) patients had meibomitis. Thirty-five (58.3%) patients had ETR, 5 (8.3%) patients had phymatous rosacea, and 20 (33.4%) patients had PPR (Table 2). Of the Demodex-negative patients, 73.3% (22/30) had ETR, 20% (6/30) had PPR, and 6.7% (2/30) had phymatous rosacea. Of the Demodex-positive patients, 43.3% (13/30) had ETR, 46.7% (14/30) had PPR, and 10% (3/30) had phymatous rosacea (Table 3). Papulopustular rosacea was found to be significantly associated with Demodex positivity (P=.003); neither ETR nor phymatous rosacea was found to be significantly associated with Demodex infestation (P=.66 and P=.13, respectively)(Table 3).
There was no statistically significant difference between the Demodex-negative and Demodex-positive groups for mean age (SD)(37.4 [11.54] years vs 37.87 [8.41] years; P=.85), mean TBUT (SD)(6.73 [3.62] seconds vs 6.57 [3.33] seconds; P=.85), and mean Schirmer score (SD)(13.68 [7.23] mm vs 11.5 [6.08] mm; P=.21)(Table 4).
Fifteen (50%) patients (30 eyes) in the Demodex-negative group and 19 (63.3%) patients (38 eyes) in the Demodex-positive group had blepharitis, with no statistically significant difference between the groups (P=.43). Seven (23.3%) patients (14 eyes) in the Demodex-negative group and 11 (36.7%) patients (22 eyes) in the Demodex-positive group had meibomitis, with no statistically significant difference between the groups (P=.39)(Table 3).
Sixteen (53.3%) patients (32 eyes) in the Demodex-negative group and 21 (70%) patients (42 eyes) in the Demodex-positive group had TBUT values less than 10 seconds. Eighteen (60%) patients (36 eyes) in the Demodex-negative group and 25 (83.3%) patients (50 eyes) in the Demodex-positive group had Schirmer scores less than 10 mm (Table 3). The 2 groups were not significantly different in dry eye findings (P=.25 and P=.29, respectively).
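The Statistical Analysis section does not state which test was used for the categorical comparisons in Table 3, so the sketch below only illustrates how the reported PPR counts (14/30 Demodex-positive vs 6/30 Demodex-negative) could be arranged in a 2x2 table and tested in Python; depending on the test and table construction the authors actually used, the resulting P values may differ from the published value.

```python
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table built from the reported counts: rows = Demodex status,
# columns = PPR vs other subtypes (assumed framing, for illustration)
table = [[14, 16],   # Demodex-positive: 14 PPR, 16 other subtypes
         [6, 24]]    # Demodex-negative:  6 PPR, 24 other subtypes

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square p={p_chi2:.3f}, Fisher exact p={p_fisher:.3f}")
```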
Comment
Inflammation in Rosacea
It is known that the density of nonfloral bacteria as well as D folliculorum and Demodex brevis increases in skin affected by rosacea compared to normal skin. Vascular dilation associated with rosacea that results from sunlight and heat causes increased capillary permeability and creates the ideal environment for the proliferation of D folliculorum. Demodex is thought to act as a vector for the activity of certain other microorganisms, particularly Bacillus oleronius, and thus initiates the inflammatory response associated with rosacea.9
One study reported that the inflammation associated with rosacea caused by Demodex and other environmental stimuli occurs through toll-like receptor 2 and various cytokines.10 Abnormal function of toll-like receptor 2 in the epidermis has been reported to increase production of cathelicidin, an antimicrobial peptide with both vasoactive and proinflammatory activity that has been used to explain the pathogenesis of facial erythema, flushing, and telangiectasia in rosacea.11,12 It also has been reported that increased secretion of proinflammatory cytokines such as IL-1 and gelatinase B in ocular rosacea leads to tear film abnormalities resulting from increased bacterial flora in the eyelids, which subsequently decreases tear drainage and causes dry eyes.13 Furthermore, B oleronius isolated from a D folliculorum mite from patients with PPR produced proteins that induced an inflammatory immune response in 73% (16/22) of patients with rosacea.14
Ocular Findings in Rosacea Patients
In our study, PPR was found to be significantly associated with Demodex positivity compared to ETR and phymatous rosacea (P=.003). However, ocular inflammation findings such as blepharitis and meibomitis did not differ significantly between Demodex-positive and Demodex-negative patients. Although the mean Schirmer score of Demodex-positive patients was lower than that of Demodex-negative patients, the difference was not statistically significant. We evaluated a TBUT of less than 10 seconds and a Schirmer score of less than 10 mm as indicators of dry eye. Accordingly, the number of patients with dry eye was higher in the Demodex-positive group, but this difference was not statistically significant.
Chronic blepharitis, conjunctival inflammation, and meibomian gland dysfunction are among the most common findings of ocular rosacea.15,16 Patients with ocular rosacea commonly have dry eye and abnormal TBUT and Schirmer scores.17 In our study, we found that the fluorescein TBUT and Schirmer scores were more likely to be abnormal in the Demodex-positive group, but the difference between the 2 groups was not statistically significant.
It has been reported that proinflammatory cytokines are increased in rosacea patients as a consequence of a dysregulated immune response. This is supported by increased concentrations of proinflammatory cytokines such as IL-1 and matrix metalloproteinase 9 in these patients' tears and by the improvement of symptoms after inhibition of these cytokines.11 Luo et al18 reported that Demodex infestation causes dry eye, particularly with D brevis. Ayyildiz and Sezgin19 reported that Schirmer scores were significantly lower and Ocular Surface Disease Index scores significantly higher in the Demodex-positive group than in the Demodex-negative group (P=.001 for both). A Korean study reported that Demodex density was correlated with age, sex, and TBUT results but found no significant relationship between Demodex density and Schirmer scores.16
Sobolewska et al20 administered ivermectin cream 1% to 10 patients with cutaneous and ocular rosacea, applying it only to the forehead, chin, nose, cheeks, and regions close to the eyelids, and observed a significant improvement in blepharitis (P=.004). They concluded that ivermectin, even when applied only to the face, suppressed the proinflammatory cytokines associated with rosacea and exerted an anti-inflammatory effect by reducing Demodex mites.20 Li et al21 demonstrated a strong correlation between ocular Demodex infestation and serum reactivity to these bacterial proteins in patients with ocular rosacea, and they found that eyelid margin inflammation and facial rosacea correlated with reactivity to these proteins. These studies suggest a possible role for Demodex infestation and bacterial proteins in the etiology of rosacea.
Gonzalez-Hinojosa et al22 demonstrated that even though eyelash blepharitis was more common in PPR than ETR, there was no statistically significant association between rosacea and Demodex blepharitis. In our study, we found a significant correlation between PPR and Demodex positivity. Also, meibomian gland dysfunction was more common in the Demodex-positive group; however, this result was not statistically significant. One study compared patients with primary demodicosis and patients with rosacea with Demodex-induced blepharitis to healthy controls and found that patients with primary demodicosis and patients with rosacea did not have significantly different ocular findings.23 In contrast, Forton and De Maertelaer24 reported that patients with PPR had significantly more severe ocular manifestations compared with patients with demodicosis (P=.004).
Mizuno et al25 compared the normal (nonrosacea) population with and without Demodex-infested eyelashes and found that the 2 groups were not significantly different for meibomian gland dysfunction, fluorescein TBUT, or ocular surface discomfort.
Varying results have been reported regarding the association between Demodex and blepharitis or ocular surface discomfort, with or without rosacea. In our study, Demodex infestation did not affect tear function tests or meibomian gland function in patients with rosacea. We believe this study is important because it examines the effect of Demodex infestation on ocular findings in patients with cutaneous rosacea.
Limitations
Our study has some limitations. The number of patients was relatively small, resulting in few significant differences between the comparison groups. A larger prospective research study is required to assess the prevalence of Demodex mites in the ocular rosacea population along with associated symptoms and findings.
Conclusion
Rosacea is a chronic disease associated with skin and ocular manifestations that range from mild to severe, that progresses in the form of attacks, and that requires long-term follow-up and treatment. Rosacea most often presents as a disease that causes ocular surface inflammation of varying degrees. Demodex infestation may increase cutaneous or ocular inflammation in rosacea. Therefore, every patient diagnosed with rosacea should be given a dermatologic examination to determine Demodex positivity and an ophthalmologic examination to determine ocular manifestations.
1. O'Reilly N, Gallagher C, Reddy Katikireddy K, et al. Demodex-associated Bacillus proteins induce an aberrant wound healing response in a corneal epithelial cell line: possible implications for corneal ulcer formation in ocular rosacea. Invest Ophthalmol Vis Sci. 2012;53:3250-3259.
2. Webster G, Schaller M. Ocular rosacea: a dermatologic perspective. J Am Acad Dermatol. 2013;69(6 suppl 1):S42-S43.
3. Crawford GH, Pelle MT, James WD. Rosacea: I. etiology, pathogenesis, and subtype classification. J Am Acad Dermatol. 2004;51:327-341.
4. Wilkin J, Dahl M, Detmar M, et al. Standard grading system for rosacea: report of the National Rosacea Society Expert Committee on the classification and staging of rosacea. J Am Acad Dermatol. 2004;50:907-912.
5. Gallo RL, Granstein RD, Kang S, et al. Standard classification and pathophysiology of rosacea: the 2017 update by the National Rosacea Society Expert Committee. J Am Acad Dermatol. 2018;78:148-155.
6. Gao YY, Di Pascuale MA, Li W, et al. High prevalence of Demodex in eyelashes with cylindrical dandruff. Invest Ophthalmol Vis Sci. 2005;46:3089-3094.
7. Fallen RS, Gooderham M. Rosacea: update on management and emerging therapies. Skin Therapy Lett. 2012;17:1-4.
8. Erbagcı Z, Ozgoztası O. The significance of Demodex folliculorum density in rosacea. Int J Dermatol. 1998;37:421-425.
9. Ahn CS, Huang WW. Rosacea pathogenesis. Dermatol Clin. 2018;36:81-86.
10. Forton FMN, De Maertelaer V. Two consecutive standardized skin surface biopsies: an improved sampling method to evaluate Demodex density as a diagnostic tool for rosacea and demodicosis. Acta Derm Venereol. 2017;97:242-248.
11. Yamasaki K, Kanada K, Macleod DT, et al. TLR2 expression is increased in rosacea and stimulates enhanced serine protease production by keratinocytes. J Invest Dermatol. 2011;131:688-697.
12. Gold LM, Draelos ZD. New and emerging treatments for rosacea. Am J Clin Dermatol. 2015;16:457-461.
13. Two AM, Del Rosso JQ. Kallikrein 5-mediated inflammation in rosacea: clinically relevant correlations with acute and chronic manifestations in rosacea and how individual treatments may provide therapeutic benefit. J Clin Aesthet Dermatol. 2014;7:20-25.
14. Lacey N, Delaney S, Kavanagh K, et al. Mite-related bacterial antigens stimulate inflammatory cells in rosacea. Br J Dermatol. 2007;157:474-481.
15. Forton F, Germaux MA, Brasseur T, et al. Demodicosis and rosacea: epidemiology and significance in daily dermatologic practice. J Am Acad Dermatol. 2005;52:74-87.
16. Lee SH, Chun YS, Kim JH, et al. The relationship between Demodex and ocular discomfort. Invest Ophthalmol Vis Sci. 2010;51:2906-2911.
17. Awais M, Anwar MI, Ilfikhar R, et al. Rosacea—the ophthalmic perspective. Cutan Ocul Toxicol. 2015;34:161-166.
18. Luo X, Li J, Chen C, et al. Ocular demodicosis as a potential cause of ocular surface inflammation. Cornea. 2017;36(suppl 1):S9-S14.
19. Ayyildiz T, Sezgin FM. The effect of ocular Demodex colonization on Schirmer test and OSDI scores in newly diagnosed dry eye patients. Eye Contact Lens. 2020;46(suppl 1):S39-S41.
20. Sobolewska B, Doycheva D, Deuter CM, et al. Efficacy of topical ivermectin for the treatment of cutaneous and ocular rosacea [published online April 7, 2020]. Ocul Immunol Inflamm. doi:10.1080/09273948.2020.1727531
21. Li J, O'Reilly N, Sheha H, et al. Correlation between ocular Demodex infestation and serum immunoreactivity to Bacillus proteins in patients with facial rosacea. 2010;117:870-877.
22. Gonzalez-Hinojosa D, Jaime-Villalonga A, Aguilar-Montes G, et al. Demodex and rosacea: is there a relationship? Indian J Ophthalmol. 2018;66:36-38.
23. Sarac G, Cankaya C, Ozcan KN, et al. Increased frequency of Demodex blepharitis in rosacea and facial demodicosis patients. J Cosmet Dermatol. 2020;19:1260-1265.
24. Forton FMN, De Maertelaer V. Rosacea and demodicosis: little-known diagnostic signs and symptoms. Acta Derm Venereol. 2019;99:47-52.
25. Mizuno M, Kawashima M, Uchino M, et al. Demodex-mite infestation in cilia and its association with ocular surface parameters in Japanese volunteers. Eye Contact Lens. 2020;46:291-296.
Acne rosacea is a chronic inflammatory disease that may affect the facial skin, eyes, and eyelids.1 It is characterized by transient or persistent flushing, facial erythema, and telangiectases, generally located on the central portion of the face, and may progress to papules and pustules.2,3 At the late stage of the disease, dermal edema or fibroplasia and sebaceous gland hypertrophy may cause phymatous alterations in the skin. In 2004, the National Rosacea Society Expert Committee developed a classification system for rosacea to standardize subtypes and variants that has since been widely accepted and continues to aid in research and epidemiologic studies.4 The committee defined 4 subtypes based on clinical characteristics: erythematotelangiectatic (ETR), papulopustular (PPR), phymatous, and ocular rosacea.2,3
Ocular rosacea may accompany mild, moderate, and severe dermatologic disease or may occur in the absence of diagnostic skin disease.5 Ocular signs include eyelid margin telangiectasia, spade-shaped infiltrates in the cornea, scleritis, and sclerokeratitis. Common symptoms include burning, stinging, light sensitivity, and foreign-body sensation. Ocular signs commonly seen in rosacea are meibomian gland dysfunction characterized by inspissation and inflammation of the meibomian glands (chalazia), conjunctivitis, honey crust and cylindrical collarette accumulation at the base of the eyelashes, irregularity of the eyelid margin architecture, and evaporative tear dysfunction.5,6
The physiopathology of rosacea is still unknown. Potential factors include genetic predisposition, abnormal inflammation, vascular dysfunction, and involvement of several microbial agents, such as commensal Demodex mites. The number of Demodex mites on normal skin flora is less than 5/cm2; however, the increased vascular dilation and capillary permeability associated with rosacea that result from sunlight and heat exposure increase the density of Demodex folliculorum.7 Elevated Demodex mite density has been observed in the lumens of the sebaceous follicles in patients with rosacea. However, because the severity of the clinical manifestations of the disease is not directly associated with the density of D folliculorum, it generally is accepted that D folliculorum is not a pathogenetic but rather an exacerbating factor.8 It has been reported that this species of mite is mostly found on the face and around the eyelashes and scalp of patients and that it can cause ocular surface inflammation.8
Most studies have researched ocular manifestations of rosacea but not ocular involvement in rosacea patients with and without Demodex mite infestation. In our study, we sought to compare the ocular surface, meibomian gland characteristics, and tear film abnormalities among patients with cutaneous rosacea with and without Demodex infestation.
Materials and Methods
We conducted a retrospective study of 60 patients with cutaneous rosacea. This study was approved by the ethics committee of the local hospital (2018/002-003), and all patients provided verbal and written informed consent before participating in the study. The study was carried out according to the guidelines of the Declaration of Helsinki.
Patient Selection and Evaluation
Patients diagnosed with rosacea by a dermatologist within 6 months were included in the study. Diagnosis of the disease was made after a detailed anamnesis and dermatologic examination. Rosacea was diagnosed if patients had an itching sensation, erythema and/or erythema attacks, and papules and pustules, and fulfilled the diagnostic criteria according to the National Rosacea Society. The skin disease was classified according to the subtypes as ETR, PPR, phymatous rosacea, or ocular rosacea.
The standard skin surface biopsy method was used in 60 patients for detecting Demodex density. When more than 5 mites were detected per square centimeter, the result was recorded as positive. Thirty consecutive, newly diagnosed patients with cutaneous acne rosacea with Demodex infestation and 30 consecutive, newly diagnosed sex- and age-matched patients with acne rosacea without Demodex infestation admitted to the dermatology outpatient clinic were included to this study. The patients who did not have any known dermatologic, systemic, or ocular diseases were included in the study. Patients who met any of the following criteria were excluded from the study: prior anti-inflammatory topical and/or systemic treatment for rosacea during the last 3 months, contact lens wear, eyelid surgery, or autoimmune disease requiring treatment.
Microscopic Demodex Examination
Demodex count was determined using a standardized skin surface biopsy, which is a noninvasive method. Every patient gave samples from the cheeks. This biopsy was repeated from the same site. A drop of cyanoacrylate was placed on a clean slide, pressed against a skin lesion, held in place for 1 minute, and removed. The obtained samples were evaluated under a light microscope (Nikon E200) with oil immersion. When more than 5 mites were detected per square centimeter, the result was recorded as positive.
Ophthalmologic Examination
A complete ophthalmologic examination including visual acuity assessment, standardized slit lamp examination, and fundus examination was done for all patients. Ocular rosacea was diagnosed on detection of 1 or more of the following: watery or bloodshot appearance, foreign-body sensation, burning or stinging, dryness, itching, light sensitivity, blurred vision, telangiectases of the conjunctiva and eyelid margin, eyelid lid and periocular erythema, anterior blepharitis, meibomian gland dysfunction, or irregularity of eyelid margins. All patients were screened for the signs and symptoms of ocular rosacea and underwent other ophthalmologic examinations, including tear function tests. Tear functions were evaluated with Schirmer tests without anesthesia and fluorescein tear breakup time (TBUT). Tear film breakup time was assessed after instillation of 2% fluorescein staining under a cobalt blue filter. The time interval between the last complete blink and the appearance of the first dry spot was recorded. The mean of 3 consecutive measurements was obtained. The Schirmer test was performed without topical anesthesia using a standardized filter strip (Bio-Tech Vision Care). The amount of wetting was measured after 5 minutes. Meibomian gland expressibility was assessed by applying digital pressure to the eyelid margin.
Statistical Analysis
Statistical analysis of the study was performed with SPSS Statistics Version 22.0 (SPSS Inc). Continuous variables were reported as mean (SD), and categorical variables were reported as percentages and counts. Descriptive statistics for numerical variables were created. An independent sample t test was used for normally distributed continuous variables. The Kolmogorov-Smirnov test was used to determine normality. The Schirmer test without anesthesia and TBUT values among groups were compared using one-way analysis of variance. The differences were calculated using the multiple comparison Tukey test. P<.05 was considered statistically significant.
Results
Demographic Characteristics of Rosacea Patients
Sixty eyes of 30 newly diagnosed patients with acne rosacea with Demodex infestation and 60 eyes of 30 newly diagnosed patients with acne rosacea without Demodex infestation were enrolled in this study. The mean age (SD) of the 60 patients was 37.63 (10.01) years. The mean TBUT (SD) of the 120 eyes was 6.65 (3.44) seconds, and the mean Schirmer score (SD) was 12.59 (6.71) mm (Table 1).
Meibomian Gland Dysfunction vs Subgroup of Rosacea Patients
Thirty-four (57%) patients had blepharitis, and 18 (30%) patients had meibomitis. Thirty-five (58.3%) patients had ETR, 5 (8.3%) patients had phymatous rosacea, and 20 (33.4%) patients had PPR (Table 2). Of the Demodex-negative patients, 73.3% (22/30) had ETR, 20% (6/30) had PPR, and 6.7% (2/30) had phymatous rosacea. Of the Demodex-positive patients, 43.3% (13/30) had ETR, 46.7% (14/30) had PPR, and 10% (3/30) had phymatous rosacea (Table 3). Papulopustular rosacea was found to be significantly associated with Demodex positivity (P=.003); neither ETR nor phymatous rosacea was found to be significantly associated with Demodex infestation (P=.66 and P=.13, respectively)(Table 3).
There was no statistically significant difference between the Demodex-negative and Demodex-positive groups for mean age (SD)(37.4 [11.54] years vs 37.87 [8.41] years; P=.85), mean TBUT (SD)(6.73 [3.62] seconds vs 6.57 [3.33] seconds; P=.85), and mean Schirmer score (SD)(13.68 [7.23] mm vs 11.5 [6.08] mm; P=.21)(Table 4).
Fifteen (50%) patients (30 eyes) in the Demodex-negative group and 19 (63.3%) patients (38 eyes) in the Demodex-positive group had blepharitis, with no statistically significant difference between the groups (P=.43). Seven (23.3%) patients (14 eyes) in the Demodex-negative group and 11 (36.7%) patients (22 eyes) in the Demodex-positive group had meibomitis, with no statistically significant difference between the groups (P=.39)(Table 3).
Sixteen (53.3%) patients (32 eyes) in the Demodex-negative group and 21 (70%) patients (42 eyes) in the Demodex-positive group had TBUT values less than 10 seconds. Eighteen (60%) patients (36 eyes) in the Demodex-negative group and 25 (83.3%) patients (50 eyes) in the Demodex-positive group had Schirmer scores less than 10 mm (Table 3). The 2 groups were not significantly different in dry eye findings (P=.25 and P=.29, respectively).
Comment
Inflammation in Rosacea
It is known that the density of nonfloral bacteria as well as D folliculorum and Demodex brevis increases in skin affected by rosacea compared to normal skin. Vascular dilation associated with rosacea that results from sunlight and heat causes increased capillary permeability and creates the ideal environment for the proliferation of D folliculorum. Demodex is thought to act as a vector for the activity of certain other microorganisms, particularly Bacillus oleronius, and thus initiates the inflammatory response associated with rosacea.9
One study reported that the inflammation associated with rosacea that was caused by Demodex and other environmental stimuli occurred through toll-like receptor 2 and various cytokines.10 It has been reported that the abnormal function of toll-like receptor 2 in the epidermis leads to the increased production of cathelicidin. Cathelicidin is an antimicrobial peptide with both vasoactive and proinflammatory activity and has been used as a basis to explain the pathogenesis of facial erythema, flushing, and telangiectasia in the context of rosacea.11,12 In addition, it has been reported that the increased secretion of proinflammatory cytokines such as IL-1 and gelatinase B in ocular rosacea leads to tearing film abnormalities that result from increased bacterial flora in the eyelids, which subsequently leads to decreased tear drainage and dry eyes.13 In addition, B oleronius isolated from a D folliculorum mite from patients with PPR produced proteins that induced an inflammatory immune response in 73% (16/22) of patients with rosacea.14
Ocular Findings in Rosacea Patients
In our study, PPR was found to be significantly associated with Demodex positivity compared to ETR and phymatous rosacea (P=.003). However, ocular inflammation findings such as blepharitis and meibomitis were not significantly different between Demodex-positive and Demodex-negative patients. Although the mean Schirmer score of Demodex-positive patients was lower than Demodex-negative patients, this difference was not statistically significant. We evaluated a TBUT of less than 10 seconds and a Schirmer score less than 10 mm as dry eye. Accordingly, the number of patients with dry eye was higher in the Demodex-positive group, but this difference was not statistically significant.
Chronic blepharitis, conjunctival inflammation, and meibomian gland dysfunction are among the most common findings of ocular rosacea.15,16 Patients with ocular rosacea commonly have dry eye and abnormal TBUT and Schirmer scores.17 In our study, we found that the fluorescein TBUT and Schirmer scores were more likely to be abnormal in the Demodex-positive group, but the difference between the 2 groups was not statistically significant.
It has been reported that proinflammatory cytokines due to a weakened immune system in rosacea patients were increased. The weakened immune system was further supported by the increased concentrations of proinflammatory cytokines such as IL-1 and matrix metalloproteinase 9 in these patients’ tears and the improvement of symptoms after the inhibition of these cytokines.11 Luo et al18 reported that Demodex inflammation causes dry eye, particularly with D brevis. Ayyildiz and Sezgin19 reported that Schirmer scores were significantly lower and that the Ocular Surface Disease Index had significantly increased in the Demodex-positive group compared to the Demodex-negative group (P=.001 for both). A Korean study reported that Demodex density was correlated with age, sex, and TBUT results, but there was no significant relationship between Demodex density and Schirmer scores.16
Sobolewska et al20 administered ivermectin cream 1% to 10 patients with cutaneous and ocular rosacea, but only to the forehead, chin, nose, cheeks, and regions close to the eyelids, and observed a significant improvement in blepharitis (P=.004). They stated that ivermectin, as applied only to the face, suppressed the proinflammatory cytokines associated with rosacea and showed anti-inflammatory effects by reducing Demodex mites.20Li et al21 demonstrated a strong correlation between ocular Demodex inflammation and serum reactivity to these bacterial proteins in patients with ocular rosacea, and they found that eyelid margin inflammation and facial rosacea correlated with reactivity to these proteins. These studies suggest a possible role for Demodex infestation and bacterial proteins in the etiology of rosacea.
Gonzalez-Hinojosa et al22 demonstrated that even though eyelash blepharitis was more common in PPR than ETR, there was no statistically significant association between rosacea and Demodex blepharitis. In our study, we found a significant correlation between PPR and Demodex positivity. Also, meibomian gland dysfunction was more common in the Demodex-positive group; however, this result was not statistically significant. One study compared patients with primary demodicosis and patients with rosacea with Demodex-induced blepharitis to healthy controls and found that patients with primary demodicosis and patients with rosacea did not have significantly different ocular findings.23 In contrast, Forton and De Maertelaer24 reported that patients with PPR had significantly more severe ocular manifestations compared with patients with demodicosis (P=.004).
Mizuno et al25 compared the normal (nonrosacea) population with and without Demodex-infested eyelashes and found that the 2 groups were not significantly different for meibomian gland dysfunction, fluorescein TBUT, or ocular surface discomfort.
Varying results have been reported regarding the association between Demodex and blepharitis or ocular surface discomfort with or without rosacea. In our study, we found that Demodex did not affect tear function tests or meibomian gland function in patients with rosacea. We believe this study is important because it demonstrates the effects of Demodex on ocular findings in patients with cutaneous rosacea.
Limitations
Our study has some limitations. The number of patients was relatively small, resulting in few significant differences between the comparison groups. A larger prospective research study is required to assess the prevalence of Demodex mites in the ocular rosacea population along with associated symptoms and findings.
Conclusion
Rosacea is a chronic disease associated with skin and ocular manifestations that range from mild to severe, that progresses in the form of attacks, and that requires long-term follow-up and treatment. Rosacea most often presents as a disease that causes ocular surface inflammation of varying degrees. Demodex infestation may increase cutaneous or ocular inflammation in rosacea. Therefore, every patient diagnosed with rosacea should be given a dermatologic examination to determine Demodex positivity and an ophthalmologic examination to determine ocular manifestations.
Acne rosacea is a chronic inflammatory disease that may affect the facial skin, eyes, and eyelids.1 It is characterized by transient or persistent flushing, facial erythema, and telangiectases, generally located on the central portion of the face, and may progress to papules and pustules.2,3 At the late stage of the disease, dermal edema or fibroplasia and sebaceous gland hypertrophy may cause phymatous alterations in the skin. In 2004, the National Rosacea Society Expert Committee developed a classification system for rosacea to standardize subtypes and variants that has since been widely accepted and continues to aid in research and epidemiologic studies.4 The committee defined 4 subtypes based on clinical characteristics: erythematotelangiectatic (ETR), papulopustular (PPR), phymatous, and ocular rosacea.2,3
Ocular rosacea may accompany mild, moderate, and severe dermatologic disease or may occur in the absence of diagnostic skin disease.5 Ocular signs include eyelid margin telangiectasia, spade-shaped infiltrates in the cornea, scleritis, and sclerokeratitis. Common symptoms include burning, stinging, light sensitivity, and foreign-body sensation. Ocular signs commonly seen in rosacea are meibomian gland dysfunction characterized by inspissation and inflammation of the meibomian glands (chalazia), conjunctivitis, honey crust and cylindrical collarette accumulation at the base of the eyelashes, irregularity of the eyelid margin architecture, and evaporative tear dysfunction.5,6
The physiopathology of rosacea is still unknown. Potential factors include genetic predisposition, abnormal inflammation, vascular dysfunction, and involvement of several microbial agents, such as commensal Demodex mites. The number of Demodex mites on normal skin flora is less than 5/cm2; however, the increased vascular dilation and capillary permeability associated with rosacea that result from sunlight and heat exposure increase the density of Demodex folliculorum.7 Elevated Demodex mite density has been observed in the lumens of the sebaceous follicles in patients with rosacea. However, because the severity of the clinical manifestations of the disease is not directly associated with the density of D folliculorum, it generally is accepted that D folliculorum is not a pathogenetic but rather an exacerbating factor.8 It has been reported that this species of mite is mostly found on the face and around the eyelashes and scalp of patients and that it can cause ocular surface inflammation.8
Most studies have researched ocular manifestations of rosacea but not ocular involvement in rosacea patients with and without Demodex mite infestation. In our study, we sought to compare the ocular surface, meibomian gland characteristics, and tear film abnormalities among patients with cutaneous rosacea with and without Demodex infestation.
Materials and Methods
We conducted a retrospective study of 60 patients with cutaneous rosacea. This study was approved by the ethics committee of the local hospital (2018/002-003), and all patients provided verbal and written informed consent before participating in the study. The study was carried out according to the guidelines of the Declaration of Helsinki.
Patient Selection and Evaluation
Patients diagnosed with rosacea by a dermatologist within 6 months were included in the study. Diagnosis of the disease was made after a detailed anamnesis and dermatologic examination. Rosacea was diagnosed if patients had an itching sensation, erythema and/or erythema attacks, and papules and pustules, and fulfilled the diagnostic criteria according to the National Rosacea Society. The skin disease was classified according to the subtypes as ETR, PPR, phymatous rosacea, or ocular rosacea.
The standard skin surface biopsy method was used in 60 patients for detecting Demodex density. When more than 5 mites were detected per square centimeter, the result was recorded as positive. Thirty consecutive, newly diagnosed patients with cutaneous acne rosacea with Demodex infestation and 30 consecutive, newly diagnosed sex- and age-matched patients with acne rosacea without Demodex infestation admitted to the dermatology outpatient clinic were included to this study. The patients who did not have any known dermatologic, systemic, or ocular diseases were included in the study. Patients who met any of the following criteria were excluded from the study: prior anti-inflammatory topical and/or systemic treatment for rosacea during the last 3 months, contact lens wear, eyelid surgery, or autoimmune disease requiring treatment.
Microscopic Demodex Examination
Demodex count was determined using a standardized skin surface biopsy, which is a noninvasive method. Every patient gave samples from the cheeks. This biopsy was repeated from the same site. A drop of cyanoacrylate was placed on a clean slide, pressed against a skin lesion, held in place for 1 minute, and removed. The obtained samples were evaluated under a light microscope (Nikon E200) with oil immersion. When more than 5 mites were detected per square centimeter, the result was recorded as positive.
Ophthalmologic Examination
A complete ophthalmologic examination including visual acuity assessment, standardized slit lamp examination, and fundus examination was done for all patients. Ocular rosacea was diagnosed on detection of 1 or more of the following: watery or bloodshot appearance, foreign-body sensation, burning or stinging, dryness, itching, light sensitivity, blurred vision, telangiectases of the conjunctiva and eyelid margin, eyelid lid and periocular erythema, anterior blepharitis, meibomian gland dysfunction, or irregularity of eyelid margins. All patients were screened for the signs and symptoms of ocular rosacea and underwent other ophthalmologic examinations, including tear function tests. Tear functions were evaluated with Schirmer tests without anesthesia and fluorescein tear breakup time (TBUT). Tear film breakup time was assessed after instillation of 2% fluorescein staining under a cobalt blue filter. The time interval between the last complete blink and the appearance of the first dry spot was recorded. The mean of 3 consecutive measurements was obtained. The Schirmer test was performed without topical anesthesia using a standardized filter strip (Bio-Tech Vision Care). The amount of wetting was measured after 5 minutes. Meibomian gland expressibility was assessed by applying digital pressure to the eyelid margin.
Statistical Analysis
Statistical analysis of the study was performed with SPSS Statistics Version 22.0 (SPSS Inc). Continuous variables were reported as mean (SD), and categorical variables were reported as percentages and counts. Descriptive statistics for numerical variables were created. An independent sample t test was used for normally distributed continuous variables. The Kolmogorov-Smirnov test was used to determine normality. The Schirmer test without anesthesia and TBUT values among groups were compared using one-way analysis of variance. The differences were calculated using the multiple comparison Tukey test. P<.05 was considered statistically significant.
Results
Demographic Characteristics of Rosacea Patients
Sixty eyes of 30 newly diagnosed patients with acne rosacea with Demodex infestation and 60 eyes of 30 newly diagnosed patients with acne rosacea without Demodex infestation were enrolled in this study. The mean age (SD) of the 60 patients was 37.63 (10.01) years. The mean TBUT (SD) of the 120 eyes was 6.65 (3.44) seconds, and the mean Schirmer score (SD) was 12.59 (6.71) mm (Table 1).
Meibomian Gland Dysfunction vs Subgroup of Rosacea Patients
Thirty-four (57%) patients had blepharitis, and 18 (30%) patients had meibomitis. Thirty-five (58.3%) patients had ETR, 5 (8.3%) patients had phymatous rosacea, and 20 (33.4%) patients had PPR (Table 2). Of the Demodex-negative patients, 73.3% (22/30) had ETR, 20% (6/30) had PPR, and 6.7% (2/30) had phymatous rosacea. Of the Demodex-positive patients, 43.3% (13/30) had ETR, 46.7% (14/30) had PPR, and 10% (3/30) had phymatous rosacea (Table 3). Papulopustular rosacea was found to be significantly associated with Demodex positivity (P=.003); neither ETR nor phymatous rosacea was found to be significantly associated with Demodex infestation (P=.66 and P=.13, respectively)(Table 3).
There was no statistically significant difference between the Demodex-negative and Demodex-positive groups for mean age (SD)(37.4 [11.54] years vs 37.87 [8.41] years; P=.85), mean TBUT (SD)(6.73 [3.62] seconds vs 6.57 [3.33] seconds; P=.85), and mean Schirmer score (SD)(13.68 [7.23] mm vs 11.5 [6.08] mm; P=.21)(Table 4).
Fifteen (50%) patients (30 eyes) in the Demodex-negative group and 19 (63.3%) patients (38 eyes) in the Demodex-positive group had blepharitis, with no statistically significant difference between the groups (P=.43). Seven (23.3%) patients (14 eyes) in the Demodex-negative group and 11 (36.7%) patients (22 eyes) in the Demodex-positive group had meibomitis, with no statistically significant difference between the groups (P=.39)(Table 3).
Sixteen (53.3%) patients (32 eyes) in the Demodex-negative group and 21 (70%) patients (42 eyes) in the Demodex-positive group had TBUT values less than 10 seconds. Eighteen (60%) patients (36 eyes) in the Demodex-negative group and 25 (83.3%) patients (50 eyes) in the Demodex-positive group had Schirmer scores less than 10 mm (Table 3). The 2 groups were not significantly different in dry eye findings (P=.25 and P=.29, respectively).
Comment
Inflammation in Rosacea
It is known that the density of nonfloral bacteria as well as D folliculorum and Demodex brevis increases in skin affected by rosacea compared to normal skin. Vascular dilation associated with rosacea that results from sunlight and heat causes increased capillary permeability and creates the ideal environment for the proliferation of D folliculorum. Demodex is thought to act as a vector for the activity of certain other microorganisms, particularly Bacillus oleronius, and thus initiates the inflammatory response associated with rosacea.9
One study reported that the inflammation associated with rosacea that was caused by Demodex and other environmental stimuli occurred through toll-like receptor 2 and various cytokines.10 It has been reported that the abnormal function of toll-like receptor 2 in the epidermis leads to the increased production of cathelicidin. Cathelicidin is an antimicrobial peptide with both vasoactive and proinflammatory activity and has been used as a basis to explain the pathogenesis of facial erythema, flushing, and telangiectasia in the context of rosacea.11,12 In addition, it has been reported that the increased secretion of proinflammatory cytokines such as IL-1 and gelatinase B in ocular rosacea leads to tearing film abnormalities that result from increased bacterial flora in the eyelids, which subsequently leads to decreased tear drainage and dry eyes.13 In addition, B oleronius isolated from a D folliculorum mite from patients with PPR produced proteins that induced an inflammatory immune response in 73% (16/22) of patients with rosacea.14
Ocular Findings in Rosacea Patients
In our study, PPR was found to be significantly associated with Demodex positivity compared to ETR and phymatous rosacea (P=.003). However, ocular inflammation findings such as blepharitis and meibomitis were not significantly different between Demodex-positive and Demodex-negative patients. Although the mean Schirmer score of Demodex-positive patients was lower than Demodex-negative patients, this difference was not statistically significant. We evaluated a TBUT of less than 10 seconds and a Schirmer score less than 10 mm as dry eye. Accordingly, the number of patients with dry eye was higher in the Demodex-positive group, but this difference was not statistically significant.
Chronic blepharitis, conjunctival inflammation, and meibomian gland dysfunction are among the most common findings of ocular rosacea.15,16 Patients with ocular rosacea commonly have dry eye and abnormal TBUT and Schirmer scores.17 In our study, we found that the fluorescein TBUT and Schirmer scores were more likely to be abnormal in the Demodex-positive group, but the difference between the 2 groups was not statistically significant.
It has been reported that proinflammatory cytokines due to a weakened immune system in rosacea patients were increased. The weakened immune system was further supported by the increased concentrations of proinflammatory cytokines such as IL-1 and matrix metalloproteinase 9 in these patients’ tears and the improvement of symptoms after the inhibition of these cytokines.11 Luo et al18 reported that Demodex inflammation causes dry eye, particularly with D brevis. Ayyildiz and Sezgin19 reported that Schirmer scores were significantly lower and that the Ocular Surface Disease Index had significantly increased in the Demodex-positive group compared to the Demodex-negative group (P=.001 for both). A Korean study reported that Demodex density was correlated with age, sex, and TBUT results, but there was no significant relationship between Demodex density and Schirmer scores.16
Sobolewska et al20 administered ivermectin cream 1% to 10 patients with cutaneous and ocular rosacea, applying it only to the forehead, chin, nose, cheeks, and regions close to the eyelids, and observed a significant improvement in blepharitis (P=.004). They concluded that ivermectin applied only to the face suppressed the proinflammatory cytokines associated with rosacea and exerted an anti-inflammatory effect by reducing Demodex mites.20 Li et al21 demonstrated a strong correlation between ocular Demodex infestation and serum reactivity to B oleronius proteins in patients with ocular rosacea, and they found that eyelid margin inflammation and facial rosacea also correlated with reactivity to these proteins. These studies suggest a possible role for Demodex infestation and bacterial proteins in the etiology of rosacea.
Gonzalez-Hinojosa et al22 demonstrated that, although eyelash blepharitis was more common in PPR than in ETR, there was no statistically significant association between rosacea and Demodex blepharitis. In our study, we found a significant association between PPR and Demodex positivity. Meibomian gland dysfunction also was more common in the Demodex-positive group, but this result was not statistically significant. One study that compared patients with primary demodicosis and patients with rosacea and Demodex-induced blepharitis to healthy controls found that the ocular findings of the demodicosis and rosacea groups did not differ significantly.23 In contrast, Forton and De Maertelaer24 reported that patients with PPR had significantly more severe ocular manifestations than patients with demodicosis (P=.004).
Mizuno et al25 compared the normal (nonrosacea) population with and without Demodex-infested eyelashes and found that the 2 groups were not significantly different for meibomian gland dysfunction, fluorescein TBUT, or ocular surface discomfort.
Varying results have been reported regarding the association between Demodex and blepharitis or ocular surface discomfort, with or without rosacea. In our study, Demodex did not affect tear function tests or meibomian gland function in patients with rosacea. We believe this study is important because it evaluates the effects of Demodex on ocular findings in patients with cutaneous rosacea.
Limitations
Our study has some limitations. The number of patients was relatively small, which limited the power to detect significant differences between the comparison groups. A larger prospective study is required to assess the prevalence of Demodex mites in the ocular rosacea population along with the associated symptoms and findings.
Conclusion
Rosacea is a chronic disease with cutaneous and ocular manifestations that range from mild to severe; it progresses in flares and requires long-term follow-up and treatment. Ocular surface inflammation of varying degrees is a common presentation. Demodex infestation may increase cutaneous or ocular inflammation in rosacea. Therefore, every patient diagnosed with rosacea should undergo a dermatologic examination to determine Demodex positivity and an ophthalmologic examination to evaluate for ocular manifestations.
- O’Reilly N, Gallagher C, Reddy Katikireddy K, et al. Demodex-associated Bacillus proteins induce an aberrant wound healing response in a corneal epithelial cell line: possible implications for corneal ulcer formation in ocular rosacea. Invest Ophthalmol Vis Sci. 2012;53:3250-3259.
- Webster G, Schaller M. Ocular rosacea: a dermatologic perspective. J Am Acad Dermatol. 2013;69(6 suppl 1):S42-S43.
- Crawford GH, Pelle MT, James WD. Rosacea: I. etiology, pathogenesis, and subtype classification. J Am Acad Dermatol. 2004;51:327-341.
- Wilkin J, Dahl M, Detmar M, et al. Standard grading system for rosacea: report of the National Rosacea Society Expert Committee on the classification and staging of rosacea. J Am Acad Dermatol. 2004;50:907-912.
- Gallo RL, Granstein RD, Kang S, et al. Standard classification and pathophysiology of rosacea: the 2017 update by the National Rosacea Society Expert Committee. J Am Acad Dermatol. 2018;78:148-155.
- Gao YY, Di Pascuale MA, Li W, et al. High prevalence of Demodex in eyelashes with cylindrical dandruff. Invest Ophthalmol Vis Sci. 2005;46:3089-3094.
- Fallen RS, Gooderham M. Rosacea: update on management and emerging therapies. Skin Therapy Lett. 2012;17:1-4.
- Erbagcı Z, Ozgoztası O. The significance of Demodex folliculorum density in rosacea. Int J Dermatol. 1998;37:421-425.
- Ahn CS, Huang WW. Rosacea pathogenesis. Dermatol Clin. 2018;36:81‐86.
- Forton FMN, De Maertelaer V. Two consecutive standardized skin surface biopsies: an improved sampling method to evaluate Demodex density as a diagnostic tool for rosacea and demodicosis. Acta Derm Venereol. 2017;97:242‐248.
- Yamasaki K, Kanada K, Macleod DT, et al. TLR2 expression is increased in rosacea and stimulates enhanced serine protease production by keratinocytes. J Invest Dermatol. 2011;131:688-697.
- Gold LM, Draelos ZD. New and emerging treatments for rosacea. Am J Clin Dermatol. 2015;16:457-461.
- Two AM, Del Rosso JQ. Kallikrein 5-mediated inflammation in rosacea: clinically relevant correlations with acute and chronic manifestations in rosacea and how individual treatments may provide therapeutic benefit. J Clin Aesthet Dermatol. 2014;7:20-25.
- Lacey N, Delaney S, Kavanagh K, et al. Mite-related bacterial antigens stimulate inflammatory cells in rosacea. Br J Dermatol. 2007;157:474-481.
- Forton F, Germaux MA, Brasseur T, et al. Demodicosis and rosacea: epidemiology and significance in daily dermatologic practice. J Am Acad Dermatol. 2005;52:74-87.
- Lee SH, Chun YS, Kim JH, et al. The relationship between Demodex and ocular discomfort. Invest Ophthalmol Vis Sci. 2010;51:2906-2911.
- Awais M, Anwar MI, Iftikhar R, et al. Rosacea—the ophthalmic perspective. Cutan Ocul Toxicol. 2015;34:161-166.
- Luo X, Li J, Chen C, et al. Ocular demodicosis as a potential cause of ocular surface inflammation. Cornea. 2017;36(suppl 1):S9-S14.
- Ayyildiz T, Sezgin FM. The effect of ocular Demodex colonization on Schirmer test and OSDI scores in newly diagnosed dry eye patients. Eye Contact Lens. 2020;46(suppl 1):S39-S41.
- Sobolewska B, Doycheva D, Deuter CM, et al. Efficacy of topical ivermectin for the treatment of cutaneous and ocular rosacea [published online April 7, 2020]. Ocul Immunol Inflamm. doi:10.1080/09273948.2020.1727531
- Li J, O’Reilly N, Sheha H, et al. Correlation between ocular Demodex infestation and serum immunoreactivity to Bacillus proteins in patients with facial rosacea. Ophthalmology. 2010;117:870-877.
- Gonzalez‐Hinojosa D, Jaime‐Villalonga A, Aguilar‐Montes G, et al. Demodex and rosacea: is there a relationship? Indian J Ophthalmol. 2018;66:36‐38.
- Sarac G, Cankaya C, Ozcan KN, et al. Increased frequency of Demodex blepharitis in rosacea and facial demodicosis patients. J Cosmet Dermatol. 2020;19:1260-1265.
- Forton FMN, De Maertelaer V. Rosacea and demodicosis: little-known diagnostic signs and symptoms. Acta Derm Venereol. 2019;99:47-52.
- Mizuno M, Kawashima M, Uchino M, et al. Demodex-mite infestation in cilia and its association with ocular surface parameters in Japanese volunteers. Eye Contact Lens. 2020;46:291-296.
Practice Points
- Rosacea is a common chronic inflammatory disease of the central facial skin of unknown origin. Patients with ocular rosacea may report dryness, itching, and photophobia.
- Demodex infestation may increase cutaneous or ocular inflammation in rosacea.
Cutaneous Complications Associated With Intraosseous Access Placement
Intraosseous (IO) access can afford a lifesaving means of vascular access in emergency settings, as it allows for the administration of large volumes of fluids, blood products, and medications at high flow rates directly into the highly vascularized osseous medullary cavity.1 Fortunately, the complication rate with this resuscitative effort is low, with many reports demonstrating complication rates of less than 1%.2 The most commonly reported complications include fluid extravasation, osteomyelitis, traumatic bone fracture, and epiphyseal plate damage.1-3 Although compartment syndrome and skin necrosis have been reported,4,5 there is no comprehensive list of sequelae resulting from fluid extravasation in the literature, and there are no known studies examining the incidence and types of cutaneous complications. In this study, we sought to evaluate the dermatologic impacts of this procedure.
Methods
We performed a retrospective chart review, approved by the institutional review board, at a large metropolitan level I trauma center in the Midwestern United States spanning 18 consecutive months to identify all patients who underwent IO line placement, either en route to or upon arrival at the trauma center. The electronic medical records of 113 patients (age range, 10 days–94 years) were identified using either an automated natural language look-up program with keywords including intraosseous access and IO or Current Procedural Terminology (CPT) code 36680. Data including patient age, reason for IO insertion, anatomic location of the IO line, and complications secondary to IO line placement were recorded.
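The record identification step described above lends itself to a simple scripted filter. The following Python sketch is illustrative only: the record structure and field names (note_text, cpt_codes) are hypothetical rather than taken from the study's data source, and the keyword pattern simply mirrors the terms listed in the Methods.

```python
import re

# Hypothetical records; field names are illustrative, not from the study.
records = [
    {"mrn": "A001", "note_text": "IO access obtained in right tibia en route.", "cpt_codes": ["36680"]},
    {"mrn": "A002", "note_text": "Peripheral IV placed without difficulty.", "cpt_codes": ["36556"]},
    {"mrn": "A003", "note_text": "Intraosseous access placed on arrival.", "cpt_codes": []},
]

# Keyword pattern mirroring the look-up terms in the Methods ("intraosseous access" and "IO");
# word boundaries keep "IO" from matching inside longer words.
KEYWORDS = re.compile(r"\b(intraosseous access|IO)\b", re.IGNORECASE)
IO_CPT_CODE = "36680"

def flag_io_placement(record: dict) -> bool:
    """Return True if the note text or billed CPT codes suggest IO line placement."""
    return bool(KEYWORDS.search(record["note_text"])) or IO_CPT_CODE in record["cpt_codes"]

cohort = [r["mrn"] for r in records if flag_io_placement(r)]
print(cohort)  # ['A001', 'A003']
```

In practice, charts flagged this way would still require manual review to confirm IO placement and to abstract any complications.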
Results
We identified an overall complication rate of 2.7% (3/113), with only 1 patient showing isolated cutaneous complications from IO line placement. The complications in the first 2 patients included compartment syndrome following IO line placement in the right tibia and needle breakage during IO line placement. The third patient, a 30-year-old heart transplant recipient, developed tense bullae on the left leg 5 days after a resuscitative effort required IO access through the bilateral tibiae. The patient had received vasopressors as well as 750 mL of normal saline through these access points. Two days after resuscitation, she developed an enlarg
At a scheduled 7-month dermatology follow-up, the wound bed appeared to be healing well, with surrounding scarring and no residual bleeding or drainage (Figure 2), although the patient reported a protracted course of wound healing that required debridement for eschar formation and multiple follow-up appointments with the wound care service.
Comment
The most commonly reported complications with IO line placement result from fluid infiltration of the subcutaneous tissue secondary to catheter misplacement.1,3 Extravasated fluid may lead to tissue damage, compartment syndrome, and even tissue necrosis in some cases.1,4,5 Localized cellulitis and the formation of subcutaneous abscesses also have been reported, albeit rarely.3,5
In our retrospective cohort review, we identified an additional potential complication of IO line placement that has not been widely reported—development of large traumatic bullae. It is most likely that this patient’s IO catheter became dislodged, resulting in extravasation of fluids into the dermal and subcutaneous tissues.
Our overall complication rate of 2.7%, which included only 1 patient with a cutaneous complication, is consistent with the low complication rates reported previously following IO line placement.2 Given this low incidence, providers may be unaccustomed to recognizing such complications, which can lead to delayed or incorrect diagnosis. Although there are conditions in which IO insertion is contraindicated, including severe bone diseases (eg, osteogenesis imperfecta, osteomyelitis), overlying cellulitis, and bone fracture, these conditions are rare and can be avoided in most cases by choosing an alternative insertion site.2 Because of the widespread utility of this tool and its few contraindications, its use in hospitalized patients is increasing rapidly, underscoring the need for prompt recognition of potential complications.
Previous data on the incidence of traumatic blisters overlying bone fractures suggest several risk factors that may also identify patients at high risk for cutaneous IO complications secondary to the trauma of needle insertion,6 including impaired wound healing in patients with fragile lymphatics, peripheral vascular disease, diabetes, or collagen vascular diseases (eg, lupus, rheumatoid arthritis, Sjögren syndrome). Patients with these conditions should be monitored closely for the development of bullae.6 Although the patient highlighted in our study did not have a history of such conditions, her cardiac disease, recent resuscitation, and immunosuppression certainly could have contributed to suboptimal tissue integrity and repair after IO line placement.
Conclusion
Intraosseous access is a safe, effective, and reliable option for vascular access in both pediatric and adult populations that is widely used in both prehospital (ie, paramedic administered) and hospital settings, including intensive care units, emergency departments, and any acute situation where rapid vascular access is necessary. This retrospective chart review examining the incidence and types of cutaneous complications associated with IO line placement at a level I trauma center revealed a total complication rate similar to those reported in previous studies and also highlighted a unique postprocedural cutaneous finding of traumatic bullae. Although no unified management recommendations currently exist, providers should consider this complication in the differential for hospitalized patients with large, atypical, asymmetric bullae in the absence of an alternative explanation for such skin findings.
- Day MW. Intraosseous devices for intravascular access in adult trauma patients. Crit Care Nurse. 2011;31:76-90. doi:10.4037/ccn2011615
- Petitpas F, Guenezan J, Vendeuvre T, et al. Use of intra-osseous access in adults: a systematic review. Crit Care. 2016;20:102. doi:10.1186/s13054-016-1277-6
- Desforges JF, Fiser DH. Intraosseous infusion. N Engl J Med. 1990;322:1579-1581. doi:10.1056/NEJM199005313222206
- Simmons CM, Johnson NE, Perkin RM, et al. Intraosseous extravasation complication reports. Ann Emerg Med. 1994;23:363-366. doi:10.1016/S0196-0644(94)70053-2
- Paxton JH. Intraosseous vascular access: a review. Trauma. 2012;14:195-232. doi:10.1177/1460408611430175
- Uebbing CM, Walsh M, Miller JB, et al. Fracture blisters. West J Emerg Med. 2011;12:131-133. doi:10.1016/S0190-9622(09)80152-7
Practice Points
- Intraosseous (IO) access provides rapid vascular access for the delivery of fluids, drugs, and blood products in emergent situations.
- Bullae are potential complications from IO line placement.
Efficacy of Etanercept in the Treatment of Stevens-Johnson Syndrome and Toxic Epidermal Necrolysis
Regarded as dermatologic emergencies, Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) represent a spectrum of blistering skin diseases that have a high mortality rate. Because of a misguided immune response to medications or infections, CD8+ T lymphocytes release proinflammatory cytokines, giving rise to the extensive epidermal destruction seen in SJS and TEN. The exact pathogenesis of SJS and TEN is still poorly defined, but studies have proposed that T cells mediate keratinocyte (KC) apoptosis through perforin and granzyme release and activation of the Fas/Fas ligand (FasL). Functioning as a transmembrane death receptor in the tumor necrosis factor (TNF) superfamily, Fas (CD95) activates Fas-associated death domain protein, caspases, and nucleases, resulting in organized cell destruction. Likewise, perforin and granzymes also have been shown to play a similar role in apoptosis via activation of caspases.1
Evidence for the role of TNF-α in SJS and TEN has been supported by findings of elevated levels of TNF-α within the blister fluid, serum, and KC cell surface. Additionally, TNF-α has been shown to upregulate inducible nitric oxide synthase in KCs, causing an accumulation of nitric oxide and subsequent FasL-mediated cell death.1-3 Notably, studies have demonstrated a relative lack of lymphocytes in the tissue of TEN patients despite the extensive destruction that is observed, thus emphasizing the importance of amplification and cell signaling via inflammatory mediators such as TNF-α.1 In this proposed model, T cells release IFN-γ, causing KCs to release TNF-α that subsequently promotes the upregulation of the aforementioned FasL.1 Tumor necrosis factor α also may promote increased MHC class I complex deposition on KC surfaces that may play a role in perforin and granzyme-mediated apoptosis of KCs.1
There is still debate on the standard of care for the treatment of SJS and TEN, attributed to the absence of randomized controlled trials and the rarity of the disease as well as the numerous conflicting studies evaluating potential treatments.1,4 Despite conflicting data to support their use, supportive care and intravenous immunoglobulin (IVIG) continue to be common treatments for SJS and TEN in hospitals worldwide. Elucidation of the role of TNF-α has prompted the use of infliximab and etanercept. In a case series of Italian patients with TEN (average SCORTEN, 3.6) treated with the TNF-α antagonist etanercept, no mortality was observed, which was well below the calculated expected mortality of 46.9%.2 Our retrospective study compared the use of a TNF antagonist to other therapies in the treatment of SJS/TEN. Our data suggest that etanercept is a lifesaving and disease-modifying therapy.
Methods
Twenty-two patients with SJS/TEN were included in this analysis, comprising all patients at our 2 university centers—University of California, Los Angeles, and Keck-LA County-Norris Hospital at the University of Southern California, Los Angeles—who received a clinical diagnosis of SJS/TEN from a dermatologist with a confirmatory biopsy from 2013 to 2016. Every patient given a diagnosis of SJS/TEN at either university system from 2015 onward received an injection of etanercept, given the positive results reported by Paradisi et al.2
The 9 patients who presented from 2013 to 2014 to our 2 hospital systems and were given a diagnosis of SJS/TEN received either IVIG or supportive care alone and had an average body surface area (BSA) affected of 23%. The 13 patients who presented from 2015 to 2016 were treated with etanercept in the form of a 50-mg subcutaneous injection given once to the right upper arm. Of this group, 4 patients received dual therapy with both IVIG and etanercept. In the etanercept-treated group (etanercept alone and etanercept plus IVIG), the average BSA affected was 30%. At the time of preliminary diagnosis, all patient medications were evaluated for a possible temporal relationship to the onset of rash and were discontinued if felt to be causative. The causative agent and treatment course for each patient are summarized in Table 1.
Patients were monitored daily in the hospital for improvement, and time to re-epithelialization was measured. Re-epithelialization was defined as progressive healing with residual lesions (erosions, ulcers, or bullae) covering no more than 5% BSA and was contingent on the patient having no new lesions within 24 hours.5 SCORe of Toxic Epidermal Necrolysis (SCORTEN), a validated severity-of-illness score,6 was calculated by giving 1 point for each of the following criteria at the time of diagnosis: age ≥40 years, concurrent malignancy, heart rate ≥120 beats/min, serum blood urea nitrogen >27 mg/dL, serum bicarbonate <20 mEq/L, serum glucose >250 mg/dL, and detached or compromised BSA >10%. The total SCORTEN was correlated with the following risk of mortality, as supported by prior validation studies: SCORTEN of 0 to 1, 3.2%; SCORTEN of 2, 12.1%; SCORTEN of 3, 35.3%; SCORTEN of 4, 58.3%; SCORTEN of ≥5, >90%.
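For illustration, the scoring rule and mortality bands described above can be expressed directly in code. The following is a minimal Python sketch under our own assumptions: the function names are ours, and scores of 5 or higher are mapped to 90% even though the text states only >90%.

```python
def scorten(age, malignancy, heart_rate, bun_mg_dl, bicarb_meq_l, glucose_mg_dl, detached_bsa_pct):
    """Compute SCORTEN: 1 point for each criterion met at the time of diagnosis."""
    criteria = [
        age >= 40,
        malignancy,
        heart_rate >= 120,
        bun_mg_dl > 27,
        bicarb_meq_l < 20,
        glucose_mg_dl > 250,
        detached_bsa_pct > 10,
    ]
    return sum(criteria)

def predicted_mortality(score):
    """Map a SCORTEN value to the mortality bands quoted in the text."""
    if score <= 1:
        return 0.032
    return {2: 0.121, 3: 0.353, 4: 0.583}.get(score, 0.90)  # >=5 treated as 90% here

# Example: a 55-year-old without malignancy, heart rate 130 beats/min, BUN 20 mg/dL,
# bicarbonate 24 mEq/L, glucose 180 mg/dL, and 30% BSA detachment scores 3 points.
score = scorten(55, False, 130, 20, 24, 180, 30)
print(score, predicted_mortality(score))  # 3 0.353
```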
Results
A total of 13 patients received etanercept. The mean SCORTEN was 2.2. The observed mortality was 0%, which was markedly lower than the predicted mortality of 24.3% (as determined by linear interpolation). Of this cohort, 9 patients received etanercept alone (mean SCORTEN of 2.1, predicted mortality of 22.9%), whereas 4 patients received a combination of etanercept and IVIG (mean SCORTEN of 2.3, predicted mortality of 27.2%).
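The text does not specify exactly how the linear interpolation was performed; one plausible reading is interpolation between the mortality bands listed in the Methods when a summary SCORTEN falls between integers. The sketch below illustrates that reading with a hypothetical score of 2.5; the anchor points, including collapsing the 0 to 1 band to a single anchor and treating ≥5 as 90%, are our assumptions.

```python
# Anchor points taken from the mortality bands quoted in the Methods; the 0-1 band
# is collapsed to a single anchor at 1, and >=5 is anchored at 5 with 90%.
ANCHORS = [(1, 0.032), (2, 0.121), (3, 0.353), (4, 0.583), (5, 0.90)]

def interpolated_mortality(score: float) -> float:
    """Linearly interpolate predicted mortality for a possibly non-integer SCORTEN."""
    if score <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    if score >= ANCHORS[-1][0]:
        return ANCHORS[-1][1]
    for (s_lo, m_lo), (s_hi, m_hi) in zip(ANCHORS, ANCHORS[1:]):
        if s_lo <= score <= s_hi:
            frac = (score - s_lo) / (s_hi - s_lo)
            return m_lo + frac * (m_hi - m_lo)

# A hypothetical summary score of 2.5 interpolates to roughly 23.7% predicted mortality.
print(round(interpolated_mortality(2.5), 3))  # 0.237
```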
The 4 patients who received both etanercept and IVIG received dual therapy for varying reasons. In patient 2 (Table 1), the perceived severity of this case ultimately led to the decision to start IVIG in addition to etanercept, resulting in rapid recovery and discharge after only 1 week of hospitalization. Intravenous immunoglobulin also was given in patient 3 (SCORTEN of 4) and patient 6 (SCORTEN of 2) for progression of disease despite administration of etanercept, with subsequent cessation of progression after the addition of the second agent (IVIG). Patient 12 might have done well on etanercept monotherapy but was administered IVIG as a precautionary measure because of hospital treatment algorithms.
Nine patients did not receive etanercept. Of this group, 5 received IVIG and 4 were managed with supportive care alone. The average SCORTEN for this group was 2.4, only slightly higher than the group that received etanercept (Table 2). The mortality rate in this group was 33%, which was higher than the predicted mortality of 28.1%.
Re-epithelialization data were available for 8 patients who received etanercept. The average time to re-epithelialization for these patients was 8.9 days (range, 3–19 days). Of these patients, 2 received both IVIG and etanercept, with an average time to re-epithelialization of 13 days. For the 6 patients who received etanercept alone, the average time to re-epithelialization was 7.5 days. Re-epithelialization data were not available for any of the patients who received only IVIG or supportive care, but to our recollection, time to re-epithelialization in these patients ranged from 14 to 21 days.
The clinical course of the 13 patients after the administration of a single dose of etanercept was remarkable, with a complete absence of mortality and an increase in speed of recovery in most patients receiving this intervention (time to re-epithelialization, 3–19 days). We also observed another interesting trend among our patients treated with etanercept: treatment appeared to be less effective when IVIG and/or steroids were given before etanercept and more effective when etanercept was given early. For patients 1, 4, 5, 7, 9, and 11 (Table 1), no IVIG or other immunosuppressive therapy had been given before etanercept was administered; in these 6 patients, the average time to re-epithelialization after etanercept administration was 7.5 days. Average time to re-epithelialization, unfortunately, is not available for the patients who were not treated with etanercept. In addition, as shown in the Figure, the depth of denudation in some patients was markedly more superficial than what typically is observed clinically in TEN treated with other immunomodulatory therapies such as IVIG or prednisone or with supportive care alone. In these 2 patients with superficial desquamation—patients 7 and 9—etanercept notably was given within 6 hours of the onset of skin pain.
Comment
There is no definitive gold standard treatment of SJS, SJS/TEN overlap, or TEN. However, generally agreed upon management includes immediate discontinuation of the offending medication and supportive therapy with aggressive electrolyte replacement and wound care. Management in a burn unit or intensive care unit is recommended in severe cases. Contention over the efficacy of various medications in the treatment of SJS and TEN continues and largely is due to the rarity of SJS and TEN; studies are small and almost all lack randomization. Therapies that have been used include high-dose steroids, IVIG, plasmapheresis, cyclophosphamide, cyclosporine A, and TNF inhibitors (eg, etanercept, infliximab).1
Evidence for the use of anti–TNF-α antibodies has been limited thus far, with most of the literature focusing on infliximab and etanercept. Adalimumab, a fully human monoclonal antibody, has no reported cases in the dermatologic literature for use in patients with SJS/TEN; in fact, 2 case reports have documented adalimumab paradoxically causing SJS. In both cases, adalimumab was stopped and the patients responded to intravenous corticosteroids and infliximab.7,8 Similarly, thalidomide has not proven to be a promising anti–TNF-α agent for the treatment of SJS/TEN; in the only attempted randomized controlled trial for SJS and TEN, thalidomide appeared to increase mortality, leading to termination of the trial before its planned end date.9 In contrast, several case reports and a few case series highlight potentially efficacious use of infliximab and etanercept for the treatment of SJS/TEN.10-13 In 2002, Fischer et al10 reported the first case of TEN treated successfully with a single dose of infliximab 5 mg/kg. Kreft et al14 reported on etoricoxib-induced TEN treated with infliximab 5 mg/kg, which led to re-epithelialization within 5 weeks (notably, a 5-week re-epithelialization time is not necessarily an improvement).
In 2005, Hunger et al3 demonstrated the release of TNF-α by KCs in the epidermis and by inflammatory cells in the dermis of a TEN patient. Twenty-four hours after the administration of infliximab 5 mg/kg, TNF-α was found to be below normal and epidermal detachment ceased.3 Wojtkiewicz et al13 demonstrated benefit following an infusion of infliximab 5 mg/kg in a patient whose disease continued to progress despite treatment with dexamethasone and 1.8 g/kg of IVIG.
Two subsequent case series added further support for the efficacy of infliximab in the treatment of TEN. Patmanidis et al15 and Gaitanis et al16 reported similar results in 4 patients, each treated with infliximab 5 mg/kg immediately followed by initiation of high-dose IVIG (2 g/kg over 5 days). Zárate-Correa et al17 reported a 0% mortality rate and near-complete re-epithelialization after 5 to 14 days in 4 patients treated with a single 300-mg dose of infliximab.
However, the success of infliximab in the treatment of TEN has been countered by a pilot study by Paquet et al,18 which compared N-acetylcysteine 150 mg/kg alone vs N-acetylcysteine plus infliximab 5 mg/kg in 10 TEN patients. The study demonstrated no benefit at 48 hours in the group given infliximab, the time frame in which prior case reports had claimed infliximab's benefit was observed. Similarly, there was no effect on mortality for either treatment modality as assessed by illness auxiliary score.18
Evidence in support of the use of etanercept in the treatment of SJS/TEN is mounting, and some centers have begun to use it as the first-choice therapy for SJS/TEN. The first case was reported by Famularo et al,19 in which a patient with TEN was given 2 doses of etanercept 25 mg after failure to improve with prednisolone 1 mg/kg. The patient showed near-complete and rapid re-epithelialization in 6 days before death due to disseminated intravascular coagulation 10 days after admission.19 Gubinelli et al20 and Sadighha21 independently reported cases of TEN and TEN/acute generalized exanthematous pustulosis overlap treated with a total of 50 mg of etanercept, demonstrating rapid cessation of lesion progression. Didona et al22 found similar benefit using etanercept 50 mg to treat TEN secondary to rituximab after failure to improve with prednisone and cyclophosphamide. Treatment of TEN with etanercept in an HIV-positive patient also has been reported; Lee et al23 described a patient who was given 50-mg and 25-mg injections on days 3 and 5 of hospitalization, respectively, with re-epithelialization occurring by day 8. Finally, Owczarczyk-Saczonek et al24 reported a case of SJS in a patient with a 4-year history of etanercept and sulfasalazine treatment for rheumatoid arthritis; sulfasalazine was stopped, but the patient was continued on etanercept until resolution of skin and mucosal symptoms. However, it is important to consider the possibility of publication bias among these cases selected for their positive outcomes.
Perhaps the most compelling literature regarding the use of etanercept for TEN was described in a case series by Paradisi et al.2 This study included 10 patients with TEN, all of whom demonstrated complete re-epithelialization shortly after receiving etanercept 50 mg. The average SCORTEN was 3.6 (range, 2–6). Eight patients in this study had severe comorbidities, and all 10 patients survived, with a time to re-epithelialization ranging from 7 to 20 days.2 Additionally, a randomized controlled trial showed that 38 etanercept-treated patients had improved mortality (P=.266) and re-epithelialization time (P=.01) compared with patients treated with intravenous methylprednisolone.25
Limitations of our study are similar to those of other reports of SJS/TEN and include the small number of cases and lack of randomization. Additionally, we do not have data available for all patients on the time between onset of disease and treatment initiation. Because of these challenges, the data presented in this case series are observational only. The patients treated with etanercept alone had a slightly lower SCORTEN than the group that received IVIG or supportive care alone (2.1 and 2.4, respectively); however, the etanercept-only group had greater epidermal detachment (33% BSA) than the non-etanercept group (23% BSA).
Conclusion
Although treatment with etanercept lacks the support of a randomized controlled trial, similar to all other treatments currently used for SJS and TEN, preliminary reports highlight a benefit in disease progression and improvement in time to re-epithelialization. In particular, if etanercept 50 mg subcutaneously is given as monotherapy or is given early in the disease course (prior to other therapies being attempted and ideally within 6 hours of presentation), our data suggest an even greater trend toward improved mortality and decreased time to re-epithelialization. Additionally, our findings may suggest that in some patients, etanercept monotherapy is not an adequate intervention but the addition of IVIG may be helpful; however, the senior author (S.W.) notes anecdotally that in his experience with the patients treated at the University of California Los Angeles, the order of administration of combination therapies—etanercept followed by IVIG—was important in addition to the choice of therapy. These findings are promising enough to warrant a multicenter randomized controlled trial comparing the efficacy of etanercept to other more commonly used treatments for this spectrum of disease, including IVIG and/or cyclosporine. Based on the data presented in this case series, including the 13 patients who received etanercept and had a 0% mortality rate, etanercept may be viewed as a targeted therapeutic intervention for patients with SJS and TEN.
- Pereira FA, Mudgil AV, Rosmarin DM. Toxic epidermal necrolysis. J Am Acad Dermatol. 2007;56:181-200.
- Paradisi A, Abeni D, Bergamo F, et al. Etanercept therapy for toxic epidermal necrolysis. J Am Acad Dermatol. 2014;71:278-283.
- Hunger RE, Hunziker T, Buettiker U, et al. Rapid resolution of toxic epidermal necrolysis with anti-TNF-α treatment. J Allergy Clin Immunol. 2005;116:923-924.
- Worswick S, Cotliar J. Stevens-Johnson syndrome and toxic epidermal necrolysis: a review of treatment options. Dermatol Ther. 2011;24:207-218.
- Wallace AB. The exposure treatment of burns. Lancet Lond Engl. 1951;1:501-504.
- Bastuji-Garin S, Fouchard N, Bertocchi M, et al. SCORTEN: a severity-of-illness score for toxic epidermal necrolysis. J Invest Dermatol. 2000;115:149-153.
- Mounach A, Rezqi A, Nouijai A, et al. Stevens-Johnson syndrome complicating adalimumab therapy in rheumatoid arthritis disease. Rheumatol Int. 2013;33:1351-1353.
- Salama M, Lawrance I-C. Stevens-Johnson syndrome complicating adalimumab therapy in Crohn’s disease. World J Gastroenterol. 2009;15:4449-4452.
- Wolkenstein P, Latarjet J, Roujeau JC, et al. Randomised comparison of thalidomide versus placebo in toxic epidermal necrolysis. Lancet Lond Engl. 1998;352:1586-1589.
- Fischer M, Fiedler E, Marsch WC, et al. Antitumour necrosis factor-α antibodies (infliximab) in the treatment of a patient with toxic epidermal necrolysis. Br J Dermatol. 2002;146:707-709.
- Meiss F, Helmbold P, Meykadeh N, et al. Overlap of acute generalized exanthematous pustulosis and toxic epidermal necrolysis: response to antitumour necrosis factor-alpha antibody infliximab: report of three cases. J Eur Acad Dermatol Venereol. 2007;21:717-719.
- Al-Shouli S, Abouchala N, Bogusz MJ, et al. Toxic epidermal necrolysis associated with high intake of sildenafil and its response to infliximab. Acta Derm Venereol. 2005;85:534-535.
- Wojtkiewicz A, Wysocki M, Fortuna J, et al. Beneficial and rapid effect of infliximab on the course of toxic epidermal necrolysis. Acta Derm Venereol. 2008;88:420-421.
- Kreft B, Wohlrab J, Bramsiepe I, et al. Etoricoxib-induced toxic epidermal necrolysis: successful treatment with infliximab. J Dermatol. 2010;37:904-906.
- Patmanidis K, Sidiras A, Dolianitis K, et al. Combination of infliximab and high-dose intravenous immunoglobulin for toxic epidermal necrolysis: successful treatment of an elderly patient. Case Rep Dermatol Med. 2012;2012:915314.
- Gaitanis G, Spyridonos P, Patmanidis K, et al. Treatment of toxic epidermal necrolysis with the combination of infliximab and high-dose intravenous immunoglobulin. Dermatol Basel Switz. 2012;224:134-139.
- Zárate-Correa LC, Carrillo-Gómez DC, Ramírez-Escobar AF, et al. Toxic epidermal necrolysis successfully treated with infliximab. J Investig Allergol Clin Immunol. 2013;23:61-63.
- Paquet P, Jennes S, Rousseau AF, et al. Effect of N-acetylcysteine combined with infliximab on toxic epidermal necrolysis: a proof-of-concept study. Burns J Int Soc Burn Inj. 2014;40:1707-1712.
- Famularo G, Dona BD, Canzona F, et al. Etanercept for toxic epidermal necrolysis. Ann Pharmacother. 2007;41:1083-1084.
- Gubinelli E, Canzona F, Tonanzi T, et al. Toxic epidermal necrolysis successfully treated with etanercept. J Dermatol. 2009;36:150-153.
- Sadighha A. Etanercept in the treatment of a patient with acute generalized exanthematous pustulosis/toxic epidermal necrolysis: definition of a new model based on translational research. Int J Dermatol. 2009;48:913-914.
- Didona D, Paolino G, Garcovich S, et al. Successful use of etanercept in a case of toxic epidermal necrolysis induced by rituximab. J Eur Acad Dermatol Venereol. 2016;30:E83-E84.
- Lee Y-Y, Ko J-H, Wei C-H, et al. Use of etanercept to treat toxic epidermal necrolysis in a human immunodeficiency virus-positive patient. Dermatol Sin. 2013;31:78-81.
- Owczarczyk-Saczonek A, Zdanowska N, Znajewska-Pander A, et al. Stevens-Johnson syndrome in a patient with rheumatoid arthritis during long-term etanercept therapy. J Dermatol Case Rep. 2016;10:14-16.
- Wang CW, Yang LY, Chen CB, et al. Randomized, controlled trial of TNF-α antagonist in CTL mediated severe cutaneous adverse reactions. J Clin Invest. 2018;128:985-996.
Regarded as dermatologic emergencies, Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) represent a spectrum of blistering skin diseases that have a high mortality rate. Because of a misguided immune response to medications or infections, CD8+ T lymphocytes release proinflammatory cytokines, giving rise to the extensive epidermal destruction seen in SJS and TEN. The exact pathogenesis of SJS and TEN is still poorly defined, but studies have proposed that T cells mediate keratinocyte (KC) apoptosis through perforin and granzyme release and activation of the Fas/Fas ligand (FasL). Functioning as a transmembrane death receptor in the tumor necrosis factor (TNF) superfamily, Fas (CD95) activates Fas-associated death domain protein, caspases, and nucleases, resulting in organized cell destruction. Likewise, perforin and granzymes also have been shown to play a similar role in apoptosis via activation of caspases.1
Evidence for the role of TNF-α in SJS and TEN has been supported by findings of elevated levels of TNF-α within the blister fluid, serum, and KC cell surface. Additionally, TNF-α has been shown to upregulate inducible nitric oxide synthase in KCs, causing an accumulation of nitric oxide and subsequent FasL-mediated cell death.1-3 Notably, studies have demonstrated a relative lack of lymphocytes in the tissue of TEN patients despite the extensive destruction that is observed, thus emphasizing the importance of amplification and cell signaling via inflammatory mediators such as TNF-α.1 In this proposed model, T cells release IFN-γ, causing KCs to release TNF-α that subsequently promotes the upregulation of the aforementioned FasL.1 Tumor necrosis factor α also may promote increased MHC class I complex deposition on KC surfaces that may play a role in perforin and granzyme-mediated apoptosis of KCs.1
There is still debate on the standard of care for the treatment of SJS and TEN, attributed to the absence of randomized controlled trials and the rarity of the disease as well as the numerous conflicting studies evaluating potential treatments.1,4 Despite conflicting data to support their use, supportive care and intravenous immunoglobulin (IVIG) continue to be common treatments for SJS and TEN in hospitals worldwide. Elucidation of the role of TNF-α has prompted the use of infliximab and etanercept. In a case series of Italian patients with TEN (average SCORTEN, 3.6) treated with the TNF-α antagonist etanercept, no mortality was observed, which was well below the calculated expected mortality of 46.9%.2 Our retrospective study compared the use of a TNF antagonist to other therapies in the treatment of SJS/TEN. Our data suggest that etanercept is a lifesaving and disease-modifying therapy.
Methods
Twenty-two patients with SJS/TEN were included in this analysis. This included all patients who carried a clinical diagnosis of SJS/TEN with a confirmatory biopsy at our 2 university centers—University of California, Los Angeles, and Keck-LA County-Norris Hospital at the University of Southern California, Los Angeles—from 2013 to 2016. The diagnosis was rendered when a clinical diagnosis of SJS/TEN was given by a dermatologist and a confirmatory biopsy was performed. Every patient given a diagnosis of SJS/TEN at either university system from 2015 onward received an injection of etanercept given the positive results reported by Paradisi et al.2
The 9 patients who presented from 2013 to 2014 to our 2 hospital systems and were given a diagnosis of SJS/TEN received either IVIG or supportive care alone and had an average body surface area (BSA) affected of 23%. The 13 patients who presented from 2015 to 2016 were treated with etanercept in the form of a 50-mg subcutaneous injection given once to the right upper arm. Of this group, 4 patients received dual therapy with both IVIG and etanercept. In the etanercept-treated group (etanercept alone and etanercept plus IVIG), the average BSA affected was 30%. At the time of preliminary diagnosis, all patient medications were evaluated for a possible temporal relationship to the onset of rash and were discontinued if felt to be causative. The causative agent and treatment course for each patient is summarized in Table 1.
Patients were monitored daily in the hospital for improvement, and time to re-epithelialization was measured. Re-epithelialization was defined as progressive healing with residual lesions (erosions, ulcers, or bullae) covering no more than 5% BSA and was contingent on the patient having no new lesions within 24 hours.5 SCORe of Toxic Epidermal Necrolysis (SCORTEN), a validated severity-of-illness score,6 was calculated by giving 1 point for each of the following criteria at the time of diagnosis: age ≥40 years, concurrent malignancy, heart rate ≥120 beats/min, serum blood urea nitrogen >27 mg/dL, serum bicarbonate <20 mEq/L, serum glucose >250 mg/dL, and detached or compromised BSA >10%. The total SCORTEN was correlated with the following risk of mortality as supported by prior validation studies: SCORTEN of 0 to 1, 3.2%; SCORTEN of 2, 12.1%; SCORTEN of 3, 35.3%; SCORTEN of 4, 58.3%; SCORTEN of ≥5, >90%.
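For clarity, the scoring rule described above can be expressed compactly in code. The sketch below is a minimal illustration, assuming a simple function interface and input format of our own choosing; it is not taken from the SCORTEN publication or from our study procedures.

```python
# Minimal sketch of SCORTEN scoring as described above; the function name
# and input format are illustrative assumptions, not from the original scale.
def scorten(age, malignancy, heart_rate, bun, bicarbonate, glucose, detached_bsa):
    """Return SCORTEN: 1 point per criterion met at the time of diagnosis."""
    criteria = [
        age >= 40,            # years
        malignancy,           # concurrent malignancy (True/False)
        heart_rate >= 120,    # beats/min
        bun > 27,             # serum blood urea nitrogen, mg/dL
        bicarbonate < 20,     # serum bicarbonate, mEq/L
        glucose > 250,        # serum glucose, mg/dL
        detached_bsa > 10,    # detached or compromised BSA, %
    ]
    return sum(bool(c) for c in criteria)

# Predicted mortality bands quoted in the text (a SCORTEN of >=5 is ">90%").
PREDICTED_MORTALITY = {0: 3.2, 1: 3.2, 2: 12.1, 3: 35.3, 4: 58.3, 5: 90.0}

# Example: a 55-year-old without malignancy, heart rate 110, BUN 30 mg/dL,
# bicarbonate 22 mEq/L, glucose 180 mg/dL, 15% BSA detached -> SCORTEN 3.
score = scorten(55, False, 110, 30, 22, 180, 15)
print(score, PREDICTED_MORTALITY[min(score, 5)])
```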
Results
A total of 13 patients received etanercept. The mean SCORTEN was 2.2. The observed mortality was 0%, which was markedly lower than the predicted mortality of 24.3% (as determined by linear interpolation). Of this cohort, 9 patients received etanercept alone (mean SCORTEN of 2.1, predicted mortality of 22.9%), whereas 4 patients received a combination of etanercept and IVIG (mean SCORTEN of 2.3, predicted mortality of 27.2%).
The 4 patients who received both etanercept and IVIG received dual therapy for varying reasons. In patient 2 (Table 1), the perceived severity of this case ultimately led to the decision to start IVIG in addition to etanercept, resulting in rapid recovery and discharge after only 1 week of hospitalization. Intravenous immunoglobulin also was given in patient 3 (SCORTEN of 4) and patient 6 (SCORTEN of 2) for progression of disease despite administration of etanercept, with subsequent cessation of progression after the addition of the second agent (IVIG). Patient 12 might have done well on etanercept monotherapy but was administered IVIG as a precautionary measure because of hospital treatment algorithms.
Nine patients did not receive etanercept. Of this group, 5 received IVIG and 4 were managed with supportive care alone. The average SCORTEN for this group was 2.4, only slightly higher than the group that received etanercept (Table 2). The mortality rate in this group was 33%, which was higher than the predicted mortality of 28.1%.
Re-epithelialization data were available for 8 patients who received etanercept. The average time to re-epithelialization for these patients was 8.9 days (range, 3-19 days). Of these patients, 2 received both IVIG and etanercept, with an average time to re-epithelialization of 13 days. For the 6 patients who received etanercept alone, the average time to re-epithelialization was 7.5 days. Formal re-epithelialization data were not recorded for the patients who received only IVIG or supportive care, but to our recollection their times to re-epithelialization ranged from 14 to 21 days.
The clinical course of the 13 patients after the administration of a single dose of etanercept was remarkable: there was a complete absence of mortality and an increase in speed of recovery in most patients receiving this intervention (time to re-epithelialization, 3-19 days). We also observed another interesting trend among our patients treated with etanercept: treatment appeared less effective when IVIG and/or steroids were given before etanercept, and more effective when etanercept was given promptly. For patients 1, 4, 5, 7, 9, and 11 (Table 1), no prior IVIG or other immunosuppressive therapy had been given before etanercept was administered. In these 6 patients, the average time to re-epithelialization after etanercept administration was 7.5 days; a comparable average is unfortunately not available for the patients who were not treated with etanercept. In addition, as shown in the Figure, the depth of denudation in some patients was markedly more superficial than what is typically observed clinically in TEN after administration of other immunomodulatory therapies such as IVIG or prednisone or with supportive care alone. In the 2 patients with superficial desquamation (patients 7 and 9), etanercept notably was given within 6 hours of the onset of skin pain.
Comment
There is no definitive gold standard treatment of SJS, SJS/TEN overlap, or TEN. However, generally agreed upon management includes immediate discontinuation of the offending medication and supportive therapy with aggressive electrolyte replacement and wound care. Management in a burn unit or intensive care unit is recommended in severe cases. Contention over the efficacy of various medications in the treatment of SJS and TEN continues and largely is due to the rarity of SJS and TEN; studies are small and almost all lack randomization. Therapies that have been used include high-dose steroids, IVIG, plasmapheresis, cyclophosphamide, cyclosporine A, and TNF inhibitors (eg, etanercept, infliximab).1
Evidence for the use of anti–TNF-α antibodies has been limited thus far, with most of the literature focusing on infliximab and etanercept. Adalimumab, a fully human monoclonal antibody, has no reported cases in the dermatologic literature for use in patients with SJS/TEN. Two case reports of adalimumab paradoxically causing SJS have been documented. In both cases, adalimumab was stopped and the patients responded to intravenous corticosteroids and infliximab.7,8 Similarly, thalidomide has not proven to be a promising anti–TNF-α agent for the treatment of SJS/TEN. In the only attempted randomized controlled trial for SJS and TEN, thalidomide appeared to increase mortality, and the trial was terminated prior to its planned end date.9

Infliximab and etanercept have several case reports and a few case series highlighting potentially efficacious application of TNF-α inhibitors for the treatment of SJS/TEN.10-13 In 2002, Fischer et al10 reported the first case of TEN treated successfully with a single dose of infliximab 5 mg/kg. Kreft et al14 reported on etoricoxib-induced TEN that was treated with infliximab 5 mg/kg, which led to re-epithelialization within 5 weeks (notably, a 5-week re-epithelialization time is not necessarily an improvement).
In 2005, Hunger et al3 demonstrated the release of TNF-α by KCs in the epidermis and by inflammatory cells in the dermis of a TEN patient. Twenty-four hours after the administration of infliximab 5 mg/kg, TNF-α was found to be below normal and epidermal detachment had ceased.3 Wojtkiewicz et al13 demonstrated benefit following an infusion of infliximab 5 mg/kg in a patient whose disease continued to progress despite treatment with dexamethasone and 1.8 g/kg of IVIG.
Two subsequent case series added further support for the efficacy of infliximab in the treatment of TEN. Patmanidis et al15 and Gaitanis et al16 reported similar results in 4 patients, each treated with infliximab 5 mg/kg immediately followed by initiation of high-dose IVIG (2 g/kg over 5 days). Zárate-Correa et al17 reported a 0% mortality rate and near-complete re-epithelialization after 5 to 14 days in 4 patients treated with a single 300-mg dose of infliximab.
However, the success of infliximab in the treatment of TEN has been countered by the pilot study by Paquet et al,18 which compared 150 mg/kg of N-acetylcysteine alone vs N-acetylcysteine plus infliximab 5 mg/kg in 10 TEN patients. The study demonstrated no benefit at 48 hours in the group given infliximab, the time frame in which prior case reports had claimed infliximab's benefit was observed. Similarly, there was no effect on mortality for either treatment modality as assessed by illness auxiliary score.18
Evidence in support of the use of etanercept in the treatment of SJS/TEN is mounting, and some centers have begun to use it as the first-choice therapy for SJS/TEN. Famularo et al19 reported the first case, in which a patient with TEN was given 2 doses of etanercept 25 mg after failure to improve with prednisolone 1 mg/kg. The patient showed near-complete and rapid re-epithelialization in 6 days before dying of disseminated intravascular coagulation 10 days after admission.19 Gubinelli et al20 and Sadighha21 independently reported cases of TEN and TEN/acute generalized exanthematous pustulosis overlap treated with a total of 50 mg of etanercept, demonstrating rapid cessation of lesion progression. Didona et al22 found similar benefit using etanercept 50 mg to treat TEN secondary to rituximab after failure to improve with prednisone and cyclophosphamide. Treatment of TEN with etanercept in an HIV-positive patient also has been reported: Lee et al23 described a patient who was administered 50-mg and 25-mg injections on days 3 and 5 of hospitalization, respectively, with re-epithelialization occurring by day 8. Finally, Owczarczyk-Saczonek et al24 reported a case of SJS in a patient with a 4-year history of etanercept and sulfasalazine treatment for rheumatoid arthritis; sulfasalazine was stopped, but the patient was continued on etanercept until resolution of skin and mucosal symptoms. However, it is important to consider the possibility of publication bias among these cases selected for their positive outcomes.
Perhaps the most compelling literature regarding the use of etanercept for TEN is the case series by Paradisi et al.2 This study included 10 patients with TEN, all of whom demonstrated complete re-epithelialization shortly after receiving etanercept 50 mg. The average SCORTEN was 3.6 (range, 2-6). Eight patients in this study had severe comorbidities, and all 10 patients survived, with time to re-epithelialization ranging from 7 to 20 days.2 Additionally, a randomized controlled trial of 38 etanercept-treated patients showed numerically lower mortality (P=.266) and significantly faster re-epithelialization (P=.01) compared with patients treated with intravenous methylprednisolone.25

Limitations of our study are similar to those of other reports of SJS/TEN and include the small number of cases and lack of randomization. Additionally, we do not have data available for all patients on the time between onset of disease and treatment initiation. Because of these challenges, the data presented in this case series are observational only. Additionally, the patients treated with etanercept alone had a slightly lower SCORTEN compared with the group that received IVIG or supportive care alone (2.1 and 2.4, respectively). However, the etanercept-only group actually had greater epidermal detachment (33% BSA) compared with the non-etanercept group (23%).
Conclusion
Although treatment with etanercept lacks the support of a randomized controlled trial, similar to all other treatments currently used for SJS and TEN, preliminary reports highlight a benefit in disease progression and an improvement in time to re-epithelialization. In particular, when etanercept 50 mg subcutaneously is given as monotherapy or is given early in the disease course (prior to other therapies being attempted and ideally within 6 hours of presentation), our data suggest an even greater trend toward improved mortality and decreased time to re-epithelialization. Additionally, our findings may suggest that in some patients etanercept monotherapy is not an adequate intervention and the addition of IVIG may be helpful; however, the senior author (S.W.) notes anecdotally that, in his experience with the patients treated at the University of California, Los Angeles, the order of administration of combination therapy (etanercept followed by IVIG) was important in addition to the choice of therapy. These findings are promising enough to warrant a multicenter randomized controlled trial comparing the efficacy of etanercept with other more commonly used treatments for this spectrum of disease, including IVIG and/or cyclosporine. Based on the data presented in this case series, including the 13 patients who received etanercept and had a 0% mortality rate, etanercept may be viewed as a targeted therapeutic intervention for patients with SJS and TEN.
- Pereira FA, Mudgil AV, Rosmarin DM. Toxic epidermal necrolysis. J Am Acad Dermatol. 2007;56:181-200.
- Paradisi A, Abeni D, Bergamo F, et al. Etanercept therapy for toxic epidermal necrolysis. J Am Acad Dermatol. 2014;71:278-283.
- Hunger RE, Hunziker T, Buettiker U, et al. Rapid resolution of toxic epidermal necrolysis with anti-TNF-α treatment. J Allergy Clin Immunol. 2005;116:923-924.
- Worswick S, Cotliar J. Stevens-Johnson syndrome and toxic epidermal necrolysis: a review of treatment options. Dermatol Ther. 2011;24:207-218.
- Wallace AB. The exposure treatment of burns. Lancet Lond Engl. 1951;1:501-504.
- Bastuji-Garin S, Fouchard N, Bertocchi M, et al. SCORTEN: a severity-of-illness score for toxic epidermal necrolysis. J Invest Dermatol. 2000;115:149-153.
- Mounach A, Rezqi A, Nouijai A, et al. Stevens-Johnson syndrome complicating adalimumab therapy in rheumatoid arthritis disease. Rheumatol Int. 2013;33:1351-1353.
- Salama M, Lawrance I-C. Stevens-Johnson syndrome complicating adalimumab therapy in Crohn’s disease. World J Gastroenterol. 2009;15:4449-4452.
- Wolkenstein P, Latarjet J, Roujeau JC, et al. Randomised comparison of thalidomide versus placebo in toxic epidermal necrolysis. Lancet Lond Engl. 1998;352:1586-1589.
- Fischer M, Fiedler E, Marsch WC, et al. Antitumour necrosis factor-α antibodies (infliximab) in the treatment of a patient with toxic epidermal necrolysis. Br J Dermatol. 2002;146:707-709.
- Meiss F, Helmbold P, Meykadeh N, et al. Overlap of acute generalized exanthematous pustulosis and toxic epidermal necrolysis: response to antitumour necrosis factor-alpha antibody infliximab: report of three cases. J Eur Acad Dermatol Venereol. 2007;21:717-719.
- Al-Shouli S, Abouchala N, Bogusz MJ, et al. Toxic epidermal necrolysis associated with high intake of sildenafil and its response to infliximab. Acta Derm Venereol. 2005;85:534-535.
- Wojtkiewicz A, Wysocki M, Fortuna J, et al. Beneficial and rapid effect of infliximab on the course of toxic epidermal necrolysis. Acta Derm Venereol. 2008;88:420-421.
- Kreft B, Wohlrab J, Bramsiepe I, et al. Etoricoxib-induced toxic epidermal necrolysis: successful treatment with infliximab. J Dermatol. 2010;37:904-906.
- Patmanidis K, Sidiras A, Dolianitis K, et al. Combination of infliximab and high-dose intravenous immunoglobulin for toxic epidermal necrolysis: successful treatment of an elderly patient. Case Rep Dermatol Med. 2012;2012:915314.
- Gaitanis G, Spyridonos P, Patmanidis K, et al. Treatment of toxic epidermal necrolysis with the combination of infliximab and high-dose intravenous immunoglobulin. Dermatol Basel Switz. 2012;224:134-139.
- Zárate-Correa LC, Carrillo-Gómez DC, Ramírez-Escobar AF, et al. Toxic epidermal necrolysis successfully treated with infliximab. J Investig Allergol Clin Immunol. 2013;23:61-63.
- Paquet P, Jennes S, Rousseau AF, et al. Effect of N-acetylcysteine combined with infliximab on toxic epidermal necrolysis. A proof-of-concept study. Burns J Int Soc Burn Inj. 2014;40:1707-1712.
- Famularo G, Dona BD, Canzona F, et al. Etanercept for toxic epidermal necrolysis. Ann Pharmacother. 2007;41:1083-1084.
- Gubinelli E, Canzona F, Tonanzi T, et al. Toxic epidermal necrolysis successfully treated with etanercept. J Dermatol. 2009;36:150-153.
- Sadighha A. Etanercept in the treatment of a patient with acute generalized exanthematous pustulosis/toxic epidermal necrolysis: definition of a new model based on translational research. Int J Dermatol. 2009;48:913-914.
- Didona D, Paolino G, Garcovich S, et al. Successful use of etanercept in a case of toxic epidermal necrolysis induced by rituximab. J Eur Acad Dermatol Venereol. 2016;30:E83-E84.
- Lee Y-Y, Ko J-H, Wei C-H, et al. Use of etanercept to treat toxic epidermal necrolysis in a human immunodeficiency virus-positive patient. Dermatol Sin. 2013;31:78-81.
- Owczarczyk-Saczonek A, Zdanowska N, Znajewska-Pander A, et al. Stevens-Johnson syndrome in a patient with rheumatoid arthritis during long-term etanercept therapy. J Dermatol Case Rep. 2016;10:14-16.
- Wang CW, Yang LY, Chen CB, et al. Randomized, controlled trial of TNF-α antagonist in CTL mediated severe cutaneous adverse reactions. J Clin Invest. 2018;128:985-996.
Practice Points
- Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are life-threatening dermatologic emergencies without a universally accepted treatment.
- Results of this study support the use of single-dose subcutaneous etanercept 50 mg as a potentially lifesaving therapy for patients with SJS/TEN.
Rates and Characteristics of Medical Malpractice Claims Against Hospitalists
The prospect of facing a medical malpractice claim is a source of apprehension for physicians and affects physician behavior, including the practice of defensive medicine.1-3 Overall, annual defensive medicine costs have been estimated at $45.6 billion,4 and surveys of hospitalists indicate that 13.0% to 37.5% of hospitalist healthcare costs involve defensive medicine.5,6 Despite the impact of malpractice concerns on hospitalist practice and the unprecedented growth of the field of hospital medicine, relatively few studies have examined the liability environment surrounding hospitalist practice.7,8 The specific issue of malpractice claims rates faced by hospitalists has received even less attention in the medical literature.8
A better understanding of the contributing factors and other attributes of malpractice claims can help guide patient safety initiatives and inform hospitalists’ level of concern regarding liability. Although most medical errors do not result in a malpractice claim,9,10 the majority of malpractice claims in which there is an indemnity payment involve medical injury due to clinician error.11 Even malpractice claims that do not result in an indemnity payment represent opportunities to identify patient safety and risk management vulnerabilities.12
We used a national malpractice claims database to analyze the characteristics of claims made against hospitalists, including claims rates. Given the importance of interdisciplinary collaboration to hospital medicine,13,14 we also analyzed the other types of providers named in hospitalist claims. To provide context for understanding hospitalist liability data, we present data on other specialties. We also describe a model to predict whether hospitalist malpractice claims will close with an indemnity payment.
METHODS
Data Sources and Elements
This analysis used a repository of malpractice claims maintained by CRICO, the captive malpractice insurer of the Harvard-affiliated medical institutions. This database, the Comparative Benchmarking System, contains approximately 31% of malpractice claims in the United States, drawn from every state; case characteristics, including contributing factors, are assigned through structured manual review of each claim.
Injury severity was based on a widely used scale developed for malpractice claims by the National Association of Insurance Commissioners.16 Low injury severity included emotional injury and temporary insignificant injury. Medium injury severity included temporary minor, temporary major, and permanent minor injury. High injury severity included permanent significant injury through death. Because this study used a database assembled for operational and patient safety purposes and was not human subjects research, institutional review board approval was not needed.
Study Cohort
Malpractice claims included formal lawsuits or written requests for compensation for negligent medical care. Hospitalist claims were defined as those in which hospital medicine was the primary responsible service.
Statistical Analysis
Malpractice claims rates were treated as Poisson rates and compared using a Z-test. Claims rates are expressed as claims per 100 physician-years, where each physician-year represents 1 year of coverage of one physician by the medical malpractice carrier whose data were used; during insured physician-years, physicians were in practice and could have been subject to a malpractice claim that would have been included in our data. Claims rates are based on the subset of the malpractice claims in the study for which the number of physician-years of coverage is available, representing 8.2% of hospitalist claims and 11.6% of all claims.
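As a rough illustration of this rate comparison, the sketch below treats claim counts as Poisson counts over physician-year exposures and applies a two-sample Z-test on the rates. The counts, exposures, and the plain normal-approximation formula are illustrative assumptions and may differ in detail from the software procedures actually used.

```python
# Hedged sketch: two-sample Z-test for Poisson rates per 100 physician-years.
# The counts and physician-year exposures below are hypothetical placeholders.
from math import sqrt
from scipy.stats import norm

def compare_claim_rates(claims_a, years_a, claims_b, years_b):
    rate_a = claims_a / years_a          # claims per physician-year
    rate_b = claims_b / years_b
    # Standard error of the rate difference under the Poisson assumption.
    se = sqrt(claims_a / years_a**2 + claims_b / years_b**2)
    z = (rate_a - rate_b) / se
    p = 2 * norm.sf(abs(z))              # two-sided P value
    return 100 * rate_a, 100 * rate_b, z, p

# Hypothetical example: 100 claims over 5,140 physician-years vs
# 260 claims over 20,000 physician-years.
print(compare_claim_rates(100, 5140, 260, 20000))
```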
Comparisons of the percentages of cases closing with an indemnity payment, as well as the percentages of cases in different allegation type and clinical severity categories, were made using the Fisher exact test. Indemnity payment amounts were inflation-adjusted to 2018 dollars using the Consumer Price Index. Comparisons of indemnity payment amounts between physician specialties were carried out using the Wilcoxon rank sum test because the distribution of payment amounts appeared nonnormal; this was confirmed with the Shapiro-Wilk test. A multivariable logistic regression model was developed to predict the binary outcome of whether a hospitalist case would close with an indemnity payment (compared with no payment), based on the 1,216 hospitalist claims. The predictors used in this regression model were chosen a priori based on hypotheses about which factors drive the likelihood that a case closes with payment. Both unadjusted and adjusted odds ratios are presented; adjusted odds ratios are adjusted for all other predictors in the model. All reported P values are two-sided. The statistical analysis was carried out using JMP Pro version 15 (SAS Institute Inc) and Minitab version 19 (Minitab LLC).
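The univariate tests named above correspond directly to standard library routines. The sketch below shows SciPy equivalents (fisher_exact, mannwhitneyu for the Wilcoxon rank sum test, and shapiro) on hypothetical data; it illustrates the methods only and is not the JMP/Minitab analysis actually performed.

```python
# Hedged sketch of the univariate comparisons using SciPy equivalents of the
# tests named in the text; all numbers here are hypothetical placeholders.
from scipy.stats import fisher_exact, mannwhitneyu, shapiro

# Fisher exact test on a 2x2 table: rows = specialty, columns = (paid, not paid).
table = [[364, 852],    # e.g., hospitalist claims
         [500, 980]]    # e.g., a comparison specialty
odds_ratio, p_fisher = fisher_exact(table)

# Wilcoxon rank sum (Mann-Whitney U) test on inflation-adjusted indemnity
# payments, chosen because the payment distribution is nonnormal.
payments_a = [231454, 100000, 503015, 150000, 750000]
payments_b = [185000, 90000, 260000, 120000, 400000]
stat_w, p_wilcoxon = mannwhitneyu(payments_a, payments_b, alternative="two-sided")

# Shapiro-Wilk test used to check normality of the payment amounts.
stat_s, p_shapiro = shapiro(payments_a + payments_b)

print(p_fisher, p_wilcoxon, p_shapiro)
```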
RESULTS
We identified 1,216 hospital medicine malpractice claims from our database. Claims rates were calculated from the subset of our data for which physician-years were available—including 5,140 physician-years encompassing 100 claims, representing 8.2% of all hospitalist claims studied. An additional 18,644 malpractice claims from five other specialties—nonhospitalist general internal medicine, internal medicine subspecialists, emergency medicine, neurosurgery, and psychiatry—were analyzed to provide context for the hospitalist claims.
The malpractice claims rate for hospitalists was significantly higher than the rate for internal medicine subspecialists (1.95 vs 1.30 claims per 100 physician-years; P < .001), though it was not significantly different from the rate for nonhospitalist general internal medicine physicians (1.95 vs 1.92 claims per 100 physician-years; P = .93) (Table 1). Compared with emergency medicine physicians, with whom hospitalists are sometimes compared because both specialties are defined by their site of practice and the absence of longitudinal patient relationships, hospitalists had a significantly lower claims rate (1.95 vs 4.07 claims per 100 physician-years; P < .001).
An assessment of the temporal trends in the claims rates, based on a comparison between the two halves of the study period (2014-2018 vs 2009-2013), showed that the claims rate for hospitalists was increasing, but at a rate that did not reach statistical significance (Table 1). In contrast, the claims rates for the five other specialties assessed decreased over time, and the decreases were significant for four of these five other specialties (internal medicine subspecialties, emergency medicine, neurosurgery, and psychiatry).
Multiple claims against a single physician were uncommon in our hospitalist malpractice claims data. Among the 100 claims that were used to calculate the claims rates, one physician was named in 2 claims, and all the other physicians were named in only a single claim. Among all of the 1,216 hospitalist malpractice claims we analyzed, there were eight physicians who were named in more than 1 claim, seven of whom were named in 2 claims, and one of whom was named in 3 claims.
The median indemnity payment for hospitalist claims was $231,454 (interquartile range [IQR], $100,000-$503,015), similar to the median indemnity payment for neurosurgery ($233,723; IQR, $85,292-$697,872), though significantly greater than the median indemnity payments for the other four specialties studied (Table 2). Among the hospitalist claims, 29.9% resulted in an indemnity payment, not significantly different from the rate for nonhospitalist general internal medicine, internal medicine subspecialties, or neurosurgery, but significantly lower than the rate for emergency medicine (33.8%; P = .011).
We performed a multivariable logistic regression analysis to assess the effect of different factors on the likelihood of a hospitalist case closing with an indemnity payment, compared with no payment (Table 3). In the multivariable model, the presence of an error in clinical judgment had an adjusted odds ratio (AOR) of 5.01 (95% CI, 3.37-7.45; P < .001) for a claim closing with payment, the largest effect found. The presence of problems with communication (AOR, 1.89; 95% CI, 1.42-2.51; P < .001), the clinical environment (eg, weekend/holiday or clinical busyness; AOR, 1.70; 95% CI, 1.20-2.40; P = .0026), and documentation (AOR, 1.65; 95% CI, 1.18-2.31; P = .0038) were also positive predictors of claims closing with payment. Greater patient age (per decade) was a negative predictor of the likelihood of a claim closing with payment (AOR, 0.92; 95% CI, 0.86-0.998), though it was of borderline statistical significance (P = .044).
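To show how adjusted odds ratios of this kind are typically derived, the sketch below fits a multivariable logistic regression and exponentiates the coefficients and their confidence limits. The variable names and simulated data are hypothetical stand-ins for the claim-level predictors described above, not the actual dataset or model.

```python
# Hedged sketch: fitting a logistic model for "closed with payment" and
# exponentiating coefficients to obtain adjusted odds ratios with 95% CIs.
# The DataFrame columns are hypothetical stand-ins for the predictors
# described in the text (contributing factors, patient age in decades).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1216
df = pd.DataFrame({
    "paid": rng.integers(0, 2, n),            # 1 = closed with payment
    "judgment_error": rng.integers(0, 2, n),  # error in clinical judgment
    "communication": rng.integers(0, 2, n),   # communication problem
    "documentation": rng.integers(0, 2, n),   # documentation problem
    "age_decades": rng.uniform(2, 9, n),      # patient age in decades
})

X = sm.add_constant(df[["judgment_error", "communication",
                        "documentation", "age_decades"]])
fit = sm.Logit(df["paid"], X).fit(disp=0)

aor = np.exp(fit.params)        # adjusted odds ratios
ci = np.exp(fit.conf_int())     # 95% confidence intervals
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```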
We also assessed multiple clinical attributes of hospitalist malpractice claims, including the major allegation type and injury severity (Appendix Table). Among the 1,216 hospitalist malpractice claims studied, the most common allegation types were for errors related to medical treatment (n = 482; 39.6%), diagnosis (n = 446; 36.7%), and medications (n = 157; 12.9%). Among the hospitalist claims, 888 (73.0%) involved high-severity injury, and 674 (55.4%) involved the death of the patient. The percentages of cases involving high-severity injury and death were significantly greater for hospitalists, compared with that of the other specialties studied (P < .001 for all pairwise comparisons). Of the six specialties studied, hospital medicine was the only one in which the percentage of cases involving death exceeded 50%.
Hospital medicine is typically team-based, and we evaluated which other services were named in claims with hospital medicine as the primary responsible service. The clinician groups most commonly named in hospitalist claims were nursing (n = 269; 22.1%), followed by emergency medicine (n = 91; 7.5%), general surgery (n = 51; 4.2%), cardiology (n = 49; 4.0%), and orthopedic surgery (n = 46; 3.8%) (Appendix Figure). During the first 2 years of the study period, no physician assistants (PAs) or nurse practitioners (NPs) were named in hospitalist claims. Over the study period, the proportion of hospitalist cases also naming PAs and NPs increased steadily, reaching 6.9% and 6.2% of claims, respectively, in 2018 (Figure) (P < .001 for NPs and P = .037 for PAs based on a comparison between the two halves of the study period).
DISCUSSION
We found that the average annual claims rate for hospitalists was similar to that for nonhospitalist general internists (1.95 vs 1.92 claims per 100 physician-years) but significantly greater than that for internal medicine subspecialists (1.95 vs 1.30 claims per 100 physician-years). Hospitalist claims rates showed a notable temporal trend: a nonsignificant increase over the study period (2009-2018). This contrasts with the five other specialties studied, all of which had decreasing claims rates, four of them significantly. An analysis of a different national malpractice claims database, the National Practitioner Data Bank (NPDB), found that the rate of paid malpractice claims overall decreased 55.7% during the period 1992-2014, again contrasting with the trend we found for hospitalist claims rates.17
We posit several explanations for why the malpractice claims rate trend for hospitalists has diverged from that of other specialties. There has been a large expansion in the number of hospitalists in the United States.18 With this increasing demand, many young physicians have entered the hospital medicine field. In a survey of general internal medicine physicians conducted by the Society of General Internal Medicine, 73% of hospitalists were aged 25 to 44 years, significantly greater than the 45% in this age range among nonhospitalist general internal medicine physicians.19 Hospitalists in their first year of practice have higher mortality rates than more experienced hospitalists.20 Therefore, the relative inexperience of hospitalists, driven by this high demand, could be putting them at increased risk of medical errors and resulting malpractice claims. The higher mortality rate among hospitalists in their first year of practice could be due to a lack of familiarity with the systems of care, such as managing test results and obtaining appropriate consults.20 This possibility suggests that enhanced training and mentorship could be valuable as a strategy to both improve the quality of care and reduce medicolegal risk. The increasing demand for hospitalists could also be affecting the qualification level of physicians entering the field.
Our analysis also showed that the severity of injury in hospitalist claims was greater than that for the other specialties studied. In addition, the percentage of claims involving death was greater for hospitalists than for the other specialties. The increased acuity of inpatients compared with outpatients, and the trend, at least for some conditions, of increasing inpatient acuity over time,21,22 could account for the high injury severity seen among hospitalist claims. Given the positive correlation between injury severity and the size of indemnity payments made on malpractice claims,12 the high injury severity seen in hospitalist claims was very likely a driver of the high indemnity payments observed among the hospitalist claims.
The relationship between injury severity and financial outcomes is supported by the results of our multivariable regression model (Table 3). Compared with medium-severity injury claims, both death and high-severity injury cases were significantly more likely to close with an indemnity payment (compared with no payment), with AORs of 1.79 (95% CI, 1.21-2.65) and 2.44 (95% CI, 1.54-3.87), respectively.
The most striking finding in our regression model was the magnitude of the effect of an error in clinical judgment. Cases coded with this contributing factor had five times the AOR of closing with payment (compared with no payment) (AOR, 5.01; 95% CI, 3.37-7.45). A clinical judgment call may be difficult to defend when it is ultimately associated with a bad patient outcome. The importance of clinical judgment in our analysis suggests a risk management strategy: clearly and contemporaneously documenting the rationale behind one's clinical decision-making. This may help make a claim more defensible in the event of an adverse outcome by demonstrating that the clinician was acting reasonably based on the information available at the time. The importance of specifying a rationale for a clinical decision may be especially important in the era of electronic health records (EHRs). EHRs are not structured as chronologically linear charts, which can make it challenging during a trial to retrospectively show what information was available to the physician at the time the clinical decision was made. The importance of clinical judgment also affirms the importance of effective clinical decision support as a patient safety tool.23
More broadly, it is notable that several contributing factors, including errors in clinical judgment (as discussed previously), problems with communication, and issues with the clinical environment, were significantly associated with malpractice cases closing with payment. This demonstrates that systematically examining malpractice claims to determine the underlying contributing factors can generate predictive analytics, as well as suggest risk management and patient safety strategies.
Interdisciplinary collaboration, as a component of systems-based practice, is a core principle of hospital medicine,13 and so we analyzed the involvement of other clinicians in hospitalist claims. Of the five specialties most frequently named in claims with hospitalists, two were surgical services: general surgery (n = 51; 4.2%) and orthopedic surgery (n = 46; 3.8%). With hospitalists being asked to play an increasing role in the care of surgical patients, they may be providing care to patient populations with whom they have less experience, which could put them at risk of adverse outcomes, leading to malpractice claims.24,25 Hospitalists need to be attuned to the liability risks related to the care of patients requiring surgical management and ensure areas of responsibility are clearly delineated between the hospital medicine and surgical services.26 We also found that hospitalist claims increasingly involve PAs and NPs, likely reflecting their increasing role in providing care on hospitalist services.27,28
A prior analysis of claims rates for hospitalists that covered injury dates from 1997 to 2011 found that hospitalists had a relatively low claims rate, significantly lower than that for other internal medicine physicians.8 In addition to covering an earlier time period, that analysis based its claims rates on data from academic medical centers covered by a single insurer, and physicians at academic medical centers generally have lower claims rates, likely due, at least in part, to their spending a smaller proportion of their time on patient care, compared with nonacademic physicians.29 Another analysis of hospitalist closed claims, which shared some cases with the cohort we analyzed, was performed by The Doctors Company, a commercial liability insurer.7 That analysis astutely emphasized the importance of breakdowns in diagnostic processes as a factor underlying hospitalist claims.
Our study has several limitations. First, although our database of malpractice claims includes approximately 31% of all the claims in the country and includes claims from every state, it may not be nationally representative. Another limitation relates to calculating the claims rates for physicians. Detailed information on the number of years of clinical activity, which is necessary to calculate claims rates, was available for only a subset of our data (8.2% of the hospitalist cases and 11.6% of all cases), so claims rates are based on this subset of our data (among which academic centers are overrepresented). Therefore, the claims rates should be interpreted with caution, especially regarding their application to the community hospital setting. The institutions included in the subset of our data used for determining claims rates were stable over time, so the use of a subset of our data for calculating claims rates reduces the generalizability of our claims rates but should not be a source of bias.
Potentially offsetting strengths of our claims database and study include the availability of unpaid claims (which outnumber paid claims roughly 2:1)11,12; the presence of information on contributing factors and other case characteristics obtained through structured manual review of the cases; and the availability of the specialties of the clinicians involved. These features distinguish the database we used from the NPDB, another national database of malpractice claims, which includes neither unpaid claims nor information on contributing factors or physician specialty.
CONCLUSION
First described in 1996, the hospitalist field is the fastest growing specialty in modern medical history.18,30 Therefore, an understanding of the malpractice risk of hospitalists is important and can shed light on the patient safety environment in hospitals. Our analysis showed that hospitalist malpractice claims rates remain roughly stable, in contrast to most other specialties, which have seen a fall in malpractice claims rates.17 In addition, unlike a previous analysis,8 we found that claims rates for hospitalists were essentially equal to those of other general internal medicine physicians (not lower, as had been previously reported), and higher than those of the internal medicine subspecialties. Hospitalist claims also have relatively high severity of injury. Potential factors driving these trends include the increasing demand for hospitalists, which results in a higher proportion of less-experienced physicians entering the field, and the expanding clinical scope of hospitalists, which may lead to their managing patients with conditions with which they may be less comfortable. Overall, our analysis suggests that the malpractice environment for hospitalists is becoming less favorable, and therefore, hospitalists should explore opportunities for mitigating liability risk and enhancing patient safety.
1. Studdert DM, Mello MM, Sage WM, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA. 2005;293(21):2609-2617. https://doi.org/10.1001/jama.293.21.2609
2. Carrier ER, Reschovsky JD, Mello MM, Mayrell RC, Katz D. Physicians’ fears of malpractice lawsuits are not assuaged by tort reforms. Health Aff (Millwood). 2010;29(9):1585-1592. https://doi.org/10.1377/hlthaff.2010.0135
3. Kachalia A, Berg A, Fagerlin A, et al. Overuse of testing in preoperative evaluation and syncope: a survey of hospitalists. Ann Intern Med. 2015;162(2):100-108. https://doi.org/10.7326/m14-0694
4. Mello MM, Chandra A, Gawande AA, Studdert DM. National costs of the medical liability system. Health Aff (Millwood). 2010;29(9):1569-1577. https://doi.org/10.1377/hlthaff.2009.0807
5. Rothberg MB, Class J, Bishop TF, Friderici J, Kleppel R, Lindenauer PK. The cost of defensive medicine on 3 hospital medicine services. JAMA Intern Med. 2014;174(11):1867-1868. https://doi.org/10.1001/jamainternmed.2014.4649
6. Saint S, Vaughn VM, Chopra V, Fowler KE, Kachalia A. Perception of resources spent on defensive medicine and history of being sued among hospitalists: results from a national survey. J Hosp Med. 2018;13(1):26-29. https://doi.org/10.12788/jhm.2800
7. Ranum D, Troxel DB, Diamond R. Hospitalist Closed Claims Study: An Expert Analysis of Medical Malpractice Allegations. The Doctors Company. 2016. https://www.thedoctors.com/siteassets/pdfs/risk-management/closed-claims-studies/10392_ccs-hospitalist_academic_single-page_version_frr.pdf
8. Schaffer AC, Puopolo AL, Raman S, Kachalia A. Liability impact of the hospitalist model of care. J Hosp Med. 2014;9(12):750-755. https://doi.org/10.1002/jhm.2244
9. Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence. results of the Harvard Medical Practice Study III. N Engl J Med. 1991;325(4):245-251. https://doi.org/10.1056/nejm199107253250405
10. Studdert DM, Thomas EJ, Burstin HR, Zbar BI, Orav EJ, Brennan TA. Negligent care and malpractice claiming behavior in Utah and Colorado. Med Care. 2000;38(3):250-260. https://doi.org/10.1097/00005650-200003000-00002
11. Studdert DM, Mello MM, Gawande AA, et al. Claims, errors, and compensation payments in medical malpractice litigation. N Engl J Med. 2006;354(19):2024-2033. https://doi.org/10.1056/nejmsa054479
12. Medical Malpractice in America: 2018 CRICO Strategies National CBS Report. CRICO Strategies; 2018.
13. Budnitz T, McKean SC. The Core Competencies in Hospital Medicine. In: McKean SC, Ross JJ, Dressler DD, Scheurer DB, eds. Principles and Practice of Hospital Medicine, 2nd ed. McGraw-Hill Education; 2017.
14. O’Leary KJ, Haviley C, Slade ME, Shah HM, Lee J, Williams MV. Improving teamwork: impact of structured interdisciplinary rounds on a hospitalist unit. J Hosp Med. 2011;6(2):88-93. https://doi.org/10.1002/jhm.714
15. National Practitioner Data Bank: Public Use Data File. Division of Practitioner Data Banks, Bureau of Health Professions, Health Resources & Services Administration, U.S. Department of Health & Human Services; June 30, 2019. Updated August 2020.
16. Sowka MP, ed. NAIC Malpractice Claims, Final Compilation. National Association of Insurance Commissioners; 1980.
17. Schaffer AC, Jena AB, Seabury SA, Singh H, Chalasani V, Kachalia A. Rates and characteristics of paid malpractice claims among US physicians by specialty, 1992-2014. JAMA Intern Med. 2017;177(5):710-718. https://doi.org/10.1001/jamainternmed.2017.0311
18. Wachter RM, Goldman L. Zero to 50,000 - the 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/nejmp1607958
19. Miller CS, Fogerty RL, Gann J, Bruti CP, Klein R; The Society of General Internal Medicine Membership Committee. The growth of hospitalists and the future of the Society of General Internal Medicine: results from the 2014 membership survey. J Gen Intern Med. 2017;32(11):1179-1185. https://doi.org/10.1007/s11606-017-4126-7
20. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo YF, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized Medicare population. JAMA Intern Med. 2018;178(2):196-203. https://doi.org/10.1001/jamainternmed.2017.7049
21. Akintoye E, Briasoulis A, Egbe A, et al. National trends in admission and in-hospital mortality of patients with heart failure in the United States (2001-2014). J Am Heart Assoc. 2017;6(12):e006955. https://doi.org/10.1161/jaha.117.006955
22. Clark AV, LoPresti CM, Smith TI. Trends in inpatient admission comorbidity and electronic health data: implications for resident workload intensity. J Hosp Med. 2018;13(8):570-572. https://doi.org/10.12788/jhm.2954
23. Gilmartin HM, Liu VX, Burke RE. Annals for hospitalists inpatient notes - The role of hospitalists in the creation of learning healthcare systems. Ann Intern Med. 2020;172(2):HO2-HO3. https://doi.org/10.7326/m19-3873
24. Siegal EM. Just because you can, doesn’t mean that you should: a call for the rational application of hospitalist comanagement. J Hosp Med. 2008;3(5):398-402. https://doi.org/10.1002/jhm.361
25. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254. https://doi.org/10.1016/s0002-9343(01)00837-3
26. Thompson RE, Pfeifer K, Grant PJ, et al. Hospital medicine and perioperative care: a framework for high-quality, high-value collaborative care. J Hosp Med. 2017;12(4):277-282. https://doi.org/10.12788/jhm.2717
27. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2012;7(3):190-194. https://doi.org/10.1002/jhm.1001
28. Kartha A, Restuccia JD, Burgess JF Jr, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. https://doi.org/10.1002/jhm.2231
29. Schaffer AC, Babayan A, Yu-Moe CW, Sato L, Einbinder JS. The effect of clinical volume on annual and per-patient encounter medical malpractice claims risk. J Patient Saf. Published online March 23, 2020. https://doi.org/10.1097/pts.0000000000000706
30. Wachter RM, Goldman L. The emerging role of “hospitalists” in the American health care system. N Engl J Med. 1996;335(7):514-517. https://doi.org/10.1056/nejm199608153350713
The prospect of facing a medical malpractice claim is a source of apprehension for physicians and affects their behavior, including through the practice of defensive medicine.1-3 Overall, annual defensive medicine costs have been estimated at $45.6 billion,4 and surveys of hospitalists indicate that 13.0% to 37.5% of hospitalist healthcare costs involve defensive medicine.5,6 Despite the impact of malpractice concerns on hospitalist practice and the unprecedented growth of hospital medicine, relatively few studies have examined the liability environment surrounding the field.7,8 The specific issue of malpractice claims rates faced by hospitalists has received even less attention in the medical literature.8
A better understanding of the contributing factors and other attributes of malpractice claims can help guide patient safety initiatives and inform hospitalists’ level of concern regarding liability. Although most medical errors do not result in a malpractice claim,9,10 the majority of malpractice claims in which there is an indemnity payment involve medical injury due to clinician error.11 Even malpractice claims that do not result in an indemnity payment represent opportunities to identify patient safety and risk management vulnerabilities.12
We used a national malpractice claims database to analyze the characteristics of claims made against hospitalists, including claims rates. Given the importance of interdisciplinary collaboration to hospital medicine,13,14 we also analyzed the other types of providers named in hospitalist claims. To provide context for understanding hospitalist liability data, we present data on other specialties. We also describe a model to predict whether hospitalist malpractice claims will close with an indemnity payment.
METHODS
Data Sources and Elements
This analysis used a repository of malpractice claims maintained by CRICO, the captive malpractice insurer of the Harvard-affiliated medical institutions. This database, the Comparative Benchmarking System, contains claims from every state, representing approximately 31% of all malpractice claims in the country.
Injury severity was based on a widely used scale developed for malpractice claims by the National Association of Insurance Commissioners.16 Low injury severity included emotional injury and temporary insignificant injury. Medium injury severity included temporary minor, temporary major, and permanent minor injury. High injury severity included permanent significant injury through death. Because this study used a database assembled for operational and patient safety purposes and was not human subjects research, institutional review board approval was not needed.
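To make the grouping concrete, the sketch below collapses NAIC injury-severity labels into the low, medium, and high groups defined above. It is a minimal illustration, not the authors' code; the two intermediate high-severity labels (permanent major and permanent grave) are taken from the standard NAIC 9-point scale rather than from the text, which names only permanent significant injury and death.

```python
# Minimal sketch: collapsing NAIC injury-severity labels into the study's
# low/medium/high groups. "permanent major" and "permanent grave" are assumed
# intermediate NAIC categories; the text names only the endpoints of the
# high-severity range (permanent significant injury through death).
SEVERITY_GROUP = {
    "emotional injury": "low",
    "temporary insignificant": "low",
    "temporary minor": "medium",
    "temporary major": "medium",
    "permanent minor": "medium",
    "permanent significant": "high",
    "permanent major": "high",
    "permanent grave": "high",
    "death": "high",
}

def severity_group(naic_label: str) -> str:
    """Map an NAIC injury-severity label to the low/medium/high grouping."""
    return SEVERITY_GROUP[naic_label.lower()]
```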
Study Cohort
Malpractice claims included formal lawsuits or written requests for compensation for negligent medical care. Hospitalist claims were defined as those in which hospital medicine was the primary responsible service, and the study period spanned 2009 through 2018.
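As a rough illustration of how such a cohort might be pulled from a claims table, the sketch below filters on a primary responsible service of hospital medicine within the 2009-2018 study period. The column names are hypothetical, and the authors' actual cohort-selection logic is not described at this level of detail.

```python
import pandas as pd

def hospitalist_cohort(claims: pd.DataFrame) -> pd.DataFrame:
    """Select claims with hospital medicine as the primary responsible service, 2009-2018.

    Column names (primary_responsible_service, claim_year) are hypothetical stand-ins.
    """
    mask = (
        claims["primary_responsible_service"].eq("hospital medicine")
        & claims["claim_year"].between(2009, 2018)
    )
    return claims.loc[mask]
```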
Statistical Analysis
Malpractice claims rates were treated as Poisson rates and compared using a Z-test. Malpractice claims rates are expressed as claims per 100 physician-years. Each physician-year represents 1 year of coverage of one physician by the medical malpractice carrier whose data were used; during that year of coverage, the physician could have been subject to a malpractice claim that would have been captured in our data. Claims rates are based on the subset of the malpractice claims in the study for which the number of physician-years of coverage is available, representing 8.2% of hospitalist claims and 11.6% of all claims.
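As a worked illustration of this calculation, the sketch below computes a claims rate per 100 physician-years and compares two Poisson rates with an unpooled Z-test. The paper does not specify which Z-test variant was used, so this is one common formulation rather than the authors' implementation; the comparator counts in the usage example are invented.

```python
import math
from scipy.stats import norm

def claims_rate_per_100(claims: int, physician_years: float) -> float:
    """Claims per 100 physician-years."""
    return 100.0 * claims / physician_years

def poisson_rate_z_test(c1: int, t1: float, c2: int, t2: float):
    """Unpooled Z-test comparing two Poisson rates c1/t1 and c2/t2 (two-sided P value)."""
    r1, r2 = c1 / t1, c2 / t2
    se = math.sqrt(c1 / t1 ** 2 + c2 / t2 ** 2)  # Var(count) ~ count for a Poisson process
    z = (r1 - r2) / se
    return z, 2 * norm.sf(abs(z))

# Hospitalist figures reported in the Results: 100 claims over 5,140 physician-years.
print(round(claims_rate_per_100(100, 5140), 2))  # 1.95 claims per 100 physician-years

# Comparator counts below are invented, purely to show the call.
z, p = poisson_rate_z_test(100, 5140, 260, 20000)
```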
Comparisons of the percentages of cases closing with an indemnity payment, as well as the percentages of cases in different allegation type and clinical severity categories, were made using the Fisher exact test. Indemnity payment amounts were inflation-adjusted to 2018 dollars using the Consumer Price Index. Comparisons of indemnity payment amounts between physician specialties were carried out using the Wilcoxon rank sum test given that the distribution of the payment amounts appeared nonnormal; this was confirmed with the Shapiro-Wilk test. A multivariable logistic regression model was developed to predict the binary outcome of whether a hospitalist case would close with an indemnity payment (compared with no payment), based on the 1,216 hospitalist claims. The predictors used in this regression model were chosen a priori based on hypotheses about what factors drive the likelihood that a case closes with payment. Both the unadjusted and adjusted odds ratios for the predictors are presented. The adjusted model is adjusted for all the other predictors contained in the model. All reported P values are two-sided. The statistical analysis was carried out using JMP Pro version 15 (SAS Institute Inc) and Minitab version 19 (Minitab LLC).
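The sketch below illustrates the form of such a model on simulated data: a logistic regression for whether a claim closes with payment, with exponentiated coefficients giving the odds ratios and an unadjusted model fit alongside for comparison. The analysis in the paper was performed in JMP and Minitab; this Python/statsmodels version, with invented predictor names and simulated outcomes, is only meant to show the structure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated claims data; predictor names and coefficients are illustrative only.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "judgment_error": rng.integers(0, 2, n),
    "communication": rng.integers(0, 2, n),
    "documentation": rng.integers(0, 2, n),
    "age_decades": rng.uniform(2, 9, n),
})
logit_p = (-1.0 + 1.6 * df["judgment_error"] + 0.6 * df["communication"]
           + 0.5 * df["documentation"] - 0.08 * df["age_decades"])
df["paid"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Adjusted model: each odds ratio is adjusted for the other predictors.
adjusted = smf.logit(
    "paid ~ judgment_error + communication + documentation + age_decades", data=df
).fit(disp=False)

# Unadjusted odds ratio for a single predictor, for comparison.
unadjusted = smf.logit("paid ~ judgment_error", data=df).fit(disp=False)

print(np.exp(adjusted.params).round(2))      # adjusted odds ratios
print(np.exp(adjusted.conf_int()).round(2))  # 95% CIs on the odds ratio scale
print(np.exp(unadjusted.params).round(2))    # unadjusted odds ratio
```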
RESULTS
We identified 1,216 hospital medicine malpractice claims from our database. Claims rates were calculated from the subset of our data for which physician-years were available—including 5,140 physician-years encompassing 100 claims, representing 8.2% of all hospitalist claims studied. An additional 18,644 malpractice claims from five other specialties—nonhospitalist general internal medicine, internal medicine subspecialists, emergency medicine, neurosurgery, and psychiatry—were analyzed to provide context for the hospitalist claims.
The malpractice claims rate for hospitalists was significantly higher than the rate for internal medicine subspecialists (1.95 vs 1.30 claims per 100 physician-years; P < .001), though it was not significantly different from the rate for nonhospitalist general internal medicine physicians (1.95 vs 1.92 claims per 100 physician-years; P = .93) (Table 1). Compared with emergency medicine physicians, with whom hospitalists are sometimes compared because both specialties are defined by their site of practice and the absence of longitudinal patient relationships, hospitalists had a significantly lower claims rate (1.95 vs 4.07 claims per 100 physician-years; P < .001).
An assessment of temporal trends in the claims rates, based on a comparison between the two halves of the study period (2014-2018 vs 2009-2013), showed that the claims rate for hospitalists increased, although the increase did not reach statistical significance (Table 1). In contrast, the claims rates for the five other specialties assessed decreased over time, and the decreases were significant for four of the five (internal medicine subspecialties, emergency medicine, neurosurgery, and psychiatry).
Multiple claims against a single physician were uncommon in our hospitalist malpractice claims data. Among the 100 claims that were used to calculate the claims rates, one physician was named in 2 claims, and all the other physicians were named in only a single claim. Among all of the 1,216 hospitalist malpractice claims we analyzed, there were eight physicians who were named in more than 1 claim, seven of whom were named in 2 claims, and one of whom was named in 3 claims.
The median indemnity payment for hospitalist claims was $231,454 (interquartile range [IQR], $100,000-$503,015), similar to the median indemnity payment for neurosurgery ($233,723; IQR, $85,292-$697,872), though significantly greater than the median indemnity payments for the other four specialties studied (Table 2). Among the hospitalist claims, 29.9% resulted in an indemnity payment, which was not significantly different from the rates for nonhospitalist general internal medicine, internal medicine subspecialties, or neurosurgery, but was significantly lower than the rate for emergency medicine (33.8%; P = .011).
We performed a multivariable logistic regression analysis to assess the effect of different factors on the likelihood of a hospitalist case closing with an indemnity payment, compared with no payment (Table 3). In the multivariable model, the presence of an error in clinical judgment had an adjusted odds ratio (AOR) of 5.01 (95% CI, 3.37-7.45; P < .001) for a claim closing with payment, the largest effect found. The presence of problems with communication (AOR, 1.89; 95% CI, 1.42-2.51; P < .001), the clinical environment (eg, weekend/holiday or clinical busyness; AOR, 1.70; 95% CI, 1.20-2.40; P = .0026), and documentation (AOR, 1.65; 95% CI, 1.18-2.31; P = .0038) were also positive predictors of claims closing with payment. Greater patient age (per decade) was a negative predictor of the likelihood of a claim closing with payment (AOR, 0.92; 95% CI, 0.86-0.998), though it was of borderline statistical significance (P = .044).
We also assessed multiple clinical attributes of hospitalist malpractice claims, including the major allegation type and injury severity (Appendix Table). Among the 1,216 hospitalist malpractice claims studied, the most common allegation types were for errors related to medical treatment (n = 482; 39.6%), diagnosis (n = 446; 36.7%), and medications (n = 157; 12.9%). Among the hospitalist claims, 888 (73.0%) involved high-severity injury, and 674 (55.4%) involved the death of the patient. The percentages of cases involving high-severity injury and death were significantly greater for hospitalists than for the other specialties studied (P < .001 for all pairwise comparisons). Of the six specialties studied, hospital medicine was the only one in which the percentage of cases involving death exceeded 50%.
Hospital medicine is typically team-based, and we evaluated which other services were named in claims with hospital medicine as the primary responsible service. The clinician groups most commonly named in hospitalist claims were nursing (n = 269; 22.1%), followed by emergency medicine (n = 91; 7.5%), general surgery (n = 51; 4.2%), cardiology (n = 49; 4.0%), and orthopedic surgery (n = 46; 3.8%) (Appendix Figure). During the first 2 years of the study period, no physician assistants (PAs) or nurse practitioners (NPs) were named in hospitalist claims. Over the study period, the proportion of hospitalist cases also naming PAs and NPs increased steadily, reaching 6.9% and 6.2% of claims, respectively, in 2018 (Figure) (P < .001 for NPs and P = .037 for PAs based on a comparison between the two halves of the study period).
DISCUSSION
We found that the average annual claims rate for hospitalists was similar to that for nonhospitalist general internists (1.95 vs 1.92 claims per 100 physician-years) but significantly greater than that for internal medicine subspecialists (1.95 vs 1.30 claims per 100 physician-years). Hospitalist claims rates also showed a notable temporal trend over the study period (2009-2018): a nonsignificant increase. This contrasts with the five other specialties studied, all of which had decreasing claims rates, with the decrease reaching significance for four. An analysis of a different national malpractice claims database, the National Practitioner Data Bank (NPDB), found that the rate of paid malpractice claims overall decreased 55.7% during the period 1992-2014, again contrasting with the trend we found for hospitalist claims rates.17
We posit several explanations for why the malpractice claims rate trend for hospitalists has diverged from that of other specialties. There has been a large expansion in the number of hospitalists in the United States.18 With this increasing demand, many young physicians have entered the hospital medicine field. In a survey of general internal medicine physicians conducted by the Society of General Internal Medicine, 73% of hospitalists were aged 25 to 44 years, significantly greater than the 45% in this age range among nonhospitalist general internal medicine physicians.19 Hospitalists in their first year of practice have higher mortality rates than more experienced hospitalists.20 Therefore, the relative inexperience of hospitalists, driven by this high demand, could be putting them at increased risk of medical errors and resulting malpractice claims. The higher mortality rate among hospitalists in their first year of practice could be due to a lack of familiarity with the systems of care, such as managing test results and obtaining appropriate consults.20 This possibility suggests that enhanced training and mentorship could be valuable as a strategy to both improve the quality of care and reduce medicolegal risk. The increasing demand for hospitalists could also be affecting the qualification level of physicians entering the field.
Our analysis also showed that the severity of injury in hospitalist claims was greater than that for the other specialties studied. In addition, the percentage of claims involving death was greater for hospitalists than that for the other specialties. The increased acuity of inpatients, compared with that of outpatients—and the trend, at least for some conditions, of increased inpatient acuity over time21,22— could account for the high injury severity seen among hospitalist claims. Given the positive correlation between injury severity and the size of indemnity payments made on malpractice claims,12 the high injury severity seen in hospitalist claims was very likely a driver of the high indemnity payments observed among the hospitalist claims.
The relationship between injury severity and financial outcomes is supported by the results of our multivariable regression model (Table 3). Compared with medium-severity injury claims, both death and high-severity injury cases were significantly more likely to close with an indemnity payment (compared with no payment), with AORs of 1.79 (95% CI, 1.21-2.65) and 2.44 (95% CI, 1.54-3.87), respectively.
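The comparison above implies that medium-severity injury served as the reference category for the injury-severity variable. The small sketch below (again simulated data and statsmodels, rather than the JMP and Minitab used in the paper) shows one way such a reference level can be encoded, so that the death and high-severity odds ratios are expressed relative to medium-severity claims.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data; severity labels follow the study's grouping, coefficients are invented.
rng = np.random.default_rng(1)
n = 600
severity = pd.Series(rng.choice(["low", "medium", "high", "death"], size=n), name="severity")
base_logit = severity.map({"low": -1.5, "medium": -1.0, "high": -0.2, "death": -0.4})
df = pd.DataFrame({"severity": severity,
                   "paid": rng.binomial(1, 1 / (1 + np.exp(-base_logit)))})

# Treatment coding with "medium" as the reference level: the death and high
# coefficients are then log-odds ratios relative to medium-severity claims.
model = smf.logit("paid ~ C(severity, Treatment(reference='medium'))", data=df).fit(disp=False)
print(np.exp(model.params).round(2))
```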
The most striking finding in our regression model was the magnitude of the effect of an error in clinical judgment. Cases coded with this contributing factor had roughly five times the adjusted odds of closing with payment (compared with no payment) (AOR, 5.01; 95% CI, 3.37-7.45). A clinical judgment call may be difficult to defend when it is ultimately associated with a bad patient outcome. The importance of clinical judgment in our analysis suggests a risk management strategy: clearly and contemporaneously documenting the rationale behind one's clinical decision-making. This may help make a claim more defensible in the event of an adverse outcome by demonstrating that the clinician was acting reasonably based on the information available at the time. Specifying a rationale for a clinical decision may be especially important in the era of electronic health records (EHRs). EHRs are not structured as chronologically linear charts, which can make it challenging during a trial to retrospectively show what information was available to the physician at the time the clinical decision was made. The central role of clinical judgment also underscores the value of effective clinical decision support as a patient safety tool.23
More broadly, it is notable that several contributing factors, including errors in clinical judgment (as discussed previously), problems with communication, and issues with the clinical environment, were significantly associated with malpractice cases closing with payment. This demonstrates that systematically examining malpractice claims to determine the underlying contributing factors can generate predictive analytics, as well as suggest risk management and patient safety strategies.
Interdisciplinary collaboration, as a component of systems-based practice, is a core principle of hospital medicine,13 so we analyzed the involvement of other clinicians in hospitalist claims. Of the five specialties most frequently named in claims with hospitalists, two were surgical services: general surgery (n = 51; 4.2%) and orthopedic surgery (n = 46; 3.8%). As hospitalists are asked to play an increasing role in the care of surgical patients, they may be caring for patient populations with whom they have less experience, which could put them at risk of adverse outcomes and resulting malpractice claims.24,25 Hospitalists need to be attuned to the liability risks related to the care of patients requiring surgical management and should ensure that areas of responsibility are clearly delineated between the hospital medicine and surgical services.26 We also found that hospitalist claims increasingly involve PAs and NPs, likely reflecting their growing role in providing care on hospitalist services.27,28
A prior analysis of claims rates for hospitalists, covering injury dates from 1997 to 2011, found that hospitalists had a relatively low claims rate, significantly lower than that for other internal medicine physicians.8 In addition to covering an earlier time period, that analysis based its claims rates on data from academic medical centers covered by a single insurer. Physicians at academic medical centers generally have lower claims rates than nonacademic physicians, likely due, at least in part, to spending a smaller proportion of their time on patient care.29 Another analysis of hospitalist closed claims, which shared some cases with the cohort we analyzed, was performed by The Doctors Company, a commercial liability insurer.7 That analysis emphasized the importance of breakdowns in diagnostic processes as a factor underlying hospitalist claims.
Our study has several limitations. First, although our database of malpractice claims includes approximately 31% of all the claims in the country and includes claims from every state, it may not be nationally representative. Another limitation relates to calculating the claims rates for physicians. Detailed information on the number of years of clinical activity, which is necessary to calculate claims rates, was available for only a subset of our data (8.2% of the hospitalist cases and 11.6% of all cases), so claims rates are based on this subset, in which academic centers are overrepresented. Therefore, the claims rates should be interpreted with caution, especially regarding their application to the community hospital setting. The institutions included in this subset were stable over time, so relying on it reduces the generalizability of our claims rates but should not be a source of bias.
Potentially offsetting strengths of our claims database and study include the availability of unpaid claims (which outnumber paid claims roughly 2:1)11,12; the presence of information on contributing factors and other case characteristics obtained through structured manual review of the cases; and the availability of the specialties of the clinicians involved. These features distinguish the database we used from the NPDB, another national database of malpractice claims, which includes neither unpaid claims nor information on contributing factors or physician specialty.
CONCLUSION
First described in 1996, the hospitalist field is the fastest growing specialty in modern medical history.18,30 Therefore, an understanding of the malpractice risk of hospitalists is important and can shed light on the patient safety environment in hospitals. Our analysis showed that hospitalist malpractice claims rates have remained roughly stable, in contrast to most other specialties, which have seen a fall in malpractice claims rates.17 In addition, unlike a previous analysis,8 we found that claims rates for hospitalists were essentially equal to those of other general internal medicine physicians (not lower, as had been previously reported) and higher than those of the internal medicine subspecialties. Hospitalist claims also carry a relatively high severity of injury. Potential factors driving these trends include the increasing demand for hospitalists, which results in a higher proportion of less-experienced physicians entering the field, and the expanding clinical scope of hospitalists, which may lead them to manage conditions with which they are less comfortable. Overall, our analysis suggests that the malpractice environment for hospitalists is becoming less favorable, and therefore, hospitalists should explore opportunities for mitigating liability risk and enhancing patient safety.
1. Studdert DM, Mello MM, Sage WM, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA. 2005;293(21):2609-2617. https://doi.org/10.1001/jama.293.21.2609
2. Carrier ER, Reschovsky JD, Mello MM, Mayrell RC, Katz D. Physicians’ fears of malpractice lawsuits are not assuaged by tort reforms. Health Aff (Millwood). 2010;29(9):1585-1592. https://doi.org/10.1377/hlthaff.2010.0135
3. Kachalia A, Berg A, Fagerlin A, et al. Overuse of testing in preoperative evaluation and syncope: a survey of hospitalists. Ann Intern Med. 2015;162(2):100-108. https://doi.org/10.7326/m14-0694
4. Mello MM, Chandra A, Gawande AA, Studdert DM. National costs of the medical liability system. Health Aff (Millwood). 2010;29(9):1569-1577. https://doi.org/10.1377/hlthaff.2009.0807
5. Rothberg MB, Class J, Bishop TF, Friderici J, Kleppel R, Lindenauer PK. The cost of defensive medicine on 3 hospital medicine services. JAMA Intern Med. 2014;174(11):1867-1868. https://doi.org/10.1001/jamainternmed.2014.4649
6. Saint S, Vaughn VM, Chopra V, Fowler KE, Kachalia A. Perception of resources spent on defensive medicine and history of being sued among hospitalists: results from a national survey. J Hosp Med. 2018;13(1):26-29. https://doi.org/10.12788/jhm.2800
7. Ranum D, Troxel DB, Diamond R. Hospitalist Closed Claims Study: An Expert Analysis of Medical Malpractice Allegations. The Doctors Company. 2016. https://www.thedoctors.com/siteassets/pdfs/risk-management/closed-claims-studies/10392_ccs-hospitalist_academic_single-page_version_frr.pdf
8. Schaffer AC, Puopolo AL, Raman S, Kachalia A. Liability impact of the hospitalist model of care. J Hosp Med. 2014;9(12):750-755. https://doi.org/10.1002/jhm.2244
9. Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence: results of the Harvard Medical Practice Study III. N Engl J Med. 1991;325(4):245-251. https://doi.org/10.1056/nejm199107253250405
10. Studdert DM, Thomas EJ, Burstin HR, Zbar BI, Orav EJ, Brennan TA. Negligent care and malpractice claiming behavior in Utah and Colorado. Med Care. 2000;38(3):250-260. https://doi.org/10.1097/00005650-200003000-00002
11. Studdert DM, Mello MM, Gawande AA, et al. Claims, errors, and compensation payments in medical malpractice litigation. N Engl J Med. 2006;354(19):2024-2033. https://doi.org/10.1056/nejmsa054479
12. Medical Malpractice in America: 2018 CRICO Strategies National CBS Report. CRICO Strategies; 2018.
13. Budnitz T, McKean SC. The Core Competencies in Hospital Medicine. In: McKean SC, Ross JJ, Dressler DD, Scheurer DB, eds. Principles and Practice of Hospital Medicine, 2nd ed. McGraw-Hill Education; 2017.
14. O’Leary KJ, Haviley C, Slade ME, Shah HM, Lee J, Williams MV. Improving teamwork: impact of structured interdisciplinary rounds on a hospitalist unit. J Hosp Med. 2011;6(2):88-93. https://doi.org/10.1002/jhm.714
15. National Practitioner Data Bank: Public Use Data File. Division of Practitioner Data Banks, Bureau of Health Professions, Health Resources & Services Administration, U.S. Department of Health & Human Services; June 30, 2019. Updated August 2020.
16. Sowka MP, ed. NAIC Malpractice Claims, Final Compilation. National Association of Insurance Commissioners; 1980.
17. Schaffer AC, Jena AB, Seabury SA, Singh H, Chalasani V, Kachalia A. Rates and characteristics of paid malpractice claims among US physicians by specialty, 1992-2014. JAMA Intern Med. 2017;177(5):710-718. https://doi.org/10.1001/jamainternmed.2017.0311
18. Wachter RM, Goldman L. Zero to 50,000 - the 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/nejmp1607958
19. Miller CS, Fogerty RL, Gann J, Bruti CP, Klein R; The Society of General Internal Medicine Membership Committee. The growth of hospitalists and the future of the Society of General Internal Medicine: results from the 2014 membership survey. J Gen Intern Med. 2017;32(11):1179-1185. https://doi.org/10.1007/s11606-017-4126-7
20. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo YF, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized Medicare population. JAMA Intern Med. 2018;178(2):196-203. https://doi.org/10.1001/jamainternmed.2017.7049
21. Akintoye E, Briasoulis A, Egbe A, et al. National trends in admission and in-hospital mortality of patients with heart failure in the United States (2001-2014). J Am Heart Assoc. 2017;6(12):e006955. https://doi.org/10.1161/jaha.117.006955
22. Clark AV, LoPresti CM, Smith TI. Trends in inpatient admission comorbidity and electronic health data: implications for resident workload intensity. J Hosp Med. 2018;13(8):570-572. https://doi.org/10.12788/jhm.2954
23. Gilmartin HM, Liu VX, Burke RE. Annals for hospitalists inpatient notes - The role of hospitalists in the creation of learning healthcare systems. Ann Intern Med. 2020;172(2):HO2-HO3. https://doi.org/10.7326/m19-3873
24. Siegal EM. Just because you can, doesn’t mean that you should: a call for the rational application of hospitalist comanagement. J Hosp Med. 2008;3(5):398-402. https://doi.org/10.1002/jhm.361
25. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254. https://doi.org/10.1016/s0002-9343(01)00837-3
26. Thompson RE, Pfeifer K, Grant PJ, et al. Hospital medicine and perioperative care: a framework for high-quality, high-value collaborative care. J Hosp Med. 2017;12(4):277-282. https://doi.org/10.12788/jhm.2717
27. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2012;7(3):190-194. https://doi.org/10.1002/jhm.1001
28. Kartha A, Restuccia JD, Burgess JF Jr, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. https://doi.org/10.1002/jhm.2231
29. Schaffer AC, Babayan A, Yu-Moe CW, Sato L, Einbinder JS. The effect of clinical volume on annual and per-patient encounter medical malpractice claims risk. J Patient Saf. Published online March 23, 2020. https://doi.org/10.1097/pts.0000000000000706
30. Wachter RM, Goldman L. The emerging role of “hospitalists” in the American health care system. N Engl J Med. 1996;335(7):514-517. https://doi.org/10.1056/nejm199608153350713
© 2021 Society of Hospital Medicine