Evaluation of Intermittent Energy Restriction and Continuous Energy Restriction on Weight Loss and Blood Pressure Control in Overweight and Obese Patients With Hypertension
Study Overview
Objective. To compare the effects of intermittent energy restriction (IER) with those of continuous energy restriction (CER) on blood pressure control and weight loss in overweight and obese patients with hypertension during a 6-month period.
Design. Randomized controlled trial.
Settings and participants. The trial was conducted at the Affiliated Hospital of Jiaxing University from June 1, 2020, to April 30, 2021. Chinese adults were recruited using advertisements and flyers posted in the hospital and local communities. All participants gave informed consent prior to participating in study activities and were compensated for their time with a $38 voucher at 3 and 6 months.
The main inclusion criteria were age 18 to 70 years, hypertension, and body mass index (BMI) of 24 to 40 kg/m². The exclusion criteria were systolic blood pressure (SBP) ≥ 180 mmHg or diastolic blood pressure (DBP) ≥ 120 mmHg, type 1 or 2 diabetes with a history of severe hypoglycemic episodes, pregnancy or breastfeeding, use of glucagon-like peptide 1 receptor agonists, weight loss > 5 kg within the past 3 months or previous weight-loss surgery, and inability to adhere to the dietary protocol.
Of the 294 participants screened for eligibility, 205 were randomized in a 1:1 ratio to the IER group (n = 102) or the CER group (n = 103), stratified by sex and BMI (as overweight or obese). All participants were required to have a stable medication regimen and weight in the 3 months prior to enrollment and not to use weight-loss drugs or vitamin supplements for the duration of the study. Researchers and participants were not blinded to the study group assignment.
Interventions. Participants randomly assigned to the IER group followed a 5:2 eating pattern: a very-low-energy diet of 500-600 kcal on 2 days of the week and their usual diet on the other 5 days. The 2 days of calorie restriction could be consecutive or nonconsecutive, with a minimum of 0.8 g supplemental protein per kg of body weight per day, in accordance with the 2016 Dietary Guidelines for Chinese Residents. The CER group was advised to follow a 7-day energy restriction of 1000 kcal/day for women and 1200 kcal/day for men; that is, a daily 25% restriction based on the general principles of a Mediterranean-type diet (30% fat, 45%-50% carbohydrate, and 20%-25% protein).
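For concreteness, the two prescriptions can be written out as a minimal sketch. The helper functions below and their names are illustrative assumptions, not part of the study materials.

```python
def ier_restricted_day_targets(weight_kg: float) -> dict:
    """5:2 IER restricted day (2 days/week): a very-low-energy diet of
    500-600 kcal with at least 0.8 g protein per kg body weight per day.
    On the remaining 5 days, participants followed their usual diet."""
    return {"kcal_range": (500, 600), "protein_g_min": round(0.8 * weight_kg, 1)}


def cer_daily_targets(sex: str) -> dict:
    """CER: a 7-day restriction of 1000 kcal/day (women) or 1200 kcal/day (men),
    ie, roughly a 25% daily restriction with Mediterranean-type macronutrients
    (30% fat, 45%-50% carbohydrate, 20%-25% protein)."""
    kcal = 1000 if sex == "female" else 1200
    return {
        "kcal": kcal,
        "macros_pct": {"fat": 30, "carbohydrate": (45, 50), "protein": (20, 25)},
    }


# Example: restricted-day targets for an 80-kg participant
print(ier_restricted_day_targets(80.0))  # {'kcal_range': (500, 600), 'protein_g_min': 64.0}
print(cer_daily_targets("female"))       # 1000 kcal/day with Mediterranean-type macros
```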
Both groups received dietary education from a qualified dietitian and were recommended to maintain their current daily activity levels throughout the trial. Written dietary information brochures with portion advice and sample meal plans were provided to improve compliance in each group. All participants received a digital cooking scale to weigh foods to ensure accuracy of intake and were required to keep a food diary while following the recommended recipe on 2 days/week during calorie restriction to help with adherence. No food was provided. All participants were followed up by regular outpatient visits to both cardiologists and dietitians once a month. Diet checklists, activity schedules, and weight were reviewed to assess compliance with dietary advice at each visit.
Of note, participants were encouraged to measure and record their BP twice daily, and if 2 consecutive BP readings were < 110/70 mmHg and/or accompanied by hypotensive episodes with symptoms (dizziness, nausea, headache, and fatigue), they were asked to contact the investigators directly. Antihypertensive medication changes were then made in consultation with cardiologists. In addition, a medication management protocol (ie, doses of antidiabetic medications, including insulin and sulfonylurea) was designed to avoid hypoglycemia. Medication could be reduced in the CER group based on the basal dose at the endocrinologist’s discretion. In the IER group, insulin and sulfonylureas were discontinued on calorie restriction days only, and long-acting insulin was discontinued the night before the IER day. Insulin was not to be resumed until a full day’s caloric intake was achieved.
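The safety trigger for antihypertensive adjustment lends itself to a short illustration. The checker below is hypothetical (it is not from the study protocol) and assumes a reading counts as low when either value falls below its threshold; the trial's exact rule may differ.

```python
from typing import List, Tuple


def should_contact_investigators(readings: List[Tuple[int, int]],
                                 symptomatic: bool) -> bool:
    """Flag when the trial's stated rule is met: 2 consecutive home BP
    readings below 110/70 mmHg, or a low reading accompanied by hypotensive
    symptoms (dizziness, nausea, headache, fatigue).

    readings: (systolic, diastolic) pairs in mmHg, most recent last.
    """
    low = [sbp < 110 or dbp < 70 for sbp, dbp in readings]  # assumption: either value low
    two_consecutive_low = any(a and b for a, b in zip(low, low[1:]))
    return two_consecutive_low or (symptomatic and bool(low) and low[-1])


# Example: two consecutive low readings trigger contact even without symptoms
print(should_contact_investigators([(118, 76), (108, 68), (106, 66)], False))  # True
```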
Measures and analysis. The primary outcomes of this study were changes in BP and weight (measured using an automatic digital sphygmomanometer and an electronic scale), and the secondary outcomes were changes in body composition (assessed by dual-energy x-ray absorptiometry scanning), as well as glycosylated hemoglobin A1c (HbA1c) levels and blood lipids, after 6 months. All outcome measures were recorded at baseline and at each monthly visit. Incidence rates of hypoglycemia were based on measured blood glucose (< 70 mg/dL) and/or symptomatic hypoglycemia (sweating, paleness, dizziness, and confusion). Two cardiologists who were blinded to the patients’ diet assignment measured and recorded all pertinent clinical parameters and adjudicated serious adverse events.
Data were compared using independent-samples t-tests or the Mann–Whitney U test for continuous variables, and Pearson’s χ2 test or Fisher’s exact test for categorical variables, as appropriate. Repeated-measures ANOVA via a linear mixed model was employed to test the effects of diet, time, and their interaction. In subgroup analyses, differential effects of the intervention on the primary outcomes were evaluated with respect to patients’ level of education, domicile, and sex, based on the statistical significance of the interaction term for the subgroup of interest in the multivariate model. Analyses were performed both on completers and on an intention-to-treat principle.
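The analysis pipeline described here can be sketched in a few lines. The following is a minimal illustration with synthetic data and assumed column names (id, diet, month, weight), not the authors' code; the diet-by-time effect corresponds to the diet:month interaction term.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic long-format data standing in for the trial's repeated measures:
# one row per participant per monthly visit (column names are assumptions).
rng = np.random.default_rng(0)
ids = np.repeat(np.arange(40), 7)                        # 40 participants x 7 visits
month = np.tile(np.arange(7), 40)
diet = np.repeat(rng.choice(["IER", "CER"], size=40), 7)
weight = 85 - 1.0 * month + rng.normal(0, 2, size=ids.size)  # ~1 kg/month loss, no diet effect
df = pd.DataFrame({"id": ids, "diet": diet, "month": month, "weight": weight})

# Baseline between-group comparison (t-test; Mann-Whitney U if non-normal)
base = df[df.month == 0]
t, p = stats.ttest_ind(base.loc[base.diet == "IER", "weight"],
                       base.loc[base.diet == "CER", "weight"])

# Linear mixed model: fixed effects for diet, time, and their interaction,
# with a random intercept per participant for the repeated measures.
fit = smf.mixedlm("weight ~ diet * month", df, groups=df["id"]).fit()
print(fit.summary())  # the diet:month coefficient tests the diet-by-time effect
```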
Main results. Among the 205 randomized participants, 118 were women and 87 were men; mean (SD) age was 50.5 (8.8) years; mean (SD) BMI was 28.7 (2.6); mean (SD) SBP was 143 (10) mmHg; and mean (SD) DBP was 91 (9) mmHg. At the end of the 6-month intervention, 173 (84.4%) completed the study (IER group: n = 88; CER group: n = 85). Both groups had similar dropout rates at 6 months (IER group: 14 participants [13.7%]; CER group: 18 participants [17.5%]; P = .83) and were well matched for baseline characteristics except for triglyceride levels.
In the completers analysis, both groups experienced significant reductions in weight (mean [SEM]), but there was no difference between treatment groups (−7.2 [0.6] kg in the IER group vs −7.1 [0.6] kg in the CER group; diet by time P = .72). Similarly, the changes in SBP and DBP were statistically significant over time, but again with no difference between the dietary interventions (−8 [0.7] mmHg in the IER group vs −8 [0.6] mmHg in the CER group, diet by time P = .68; −6 [0.6] mmHg in the IER group vs −6 [0.5] mmHg in the CER group, diet by time P = .53). Subgroup analyses of the association of the intervention with weight, SBP, and DBP by sex, education, and domicile showed no significant between-group differences.
All measures of body composition decreased significantly at 6 months, with both groups experiencing comparable reductions in total fat mass (−5.5 [0.6] kg in the IER group vs −4.8 [0.5] kg in the CER group, diet by time P = .08) and android fat mass (−1.1 [0.2] kg in the IER group vs −0.8 [0.2] kg in the CER group, diet by time P = .16). Of note, participants in the CER group lost significantly more total fat-free mass than did participants in the IER group (mean [SEM], −2.3 [0.2] kg vs −1.7 [0.2] kg; P = .03), and there was a trend toward a greater reduction in total fat mass in the IER group (P = .08). The secondary outcomes of mean (SEM) HbA1c (−0.2% [0.1%]) and blood lipid levels (triglycerides, −1.0 [0.3] mmol/L; total cholesterol, −0.9 [0.2] mmol/L; low-density lipoprotein cholesterol, −0.9 [0.2] mmol/L; high-density lipoprotein cholesterol, 0.7 [0.3] mmol/L) improved with weight loss (P < .05), with no differences between groups (diet by time P > .05).
The intention-to-treat analysis demonstrated that IER and CER were equally effective for weight loss and blood pressure control: both groups experienced significant reductions in weight, SBP, and DBP, with no difference between treatment groups. Mean (SEM) weight change with IER was −7.0 (0.6) kg vs −6.8 (0.6) kg with CER; mean (SEM) SBP change with IER was −7 (0.7) mmHg vs −7 (0.6) mmHg with CER; and mean (SEM) DBP change with IER was −6 (0.5) mmHg vs −5 (0.5) mmHg with CER (diet by time P = .62, .39, and .41, respectively). There were favorable improvements in the secondary outcomes as well, consistent with the completers analysis.
Conclusion. Two days of severe energy restriction with 5 days of habitual eating is an acceptable alternative to 7 days of CER for BP control and weight loss in overweight and obese individuals with hypertension over 6 months. IER may offer a useful alternative strategy for patients in this population who find continuous weight-loss diets too difficult to maintain.
Commentary
Globally, obesity represents a major health challenge, as it substantially increases the risk of diseases such as hypertension, type 2 diabetes, and coronary heart disease.1 Lifestyle modifications, including weight loss and increased physical activity, are recommended in major guidelines as a first-step intervention in the treatment of hypertensive patients.2 However, lifestyle and behavioral interventions aimed at reducing calorie intake through low-calorie dieting are challenging to implement, as they depend on individual motivation and adherence to a strict, continuous protocol. Further, CER strategies have limited effectiveness because complex and persistent hormonal, metabolic, and neurochemical adaptations defend against weight loss and promote weight regain.3,4 IER has drawn attention in the popular media as an alternative to CER due to its feasibility and even potential for higher rates of compliance.5
This study adds to the literature as it is the first randomized controlled trial (to the knowledge of the authors at the time of publication) to explore 2 forms of energy restriction – CER and IER – and their impact on weight loss, BP, body composition, HbA1c, and blood lipid levels in overweight and obese patients with high blood pressure. Results showed that IER is as effective as, but not superior to, CER in terms of the outcome measures assessed. Specifically, the findings highlighted that the 5:2 diet is an effective strategy, noninferior to daily calorie restriction, for BP and weight control. In addition, both weight loss and BP reduction were greater in the subgroup of obese participants than in overweight participants, which suggests that obese populations may benefit more from energy restriction. As the authors highlight, this study both aligns with and expands on the current related literature.
This study has both strengths and limitations, especially with regard to the design and data analysis strategy. A key strength is the randomized controlled trial design, which increases internal validity and reduces several sources of bias, including selection bias and confounding. In addition, it was designed as a pragmatic trial, with the protocol reflecting efforts to replicate the real-world environment by not supplying meal replacements or food. Notably, only 9 patients could not comply with the protocol, indicating that acceptability of the diet protocol was high. However, as the authors highlight, this was only a 6-month study, and further studies are needed to determine whether a 5:2 diet is sustainable (and effective) in the long term compared with CER. The study was adequately powered to detect clinically meaningful differences in weight loss and SBP, and appropriate analyses were performed both on completers and on an intention-to-treat basis. However, further studies are needed that are adequately powered to detect clinically meaningful differences in the other measures, ie, body composition, HbA1c, and blood lipid levels. Importantly, generalizability of the findings is limited: the study population comprised only Chinese adults who were predominantly middle-aged and overweight and had mildly to moderately elevated SBP and DBP, and patients with diabetes were excluded. Thus, the findings are not necessarily applicable to individuals with highly elevated blood pressure or poorly controlled diabetes.
Applications for Clinical Practice
Results of this study demonstrated that IER is an effective alternative diet strategy for weight loss and blood pressure control in overweight and obese patients with hypertension and is comparable to CER. This is relevant for clinical practice as IER may be easier to maintain in this population compared to continuous weight-loss diets. Importantly, both types of calorie restriction require clinical oversight as medication changes and periodic monitoring of hypotensive and hypoglycemic episodes are needed. Clinicians should consider what is feasible and sustainable for their patients when recommending intermittent energy restriction.
Financial disclosures: None.
1. Blüher M. Obesity: global epidemiology and pathogenesis. Nat Rev Endocrinol. 2019;15(5):288-298. doi:10.1038/s41574-019-0176-8
2. Unger T, Borghi C, Charchar F, et al. 2020 International Society of Hypertension Global hypertension practice guidelines. J Hypertens. 2020;38(6):982-1004. doi:10.1097/HJH.0000000000002453
3. Müller MJ, Enderle J, Bosy-Westphal A. Changes in Energy Expenditure with Weight Gain and Weight Loss in Humans. Curr Obes Rep. 2016;5(4):413-423. doi:10.1007/s13679-016-0237-4
4. Sainsbury A, Wood RE, Seimon RV, et al. Rationale for novel intermittent dieting strategies to attenuate adaptive responses to energy restriction. Obes Rev. 2018;19(Suppl 1):47-60. doi:10.1111/obr.12787
5. Davis CS, Clarke RE, Coulter SN, et al. Intermittent energy restriction and weight loss: a systematic review. Eur J Clin Nutr. 2016;70(3):292-299. doi:10.1038/ejcn.2015.195
Preoperative Code Status Discussion in Older Adults: Are We Doing Enough?
Study Overview
Objective. The objective of this study was to evaluate orders and documentation describing perioperative management of code status in adults.
Design. A retrospective case series of all adult inpatients admitted to hospitals at 1 academic health system in the US.
Setting and participants. This retrospective case series was conducted at 5 hospitals within the University of Pennsylvania Health System. Cases included all adult inpatients admitted to hospitals between March 2017 and September 2018 who had a Do-Not-Resuscitate (DNR) order placed in their medical record during admission and subsequently underwent a surgical procedure that required anesthesia care.
Main outcome measures. Medical records of included cases were manually reviewed by the authors to verify whether a DNR order was in place at the time surgical intervention was discussed with a patient. Clinical notes and DNR orders of eligible cases were reviewed to identify documentation and outcome of goals of care discussions that were conducted within 48 hours prior to the surgical procedure. Collected data included patient demographics (age, sex, race); case characteristics (American Society of Anesthesiologists [ASA] physical status score, anesthesia type [general vs others such as regional], emergency status [emergent vs elective surgery], procedures by service [surgical including hip fracture repair, gastrostomy or jejunostomy, or exploratory laparotomy vs medical including endoscopy, bronchoscopy, or transesophageal echocardiogram]); and hospital policy for perioperative management of DNR orders (written policy encouraging discussion vs written policy plus additional initiatives, including procedure-specific DNR form). The primary outcome was the presence of a preoperative order or note documenting code status discussion or change. Data were analyzed using χ2 and Fisher exact tests and the threshold for statistical significance was P < .05.
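To illustrate the contingency-table tests described, the sketch below applies both tests to the hospital-policy comparison reported under Main results (37 of 55 vs 89 of 389 documented discussions); the table layout and variable names are assumptions, not the authors' code.

```python
from scipy import stats

# 2x2 table of documented preoperative code status discussion by hospital policy,
# using counts reported in the Main results (rows: policy + added measures vs
# written policy only; columns: discussed vs not discussed).
table = [[37, 55 - 37],
         [89, 389 - 89]]

chi2, p, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_exact = stats.fisher_exact(table)  # preferred when expected counts are small
print(f"chi-square P = {p:.3g}; Fisher exact P = {p_exact:.3g}")
```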
Main results. Of the 27 665 inpatient procedures identified across the 5 hospitals, 444 (1.6%) cases met the inclusion criteria. The mean (SD) patient age was 75 (13) years (95% CI, 72-77 years); 247 patients (56%, 95% CI, 55%-57%) were women, and 300 (68%, 95% CI, 65%-71%) were White. A total of 426 patients (96%, 95% CI, 90%-100%) had an ASA physical status score of 3 or higher, and 237 (53%, 95% CI, 51%-56%) received general anesthesia. The most common procedures performed were endoscopy (148 [33%]), hip fracture repair (43 [10%]), and gastrostomy or jejunostomy (28 [6%]). Reevaluation of code status was documented in 126 cases (28%, 95% CI, 25%-31%); code status orders were changed in 20 of 126 cases (16%, 95% CI, 7%-24%), and a note was filed without a corresponding order in 106 of 126 cases (84%, 95% CI, 75%-95%). In the majority of cases in which a documented discussion occurred (109 of 126 [87%], 95% CI, 78%-95%), DNR orders were suspended. Of the 126 cases in which a discussion was documented, the participants included surgeons in 10% (13 cases, 95% CI, 8%-13%), members of the anesthesia team in 51% (64 cases, 95% CI, 49%-53%), and medicine or palliative care clinicians in 39% (49 cases, 95% CI, 37%-41%).
The rate of documented preoperative code status discussion was higher in patients with higher ASA physical status score (35% in patients with an ASA physical status score ≥ 4 [55 of 155] vs 25% in those with an ASA physical status score ≤ 3 [71 of 289]; P = .02). The rates of documented preoperative code status discussion were similar by anesthesia type (29% for general anesthesia [69 of 237 cases] vs 28% [57 of 207 cases] for other modalities; P = .70). The hospitals involved in this study all had a written policy encouraging rediscussion of code status before surgery. However, only 1 hospital reported added measures (eg, provision of a procedure-specific DNR form) to increase documentation of preoperative code status discussions. In this specific hospital, documentation of preoperative code status discussions was higher compared to other hospitals (67% [37 of 55 cases] vs 23% [89 of 389 cases]; P < .01).
Conclusion. In a retrospective case series conducted at 5 hospitals within 1 academic health system in the US, fewer than 1 in 5 patients with preexisting DNR orders had a documented discussion of code status prior to undergoing surgery. Additional strategies, including the development of institutional protocols that facilitate perioperative management of advance directives, identification of local champions, and patient education, should be explored as means to improve preoperative code status reevaluation per guideline recommendations.
Commentary
It is not unusual for patients with a DNR order to require and undergo surgical interventions to treat reversible conditions, prevent progression of underlying disease, or mitigate distressing symptoms such as pain. For instance, intubation, mechanical ventilation, and administration of vasoactive drugs are resuscitative measures that may be needed to safely anesthetize and sedate a patient. As such, the American College of Surgeons1 has provided a statement on advance directives by patients with an existing DNR order to guide management. Specifically, the statement indicates that the best approach for these patients is a policy of “required reconsideration” of the existing DNR order. Required reconsideration means that “the patient or designated surrogate and the physicians who will be responsible for the patient’s care should, when possible, discuss the new intraoperative and perioperative risks associated with the surgical procedure, the patient’s treatment goals, and an approach for potentially life-threatening problems consistent with the patient’s values and preferences.” Moreover, the required reconsideration discussion needs to occur as early as is practical once a decision is made to have surgery, because the discussion “may result in the patient agreeing to suspend the DNR order during surgery and the perioperative period, retaining the original DNR order, or modifying the DNR order.” Given that surgical patients with DNR orders have significant comorbidities, many sustain postoperative complications, and nearly 1 in 4 die within 30 days of surgery, preoperative advance care planning (ACP) and code status discussions are particularly essential to delivering high-quality surgical care.2
In the current study, Hadler et al3 conducted a retrospective analysis to evaluate orders and documentation describing perioperative management of code status in patients with an existing DNR order at an academic health system in the US. The authors reported that fewer than 20% of patients with existing DNR orders had a documented discussion of code status prior to undergoing surgery. These findings add to the evidence that compliance with guidance on required reconsideration discussions is suboptimal in perioperative care in the US.4,5 A recently published study focused on patients older than 60 years undergoing high-risk oncologic or vascular surgeries similarly showed that the frequency of ACP discussions or advance directive documentation among older patients was low.6 This growing body of evidence is clinically important because preoperative code status discussions are central to the care of older adults, a population that accounts for the majority of surgeries and is most vulnerable to poor surgical outcomes. Additionally, it highlights a disconnect between the shared recognition by surgeons and patients that ACP discussion is important in perioperative care and its low implementation rates.
Unsurprisingly, Hadler et al3 reported that added measures, such as the provision of a procedure-specific DNR form, led to an increase in the documentation of preoperative code status discussions in 1 of the hospitals studied. The authors suggested that strategies such as developing institutional protocols aimed at facilitating perioperative advance directive discussions, identifying local champions, and educating patients may be ways to improve preoperative code status reevaluation. The idea that institutional values and culture are key factors influencing surgeon behavior and the practice of ACP discussion is not new. Thus, the creative and adaptable strategies, resources, and training that medical institutions and hospitals require to support preoperative ACP discussions with surgical patients need to be identified, validated, and implemented to optimize perioperative care in vulnerable patients.
Applications for Clinical Practice
The findings from the current study indicate that fewer than 20% of patients with preexisting DNR orders have a documented discussion of code status prior to undergoing surgery. Physicians and health care institutions need to identify barriers to, and implement strategies that facilitate and optimize, preoperative ACP discussions in order to provide patient-centered care to vulnerable surgical patients.
Financial disclosures: None.
1. American College of Surgeons Board of Regents. Statement on Advance Directives by Patients: “Do Not Resuscitate” in the Operating Room. American College of Surgeons. January 3, 2014. Accessed November 6, 2021. https://www.facs.org/about-acs/statements/19-advance-directives
2. Kazaure H, Roman S, Sosa JA. High mortality in surgical patients with do-not-resuscitate orders: analysis of 8256 patients. Arch Surg. 2011;146(8):922-928. doi:10.1001/archsurg.2011.69
3. Hadler RA, Fatuzzo M, Sahota G, Neuman MD. Perioperative Management of Do-Not-Resuscitate Orders at a Large Academic Health System. JAMA Surg. 2021;e214135. doi:10.1001/jamasurg.2021.4135
4. Coopmans VC, Gries CA. CRNA awareness and experience with perioperative DNR orders. AANA J. 2000;68(3):247-256.
5. Urman RD, Lilley EJ, Changala M, Lindvall C, Hepner DL, Bader AM. A Pilot Study to Evaluate Compliance with Guidelines for Preprocedural Reconsideration of Code Status Limitations. J Palliat Med. 2018;21(8):1152-1156. doi:10.1089/jpm.2017.0601
6. Kalbfell E, Kata A, Buffington AS, et al. Frequency of Preoperative Advance Care Planning for Older Adults Undergoing High-risk Surgery: A Secondary Analysis of a Randomized Clinical Trial. JAMA Surg. 2021;156(7):e211521. doi:10.1001/jamasurg.2021.1521
Study Overview
Objective. The objective of this study was to evaluate orders and documentation describing perioperative management of code status in adults.
Design. A retrospective case series of all adult inpatients admitted to hospitals at 1 academic health system in the US.
Setting and participants. This retrospective case series was conducted at 5 hospitals within the University of Pennsylvania Health System. Cases included all adult inpatients admitted to hospitals between March 2017 and September 2018 who had a Do-Not-Resuscitate (DNR) order placed in their medical record during admission and subsequently underwent a surgical procedure that required anesthesia care.
Main outcome measures. Medical records of included cases were manually reviewed by the authors to verify whether a DNR order was in place at the time surgical intervention was discussed with a patient. Clinical notes and DNR orders of eligible cases were reviewed to identify documentation and outcome of goals of care discussions that were conducted within 48 hours prior to the surgical procedure. Collected data included patient demographics (age, sex, race); case characteristics (American Society of Anesthesiologists [ASA] physical status score, anesthesia type [general vs others such as regional], emergency status [emergent vs elective surgery], procedures by service [surgical including hip fracture repair, gastrostomy or jejunostomy, or exploratory laparotomy vs medical including endoscopy, bronchoscopy, or transesophageal echocardiogram]); and hospital policy for perioperative management of DNR orders (written policy encouraging discussion vs written policy plus additional initiatives, including procedure-specific DNR form). The primary outcome was the presence of a preoperative order or note documenting code status discussion or change. Data were analyzed using χ2 and Fisher exact tests and the threshold for statistical significance was P < .05.
Study Overview
Objective. To evaluate orders and documentation describing the perioperative management of code status in adults.
Design. A retrospective case series of all adult inpatients admitted to hospitals at 1 academic health system in the US.
Setting and participants. This retrospective case series was conducted at 5 hospitals within the University of Pennsylvania Health System. Cases included all adult inpatients admitted to hospitals between March 2017 and September 2018 who had a Do-Not-Resuscitate (DNR) order placed in their medical record during admission and subsequently underwent a surgical procedure that required anesthesia care.
Main outcome measures. Medical records of included cases were manually reviewed by the authors to verify whether a DNR order was in place at the time surgical intervention was discussed with a patient. Clinical notes and DNR orders of eligible cases were reviewed to identify documentation and outcomes of goals-of-care discussions conducted within 48 hours prior to the surgical procedure. Collected data included patient demographics (age, sex, race); case characteristics (American Society of Anesthesiologists [ASA] physical status score, anesthesia type [general vs others such as regional], emergency status [emergent vs elective surgery], procedures by service [surgical, including hip fracture repair, gastrostomy or jejunostomy, or exploratory laparotomy, vs medical, including endoscopy, bronchoscopy, or transesophageal echocardiogram]); and hospital policy for perioperative management of DNR orders (written policy encouraging discussion vs written policy plus additional initiatives, including a procedure-specific DNR form). The primary outcome was the presence of a preoperative order or note documenting a code status discussion or change. Data were analyzed using χ2 and Fisher exact tests, with the threshold for statistical significance set at P < .05.
Main results. Of the 27 665 inpatient procedures identified across the 5 hospitals, 444 (1.6%) cases met the inclusion criteria. Patients in these cases had a mean age of 75 (SD 13) years (95% CI, 72-77 years); 247 (56%; 95% CI, 55%-57%) were women and 300 (68%; 95% CI, 65%-71%) were White. A total of 426 patients (96%; 95% CI, 90%-100%) had an ASA physical status score of 3 or higher, and 237 (53%; 95% CI, 51%-56%) received general anesthesia. The most common procedures performed were endoscopy (148 [33%]), hip fracture repair (43 [10%]), and gastrostomy or jejunostomy (28 [6%]). Reevaluation of code status was documented in 126 cases (28%; 95% CI, 25%-31%); code status orders were changed in 20 of the 126 cases (16%; 95% CI, 7%-24%), and a note was filed without a corresponding order in 106 of the 126 cases (84%; 95% CI, 75%-95%). In the majority of cases in which a documented discussion occurred (109 of 126 [87%]; 95% CI, 78%-95%), DNR orders were suspended. Of the 126 cases in which a discussion was documented, the discussion included surgeons in 10% of cases (13 cases; 95% CI, 8%-13%), members of the anesthesia team in 51% (64 cases; 95% CI, 49%-53%), and medicine or palliative care clinicians in 39% (49 cases; 95% CI, 37%-41%).
The rate of documented preoperative code status discussion was higher in patients with a higher ASA physical status score (35% in patients with an ASA physical status score ≥ 4 [55 of 155] vs 25% in those with a score ≤ 3 [71 of 289]; P = .02). Rates of documented preoperative code status discussion were similar by anesthesia type (29% for general anesthesia [69 of 237 cases] vs 28% for other modalities [57 of 207 cases]; P = .70). All of the hospitals in this study had a written policy encouraging rediscussion of code status before surgery, but only 1 hospital reported added measures (eg, provision of a procedure-specific DNR form) to increase documentation of preoperative code status discussions. At this hospital, documentation of preoperative code status discussions was higher than at the other hospitals (67% [37 of 55 cases] vs 23% [89 of 389 cases]; P < .01).
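Each of the comparisons above reduces to a test on a 2 × 2 contingency table. As a minimal sketch of this style of analysis (Python with scipy; a hypothetical reconstruction built from the counts reported above, not the authors' actual analysis code):

# Hypothetical reconstruction of one of the 2 x 2 contingency analyses
# reported above; the counts come from the published results, and the
# choice of test for each comparison is an assumption.
from scipy.stats import chi2_contingency, fisher_exact

# Documented code status discussion by ASA physical status:
# ASA >= 4: 55 of 155 documented; ASA <= 3: 71 of 289 documented.
asa_table = [[55, 155 - 55],
             [71, 289 - 71]]

chi2, p, dof, expected = chi2_contingency(asa_table)
print(f"chi-square: statistic = {chi2:.2f}, P = {p:.3f}")  # P ~ .02, as reported

# Fisher exact test on the same table; preferred when expected cell
# counts are small, and it reaches the same conclusion here.
odds_ratio, p_fisher = fisher_exact(asa_table)
print(f"Fisher exact: OR = {odds_ratio:.2f}, P = {p_fisher:.3f}")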
Conclusion. In a retrospective case series conducted at 5 hospitals within 1 academic health system in the US, fewer than 1 in 5 patients with preexisting DNR orders had a documented discussion of code status prior to undergoing surgery. Additional strategies, including the development of institutional protocols that facilitate perioperative management of advance directives, identification of local champions, and patient education, should be explored as means of improving preoperative code status reevaluation per guideline recommendations.
Commentary
It is not unusual for patients with a DNR order to require and undergo surgical interventions to treat reversible conditions, prevent progression of underlying disease, or mitigate distressing symptoms such as pain. For instance, intubation, mechanical ventilation, and administration of vasoactive drugs are resuscitative measures that may be needed to safely anesthetize and sedate a patient. As such, the American College of Surgeons1 has provided a statement on advance directives by patients with an existing DNR order to guide management. Specifically, the statement indicates that the best approach for these patients is a policy of “required reconsideration” of the existing DNR order. Required reconsideration means that “the patient or designated surrogate and the physicians who will be responsible for the patient’s care should, when possible, discuss the new intraoperative and perioperative risks associated with the surgical procedure, the patient’s treatment goals, and an approach for potentially life-threatening problems consistent with the patient’s values and preferences.” Moreover, the required reconsideration discussion needs to occur as early as is practical once a decision is made to have surgery because the discussion “may result in the patient agreeing to suspend the DNR order during surgery and the perioperative period, retaining the original DNR order, or modifying the DNR order.” Given that surgical patients with DNR orders have significant comorbidities, many sustain postoperative complications, and nearly 1 in 4 die within 30 days of surgery, preoperative advance care planning (ACP) and code status discussions are particularly essential to delivering high-quality surgical care.2
In the current study, Hadler et al3 conducted a retrospective analysis to evaluate orders and documentation describing perioperative management of code status in patients with an existing DNR order at an academic health system in the US. The authors reported that fewer than 20% of patients with existing DNR orders had a documented discussion of code status prior to undergoing surgery. These findings add to the evidence that compliance with guidance on required reconsideration discussions is suboptimal in perioperative care in the US.4,5 A recently published study of patients older than 60 years undergoing high-risk oncologic or vascular surgeries similarly showed that the frequency of ACP discussions and advance directive documentation among older patients was low.6 This growing body of evidence is clinically important because preoperative code status discussions bear directly on the care of older adults, a population that accounts for the majority of surgeries and is most vulnerable to poor surgical outcomes. It also highlights a disconnect between the recognition, shared by surgeons and patients alike, that ACP discussion is important in perioperative care and the low rates at which such discussions are implemented.
Unsurprisingly, Hadler et al3 reported that added measures such as the provision of a procedure-specific DNR form were associated with greater documentation of preoperative code status discussions in 1 of the hospitals studied. The authors suggested that strategies such as the development of institutional protocols aimed at facilitating perioperative advance directive discussions, identifying local champions, and educating patients may be ways to improve preoperative code status reevaluation. The idea that institutional values and culture are key factors shaping surgeon behavior and may influence the practice of ACP discussion is not new. Thus, the creative and adaptable strategies, resources, and training that medical institutions and hospitals require to support preoperative ACP discussions with surgical patients need to be identified, validated, and implemented to optimize perioperative care in vulnerable patients.
Applications for Clinical Practice
The findings from the current study indicate that fewer than 20% of patients with preexisting DNR orders have a documented discussion of code status prior to undergoing surgery. Physicians and health care institutions need to identify barriers to, and implement strategies that facilitate and optimize, preoperative ACP discussions in order to provide patient-centered care to vulnerable surgical patients.
Financial disclosures: None.
1. American College of Surgeons Board of Regents. Statement on Advance Directives by Patients: “Do Not Resuscitate” in the Operating Room. American College of Surgeons. January 3, 2014. Accessed November 6, 2021. https://www.facs.org/about-acs/statements/19-advance-directives
2. Kazaure H, Roman S, Sosa JA. High mortality in surgical patients with do-not-resuscitate orders: analysis of 8256 patients. Arch Surg. 2011;146(8):922-928. doi:10.1001/archsurg.2011.69
3. Hadler RA, Fatuzzo M, Sahota G, Neuman MD. Perioperative Management of Do-Not-Resuscitate Orders at a Large Academic Health System. JAMA Surg. 2021;e214135. doi:10.1001/jamasurg.2021.4135
4. Coopmans VC, Gries CA. CRNA awareness and experience with perioperative DNR orders. AANA J. 2000;68(3):247-256.
5. Urman RD, Lilley EJ, Changala M, Lindvall C, Hepner DL, Bader AM. A Pilot Study to Evaluate Compliance with Guidelines for Preprocedural Reconsideration of Code Status Limitations. J Palliat Med. 2018;21(8):1152-1156. doi:10.1089/jpm.2017.0601
6. Kalbfell E, Kata A, Buffington AS, et al. Frequency of Preoperative Advance Care Planning for Older Adults Undergoing High-risk Surgery: A Secondary Analysis of a Randomized Clinical Trial. JAMA Surg. 2021;156(7):e211521. doi:10.1001/jamasurg.2021.1521
Free Clinic Diagnosis Data Improvement Project Using International Classification of Diseases and Electronic Health Record
From Pacific Lutheran School of Nursing, Tacoma, WA.
Objective: This quality improvement project aimed to enhance The Olympia Free Clinic’s (TOFC) data availability using International Classification of Diseases (ICD) codes and the clinic’s electronic health record (EHR).
Methods: A new system was implemented for inputting ICD codes into Practice Fusion, the clinic’s EHR. During the initial phase, TOFC’s 21 volunteer providers entered the codes associated with the appropriate diagnosis for each of 157 encounters using a simplified map of options, including a map of the 20 most common diagnoses and a more comprehensive 60-code map.
Results: An EHR report found that 128 new diagnoses were entered during project implementation, with hypertension the most common diagnosis, followed by depression and posttraumatic stress disorder.
Conclusion: The knowledge of patient diagnoses enabled the clinic to make more-informed decisions.
Keywords: free clinic, data, quality improvement, electronic health record, International Classification of Diseases
Data create a starting point, a goal, background, and an understanding of needs and context, and they allow for tracking and improvement over time. This quality improvement (QI) project for The Olympia Free Clinic (TOFC) implemented a new system for tracking patient diagnoses. The 21 primary TOFC providers were encouraged to input mapped International Statistical Classification of Diseases and Related Health Problems (ICD) codes into the electronic health record (EHR). The clinic’s providers consisted mostly of retired, and some actively practicing, medical doctors, doctors of osteopathy, nurse practitioners, physician assistants, and psychiatrists.
Prior to this project, the clinic lacked concrete data on patient demographics and diagnoses. For example, the clinic could not accurately answer the National Association of Free and Charitable Clinics’ questions about how many patients TOFC providers saw with diabetes, hypertension, asthma, and hyperlipidemia.1 Additionally, assessments of the needs of the clinic and its population were based on educated guesses.
As a free clinic staffed by volunteers and open 2 days a week, TOFC focused solely on giving care to those who needed it, operating pragmatically and addressing issues as they arose. However, this strategy left the clinic unable to answer questions like “How many TOFC patients have diabetes?” By answering such questions, the clinic can better assess its resource and staffing needs.
Purpose
The project enlisted 21 volunteer providers to record diagnoses through ICD codes for the approximately 2000 active patients between March 22, 2021, and June 15, 2021. Tracking patient diagnoses improves clinic data, outcomes, and decision-making. By working on data improvement, the clinic can better understand its patient population and their needs, enhance clinical care, create better outcomes, make informed decisions, and raise its eligibility for grants. The clinic was at a turning point as it reevaluated its mission statement and decided whether it would continue to focus on acute ailments or expand to formally manage chronic diseases as well. This decision needed to be made with the knowledge, understanding, and context that diagnosis data can provide. For example, the finding that the clinic’s 3 most common diagnoses are chronic conditions suggested that an official shift in its mission may be warranted.
Literature Review
QI projects are effective and common in the free clinic setting.2-4 To the author’s knowledge, no literature to date describes the implementation of a system to better track diagnoses in a free clinic’s EHR using ICD codes.
Data bring value to clinics in many ways, including more informed and better-targeted distribution of resources, such as preventive health and social services, patient education, and medical inventory.4
The focus of the US health care system is shifting to a value-based system under the Patient Protection and Affordable Care Act.5 Outcome measurements and improvement play a key role in this.6 Without knowing diagnoses, we cannot effectively track outcomes and have no data on which to base improvements. Insurance and reimbursement requirements typically hold health care facilities accountable for making these outcomes and improvements a reality.5,6 Free clinics, however, lack these motivations, which explains why a free clinic may be deficient in data and tracking methods. Tracking diagnosis codes will, going forward, allow TOFC to see outcomes and trends over time, track the effectiveness of the treatments, and change course if need be.6
TOFC fully implemented the EHR in 2018, giving the clinic better capabilities for pulling reports and tracking data. Although there were growing pains, many TOFC providers were already familiar with ICD codes, which, along with an EHR, provide a system to easily retrieve, store, and analyze diagnoses for evidence-based and informed decision-making.7 This made using ICD codes and the EHR an obvious choice for tracking patient diagnoses. However, most providers were not entering ICD codes before this project was implemented. Instead, diagnoses were typed into the notes and therefore could not be compiled into a report without opening each chart for each individual encounter and combing through the notes. To make matters worse, providers were never trained on how to enter the codes in the EHR, and most providers saw no reason to do so, because the clinic does not bill for services.
Methods
A needs assessment determined that TOFC lacked data. This QI project used a combination of primary and secondary continuous quality improvement data.8 The primary data came from pulling reports in Practice Fusion to see how many times each diagnosis code was entered during the implementation phase of this project. Secondary data came from interviewing the providers and asking whether they had entered the diagnosis codes.
ICD diagnosis entry
Practice Fusion is the EHR TOFC uses and was therefore the platform for this QI project. Two ICD maps were created, incorporating both International Classification of Diseases, Ninth Revision (ICD-9) and International Classification of Diseases, Tenth Revision (ICD-10) codes. There are tens of thousands of ICD codes in existence, but because TOFC is a free clinic that does not bill or receive reimbursement, the codes did not need to be as specific as they would in a billing clinic. Therefore, the maps put all the variations of each disease into a single category. For example, every patient with diabetes would receive the same ICD code regardless of whether their diabetes was controlled, uncontrolled, or any other variation. The goal of simplifying the codes was to improve compliance with ICD code entry and make reports easier to generate. The maps made the options simpler and, therefore, more user friendly for both the providers and the data collectors pulling reports. Because some ICD-9 codes were already in use, these codes were incorporated so providers could keep using what they were already familiar with. To create the maps, generic ICD codes were selected to represent each disease.
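To make the mapping concrete, each map can be thought of as a lookup from any variant code to a single generic category. The sketch below (Python) uses hypothetical example codes; the clinic’s actual 20-code map appears in the Table.

# Hypothetical sketch of the simplified diagnosis map described above:
# every ICD-9/ICD-10 variant of a disease collapses to one generic code,
# so reports need only be pulled for a short list of categories. The
# example codes here are illustrative, not the clinic's actual map.
GENERIC_CODE = {
    # type 2 diabetes, regardless of control status
    "E11.65": "E11.9",   # with hyperglycemia
    "E11.9": "E11.9",    # without complications
    "250.00": "E11.9",   # legacy ICD-9 entry
    # essential hypertension
    "I10": "I10",
    "401.9": "I10",      # legacy ICD-9 entry
}

def map_diagnosis(raw_code: str) -> str:
    """Return the generic category for a raw code, or the raw code
    unchanged if it is not in the map."""
    return GENERIC_CODE.get(raw_code, raw_code)

print(map_diagnosis("E11.65"))  # -> E11.9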
An initial survey was conducted prior to implementation with 10 providers, 2 nurses, and 2 staff members, asking which diagnoses they thought were seen most often in the clinic. Based on those answers, a map was created with the 20 most commonly used ICD codes, which can be seen in the Table. A more comprehensive map was also created, covering 61 diagnoses.
To start the implementation process, providers were emailed an explanation of the project, the ICD code maps, and step-by-step instructions on how to enter a diagnosis into the EHR. Additionally, forms listing the 20 most common diagnoses were posted on the walls at the provider stations, along with pictures illustrating how to input the codes in the EHR. The more comprehensive map was attached to the nurse clipboards that accompanied each encounter. On the first night each provider volunteered after receiving the email, the researcher reviewed how to input the diagnosis code and had them test the method on a practice patient, either in person or over the phone.
A starting report was pulled on March 22, 2021, covering encounters between September 6, 2017, and March 22, 2021, for the 20 most common diagnoses. Another report was pulled at the completion of the implementation phase, on June 15, 2021, covering March 22, 2021, to June 15, 2021. Willing providers and staff members were surveyed after implementation was complete. The providers were asked whether they used the ICD codes, whether they would do so in the future, and whether they found it helpful when other providers had entered diagnoses. If they answered no to any of the questions, they were asked why, and whether they had any suggestions for improvements. The 4 staff members were asked whether they thought the data were helpful for their role and, if so, how they would use it.
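In this project, the counts came from Practice Fusion’s built-in reports. For a clinic that can export encounter diagnoses to a CSV file, a tally like the following sketch (Python; the file name and column names are assumptions, not Practice Fusion’s actual export format) would reproduce the before-and-after counts for a given date window.

# Hypothetical sketch: tally mapped diagnosis codes from an exported
# encounter CSV within a date window. "encounters.csv" and the column
# names are assumptions; adapt them to the EHR's actual export.
import csv
from collections import Counter
from datetime import date

def tally_codes(path: str, start: date, end: date) -> Counter:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            seen = date.fromisoformat(row["encounter_date"])  # eg, "2021-03-22"
            if start <= seen <= end:
                counts[row["icd_code"]] += 1
    return counts

# The implementation-phase window used in this project.
phase = tally_codes("encounters.csv", date(2021, 3, 22), date(2021, 6, 15))
print(phase.most_common(3))  # the 3 most frequently entered diagnoses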
Surveys
Surveys were conducted after the project was completed with willing and available providers and staff members in order to assess the utility of the project as well as to ensure future improvements and sustainability of the system.
Provider surveys
Do you currently input mapped ICD-10 codes when you chart for each encounter?
Yes No
If yes, do you intend to continue inputting the ICD codes in your encounters in the future?
Yes No
If no to either question above, please explain:
Do you have any recommendations for making it easier to input ICD codes or another way to track patients’ diagnoses?
Staff surveys
Is this data helpful for your role?
Yes No
If yes, how will you use this data?
Results
During the implementation phase, hypertension was the most common diagnosis seen at TOFC, accounting for 35 of the 131 (27%) top-20 diagnoses entered. Depression was second, accounting for about 20% of diagnoses, and posttraumatic stress disorder was third, making up 18% of diagnoses. There were 157 encounters during the implementation phase and 128 ICD diagnoses entered into the chart during this period, suggesting that most encounters had a corresponding diagnosis code entered. See the Table for more details.
Survey results
Provider surveys
Six providers answered the survey questions. Four answered “yes” to both questions and 2 answered “no” to both questions. Reasons cited for not inputting the ICD codes included not remembering to enter the codes or not remembering how to enter them. Recommendations for making entry easier included incorporating the diagnosis into the assessment section of the EHR instead of leaving it as a stand-alone section, replacing ICD-9 codes with ICD-10 codes on the maps, adding more specific code options (such as additional mental health diagnoses), and providing more training on how to enter the codes.
Staff surveys
Three of 4 staff members responded to the survey. All 3 indicated that the data collected from this project assisted in their role. Stated uses for these data included grant applications and funding; community education, such as presentations and outreach; program development and monitoring; quality improvement; supply purchasing (eg, keeping medications in stock to treat the most commonly seen conditions); scheduling clinics and providers; allocating resources and supplies; and accepting or rejecting medical supply donations.
Discussion
Before this project, 668 entries of the top 20 most common diagnosis codes were recorded between TOFC’s introduction of the EHR in 2017 and the beginning of the implementation phase of this project in March 2021. During the 3 months of the implementation phase, 131 diagnoses were entered, representing almost 20% of the number entered over the preceding 3.5 years. Pulling the reports for these 20 diagnoses took less than 1 hour. By contrast, during the needs assessment phase of this project, extracting 3 months of diagnoses from the EHR required combing through provider notes, a process that took 11 hours.
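The arithmetic behind the “almost 20%” comparison is simple to verify directly (Python; the counts are taken from the paragraph above):

# Entries of the top 20 diagnosis codes, from the reports described above.
pre_implementation = 668   # September 2017 through March 2021 (~3.5 years)
implementation = 131       # March 22 through June 15, 2021 (~3 months)
print(f"{implementation / pre_implementation:.1%}")  # 19.6%, ie, almost 20%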
Knowledge of diagnoses and the reasons for clinic attendance help the clinic make decisions about staffing, resources, and services. The TOFC board of directors used this data to assist with the decision of whether or not to change the clinic’s mission to include primary care as an official clinic function. The original purpose of the clinic was to address acute issues for people who lacked the resources for medical care. For example, a homeless person with an abscess could come to the clinic and have the abscess drained and treated. The results of this project illustrate that, in reality, most of the diagnoses actually seen in the clinic are more chronic in nature and require consistent, ongoing care. For instance, the project identified 52 clinic patients receiving consistent diabetic care. This type of data can help the clinic determine whether it should accept diabetes-associated donations and whether it needs to recruit a volunteer diabetes educator. Generally, this data can help guide other decisions as well, like what medications should be kept in the pharmacy, whether there are certain specialists the clinic should seek to partner with, and whether the clinic should embark on any particular education campaigns. By inputting ICD codes, diagnosis data are easily obtained to assist with future decisions.
A limitation of this project was that reports could only be pulled for a specific time frame if the start date of the diagnosis was specified. Because most providers did not indicate a start date with their entered diagnosis code, the only way to compare before and after was to count the totals before and after the implementation time frame. In other words, comparison reports could not be pulled retroactively, so some data on the less common diagnosis codes are missing from this paper, as reports for the comprehensive map were not pulled ahead of time. Providers may have omitted the start date because many of these patients had carried their diagnoses for years, seeing different providers each time, so dating the diagnosis to that particular encounter did not make sense. Additionally, although entering the start date was demonstrated during training, the emphasis and priority were placed on actually entering the ICD code, in an effort to keep the process simple and increase participation.
Conclusion
Evidence-based care and informed decision-making require data, which can be difficult to obtain in a free clinic due to limited staffing and the absence of billing and insurance requirements. ICD codes and EHRs are powerful tools for collecting data and information about clinic needs. This project improved TOFC’s knowledge of the kinds of patients and diagnoses it sees.
Corresponding author: Sarah M. Shanahan, MSN, RN, Pacific Lutheran University School of Nursing, Ramstad, Room 214, Tacoma, WA 98447; [email protected].
Financial disclosures: None.
1. National Association of Free and Charitable Clinics. 2021 NAFC Member Data & Standards Report. https://www.nafcclinics.org/sites/default/files/NAFC%202021%20Data%20Report%20Final.pdf
2. Lee JS, Combs K, Pasarica M; KNIGHTS Research Group. Improving efficiency while improving patient care in a student-run free clinic. J Am Board Fam Med. 2017;30(4):513-519. doi:10.3122/jabfm.2017.04.170044
3. Lu KB, Thiel B, Atkins CA, et al. Satisfaction with healthcare received at an interprofessional student-run free clinic: invested in training the next generation of healthcare professionals. Cureus. 2018;10(3):e2282. doi:10.7759/cureus.2282
4. Tran T, Briones C, Gillet AS, et al. “Knowing” your population: who are we caring for at Tulane University School of Medicine’s student-run free clinics? J Public Health (Oxf). 2020:1-7. doi:10.1007/s10389-020-01389-7
5. Sennett C. Healthcare reform: quality outcomes measurement and reporting. Am Health Drug Benefits. 2010;3(5):350-352.
6. Mazzali C, Duca P. Use of administrative data in healthcare research. Intern Emerg Med. 2015;10(4):517-524. doi:10.1007/s11739-015-1213-9
7. Moons E, Khanna A, Akkasi A, Moens MF. A comparison of deep learning methods for ICD coding of clinical records. Appl Sci. 2020;10(15):5262. doi:10.3390/app10155262
8. Finkelman A. Quality Improvement: A Guide for Integration in Nursing. Jones & Bartlett Learning; 2018.
From Pacific Lutheran School of Nursing, Tacoma, WA.
Objective: This quality improvement project aimed to enhance The Olympia Free Clinic’s (TOFC) data availability using
Methods: A new system was implemented for inputting ICD codes into Practice Fusion, the clinic’s EHR. During the initial phase, TOFC’s 21 volunteer providers entered the codes associated with the appropriate diagnosis for each of 157 encounters using a simplified map of options, including a map of the 20 most common diagnoses and a more comprehensive 60-code map.
Results: An EHR report found that 128 new diagnoses were entered during project implementation, hypertension being the most common diagnosis, followed by depression, then posttraumatic stress disorder.
Conclusion: The knowledge of patient diagnoses enabled the clinic to make more-informed decisions.
Keywords: free clinic, data, quality improvement, electronic health record, International Classification of Diseases
Data creates a starting point, a goal, background, understanding of needs and context, and allows for tracking and improvement over time. This quality improvement (QI) project for The Olympia Free Clinic (TOFC) implemented a new system for tracking patient diagnoses. The 21 primary TOFC providers were encouraged to input mapped International Statistical Classification of Diseases and Related Health Problems (ICD) codes into the electronic health record (EHR). The clinic’s providers consisted of mostly retired, but some actively practicing, medical doctors, doctors of osteopathy, nurse practitioners, physician assistants, and psychiatrists.
Previous to this project, the clinic lacked any concrete data on patient demographics or diagnoses. For example, the clinic was unable to accurately answer the National Association of Free and Charitable Clinics’ questions about how many patients TOFC providers saw with diabetes, hypertension, asthma, and hyperlipidemia.1 Additionally, the needs of the clinic and its population were based on educated guesses.
As a free clinic staffed by volunteers and open 2 days a week, TOFC focused solely on giving care to those who needed it, operating pragmatically and addressing any issues as they arose. However, this strategy left the clinic unable to answer questions like “How many TOFC patients have diabetes?” By answering these questions, the clinic can better assess their resource and staffing needs.
Purpose
The project enlisted 21 volunteer providers to record diagnoses through ICD codes on the approximately 2000 active patients between March 22, 2021, and June 15, 2021. Tracking patient diagnoses improves clinic data, outcomes, and decision-making. By working on data improvement, the clinic can better understand its patient population and their needs, enhance clinical care, create better outcomes, make informed decisions, and raise eligibility for grants. The clinic was at a turning point as they reevaluated their mission statement and decided whether they would continue to focus on acute ailments or expand to formally manage chronic diseases as well. This decision needed to be made with knowledge, understanding, and context, which diagnosis data can provide. For example, the knowledge that the clinic’s 3 most common diagnoses are chronic conditions demonstrated that an official shift in their mission may have been warranted.
Literature Review
QI projects are effective and common in the free clinic setting.2-4 To the author’s knowledge, no literature to date shows the implementation of a system to better track diagnoses using a free clinic’s EHR with ICD codes.
Data bring value to clinics in many ways. It can also lead to more informed and better distribution of resources, such as preventative health and social services, patient education, and medical inventory.4
The focus of the US health care system is shifting to a value-based system under the Patient Protection and Affordable Care Act.5 Outcome measurements and improvement play a key role in this.6 Without knowing diagnoses, we cannot effectively track outcomes and have no data on which to base improvements. Insurance and reimbursement requirements typically hold health care facilities accountable for making these outcomes and improvements a reality.5,6 Free clinics, however, lack these motivations, which explains why a free clinic may be deficient in data and tracking methods. Tracking diagnosis codes will, going forward, allow TOFC to see outcomes and trends over time, track the effectiveness of the treatments, and change course if need be.6
TOFC fully implemented the EHR in 2018, giving the clinic better capabilities for pulling reports and tracking data. Although there were growing pains, many TOFC providers were already familiar with ICD codes, which, along with an EHR, provide a system to easily retrieve, store, and analyze diagnoses for evidence-based and informed decision-making.7 This made using ICD codes and the EHR an obvious choice to track patient diagnoses. However, most of the providers were not putting them in ICD codes before this project was implemented. Instead, diagnoses were typed in the notes and, therefore, not easy to generate in a report without having to open each chart for each individual encounter and combing through the notes. To make matters worse, providers were never trained on how to enter the codes in the EHR, and most providers saw no reason to, because the clinic does not bill for services.
Methods
A needs assessment determined that TOFC lacked data. This QI project used a combination of primary and secondary continuous quality improvement data.8 The primary data came from pulling the reports on Practice Fusion to see how many times each diagnosis code was put in during the implementation phase of this project. Secondary data came from interviewing the providers and asking whether they put in the diagnosis codes.
ICD diagnosis entry
Practice Fusion is the EHR TOFC uses and was therefore the platform for this QI project. Two ICD maps were created, which incorporated both International Classification of Diseases, Ninth Revision (ICD-9) and International Classification of Diseases, Tenth Revision (ICD-10) codes. There are tens of thousands of ICD codes in existence, but because TOFC is a free clinic that does not bill or receive reimbursement, the codes did not need to be as specific as they do in a paid clinic. Therefore, the maps put all the variations of each disease into a single category. For example, every patient with diabetes would receive the same ICD code regardless of whether their diabetes was controlled, uncontrolled, or any other variation. The goal of simplifying the codes was to improve compliance with ICD code entry and make reports easier to generate. The maps allowed the options to be simplified and, therefore, more user friendly for both the providers and the data collectors pulling reports. As some ICD-9 codes were already being used, these codes were incorporated so providers could keep using what they were already familiar with. To create the map, generic ICD codes were selected to represent each disease.
An initial survey was conducted prior to implementation with 10 providers, 2 nurses, and 2 staff members, asking which diagnoses they thought were seen most often in the clinic. Based off those answers, a map was created with the 20 most commonly used ICD codes, which can be seen in the Table. A more comprehensive map was also created, with 61 encompassing diagnoses.
To start the implementation process, providers were emailed an explanation of the project, the ICD code maps, and step-by-step instructions on how to enter a diagnosis into the EHR. Additionally, the 20 most common diagnoses forms were posted on the walls at the provider stations along with pictures illustrating how to input the codes in the EHR. The more comprehensive map was attached to the nurse clipboards that accompanied each encounter. The first night the providers volunteered after receiving the email, the researcher would review with them how to input the diagnosis code and have them test the method on a practice patient, either in person or over the phone.
A starting report was pulled March 22, 2021, covering encounters between September 6, 2017, and March 22, 2021, for the 20 most common diagnoses. Another report was pulled at the completion of the implementation phase, on June 15, 2021, covering March 22, 2021, to June 15, 2021. Willing providers and staff members were surveyed after implementation completion. The providers were asked whether they use the ICD codes, whether they would do so in the future, and whether they found it helpful when other providers had entered diagnoses. If they answered no to any of the questions, there were asked why, and whether they had any suggestions for improvements. The 4 staff members were asked whether they thought the data were helpful for their role and, if so, how they would use it.
Surveys
Surveys were conducted after the project was completed with willing and available providers and staff members in order to assess the utility of the project as well as to ensure future improvements and sustainability of the system.
Provider surveys
Do you currently input mapped ICD-10 codes when you chart for each encounter?
Yes No
If yes, do you intend to continue inputting the ICD codes in your encounters in the future?
Yes No
If no to either question above, please explain:
Do you have any recommendations for making it easier to input ICD codes or another way to track patients’ diagnoses?
Staff surveys
Is this data helpful for your role?
Yes No
If yes, how will you use this data?
Results
During the implementation phase, hypertension was the most common diagnosis seen at TOFC, accounting for 35 of 131 (27%) top 20 diagnoses entered. Depression was second, accounting for about 20% of diagnoses. Posttraumatic stress disorder was the third most common, making up 18% of diagnoses. There were 157 encounters during the implementation phase and 128 ICD diagnoses entered into the chart during this time period, suggesting that most encounters had a corresponding diagnosis code entered. See the Table for more details.
Survey results
Provider surveys
Six providers answered the survey questions. Four answered “yes” to both questions and 2 answered “no” to both questions. Reasons cited for why they did not input the ICD codes included not remembering to enter the codes or not remembering how to enter the codes. Recommendations for making it easier included incorporating the diagnosis in the assessment section of the EHR instead of standing alone as its own section, replacing ICD-9 codes with ICD-10 codes on the maps, making more specific codes for options, like typing more mental health diagnoses, and implementing more training on how to enter the codes.
Staff surveys
Three of 4 staff members responded to the survey. All 3 indicated that the data collected from this project assisted in their role. Stated uses for this data included grant applications and funding; community education, such as presentations and outreach; program development and monitoring; quality improvement; supply purchasing (eg, medications in stock to treat most commonly seen conditions), scheduling clinics and providers; allocating resources and supplies; and accepting or rejecting medical supply donations.
Discussion
Before this project, 668 of the top 20 most common diagnosis codes were entered from when TOFC introduced use of the EHR in the clinic in 2017, until the beginning of the implementation phase of this project in March 2021. During the 3 months of the implementation phase, 131 diagnoses were entered, representing almost 20% of the amount that were entered in 3 and a half years. Pulling the reports for these 20 diagnoses took less than 1 hour. During the needs assessment phase of this project, diagnoses for 3 months were extracted from the EHR by combing through provider notes and extracting the data from the notes—a process that took 11 hours.
Knowledge of diagnoses and the reasons for clinic attendance help the clinic make decisions about staffing, resources, and services. The TOFC board of directors used this data to assist with the decision of whether or not to change the clinic’s mission to include primary care as an official clinic function. The original purpose of the clinic was to address acute issues for people who lacked the resources for medical care. For example, a homeless person with an abscess could come to the clinic and have the abscess drained and treated. The results of this project illustrate that, in reality, most of the diagnoses actually seen in the clinic are more chronic in nature and require consistent, ongoing care. For instance, the project identified 52 clinic patients receiving consistent diabetic care. This type of data can help the clinic determine whether it should accept diabetes-associated donations and whether it needs to recruit a volunteer diabetes educator. Generally, this data can help guide other decisions as well, like what medications should be kept in the pharmacy, whether there are certain specialists the clinic should seek to partner with, and whether the clinic should embark on any particular education campaigns. By inputting ICD codes, diagnosis data are easily obtained to assist with future decisions.
A limitation of this project was that the reports could only be pulled within a certain time frame if the start date of the diagnosis was specified. As most providers did not indicate a start date with their entered diagnosis code, the only way to compare the before and after was to count the total before and the total after the implementation time frame. In other words, comparison reports could not be pulled retroactively, so some data on the less common diagnosis codes are missing from this paper, as reports for the comprehensive map were not pulled ahead of time. Providers may have omitted the start date when entering the diagnosis codes because many of these patients had their diagnoses for years—seeing different providers each time—so starting the diagnosis at that particular encounter did not make sense. Additionally, during training, although how to enter the start date was demonstrated, the emphasis and priority was placed on actually entering the ICD code, in an effort to keep the process simple and increase participation.
Conclusion
Evidence-based care and informed decision-making require data. In a free clinic, this can be difficult to obtain due to limited staffing and the absence of billing and insurance requirements. ICD codes and EHRs are powerful tools to collect data and information about clinic needs. This project improved TOFC’s knowledge about what kind of patients and diagnoses they see.
Corresponding author: Sarah M. Shanahan, MSN, RN, Pacific Lutheran University School of Nursing, Ramstad, Room 214, Tacoma, WA 98447; [email protected].
Financial disclosures: None.
From Pacific Lutheran School of Nursing, Tacoma, WA.
Objective: This quality improvement project aimed to enhance The Olympia Free Clinic’s (TOFC) data availability using
Methods: A new system was implemented for inputting ICD codes into Practice Fusion, the clinic’s EHR. During the initial phase, TOFC’s 21 volunteer providers entered the codes associated with the appropriate diagnosis for each of 157 encounters using a simplified map of options, including a map of the 20 most common diagnoses and a more comprehensive 60-code map.
Results: An EHR report found that 128 new diagnoses were entered during project implementation, hypertension being the most common diagnosis, followed by depression, then posttraumatic stress disorder.
Conclusion: The knowledge of patient diagnoses enabled the clinic to make more-informed decisions.
Keywords: free clinic, data, quality improvement, electronic health record, International Classification of Diseases
Data creates a starting point, a goal, background, understanding of needs and context, and allows for tracking and improvement over time. This quality improvement (QI) project for The Olympia Free Clinic (TOFC) implemented a new system for tracking patient diagnoses. The 21 primary TOFC providers were encouraged to input mapped International Statistical Classification of Diseases and Related Health Problems (ICD) codes into the electronic health record (EHR). The clinic’s providers consisted of mostly retired, but some actively practicing, medical doctors, doctors of osteopathy, nurse practitioners, physician assistants, and psychiatrists.
Previous to this project, the clinic lacked any concrete data on patient demographics or diagnoses. For example, the clinic was unable to accurately answer the National Association of Free and Charitable Clinics’ questions about how many patients TOFC providers saw with diabetes, hypertension, asthma, and hyperlipidemia.1 Additionally, the needs of the clinic and its population were based on educated guesses.
As a free clinic staffed by volunteers and open 2 days a week, TOFC focused solely on giving care to those who needed it, operating pragmatically and addressing any issues as they arose. However, this strategy left the clinic unable to answer questions like “How many TOFC patients have diabetes?” By answering these questions, the clinic can better assess their resource and staffing needs.
Purpose
The project enlisted 21 volunteer providers to record diagnoses through ICD codes on the approximately 2000 active patients between March 22, 2021, and June 15, 2021. Tracking patient diagnoses improves clinic data, outcomes, and decision-making. By working on data improvement, the clinic can better understand its patient population and their needs, enhance clinical care, create better outcomes, make informed decisions, and raise eligibility for grants. The clinic was at a turning point as they reevaluated their mission statement and decided whether they would continue to focus on acute ailments or expand to formally manage chronic diseases as well. This decision needed to be made with knowledge, understanding, and context, which diagnosis data can provide. For example, the knowledge that the clinic’s 3 most common diagnoses are chronic conditions demonstrated that an official shift in their mission may have been warranted.
Literature Review
QI projects are effective and common in the free clinic setting.2-4 To the author’s knowledge, no literature to date shows the implementation of a system to better track diagnoses using a free clinic’s EHR with ICD codes.
Data bring value to clinics in many ways. It can also lead to more informed and better distribution of resources, such as preventative health and social services, patient education, and medical inventory.4
The focus of the US health care system is shifting to a value-based system under the Patient Protection and Affordable Care Act.5 Outcome measurements and improvement play a key role in this.6 Without knowing diagnoses, we cannot effectively track outcomes and have no data on which to base improvements. Insurance and reimbursement requirements typically hold health care facilities accountable for making these outcomes and improvements a reality.5,6 Free clinics, however, lack these motivations, which explains why a free clinic may be deficient in data and tracking methods. Tracking diagnosis codes will, going forward, allow TOFC to see outcomes and trends over time, track the effectiveness of the treatments, and change course if need be.6
TOFC fully implemented the EHR in 2018, giving the clinic better capabilities for pulling reports and tracking data. Although there were growing pains, many TOFC providers were already familiar with ICD codes, which, along with an EHR, provide a system to easily retrieve, store, and analyze diagnoses for evidence-based and informed decision-making.7 This made using ICD codes and the EHR an obvious choice to track patient diagnoses. However, most of the providers were not putting them in ICD codes before this project was implemented. Instead, diagnoses were typed in the notes and, therefore, not easy to generate in a report without having to open each chart for each individual encounter and combing through the notes. To make matters worse, providers were never trained on how to enter the codes in the EHR, and most providers saw no reason to, because the clinic does not bill for services.
Methods
A needs assessment determined that TOFC lacked data. This QI project used a combination of primary and secondary continuous quality improvement data.8 The primary data came from pulling the reports on Practice Fusion to see how many times each diagnosis code was put in during the implementation phase of this project. Secondary data came from interviewing the providers and asking whether they put in the diagnosis codes.
ICD diagnosis entry
Practice Fusion is the EHR TOFC uses and was therefore the platform for this QI project. Two ICD maps were created, which incorporated both International Classification of Diseases, Ninth Revision (ICD-9) and International Classification of Diseases, Tenth Revision (ICD-10) codes. There are tens of thousands of ICD codes in existence, but because TOFC is a free clinic that does not bill or receive reimbursement, the codes did not need to be as specific as they do in a paid clinic. Therefore, the maps put all the variations of each disease into a single category. For example, every patient with diabetes would receive the same ICD code regardless of whether their diabetes was controlled, uncontrolled, or any other variation. The goal of simplifying the codes was to improve compliance with ICD code entry and make reports easier to generate. The maps allowed the options to be simplified and, therefore, more user friendly for both the providers and the data collectors pulling reports. As some ICD-9 codes were already being used, these codes were incorporated so providers could keep using what they were already familiar with. To create the map, generic ICD codes were selected to represent each disease.
An initial survey was conducted prior to implementation with 10 providers, 2 nurses, and 2 staff members, asking which diagnoses they thought were seen most often in the clinic. Based off those answers, a map was created with the 20 most commonly used ICD codes, which can be seen in the Table. A more comprehensive map was also created, with 61 encompassing diagnoses.
To start the implementation process, providers were emailed an explanation of the project, the ICD code maps, and step-by-step instructions on how to enter a diagnosis into the EHR. Additionally, the 20 most common diagnoses forms were posted on the walls at the provider stations along with pictures illustrating how to input the codes in the EHR. The more comprehensive map was attached to the nurse clipboards that accompanied each encounter. The first night the providers volunteered after receiving the email, the researcher would review with them how to input the diagnosis code and have them test the method on a practice patient, either in person or over the phone.
A starting report was pulled March 22, 2021, covering encounters between September 6, 2017, and March 22, 2021, for the 20 most common diagnoses. Another report was pulled at the completion of the implementation phase, on June 15, 2021, covering March 22, 2021, to June 15, 2021. Willing providers and staff members were surveyed after implementation completion. The providers were asked whether they use the ICD codes, whether they would do so in the future, and whether they found it helpful when other providers had entered diagnoses. If they answered no to any of the questions, there were asked why, and whether they had any suggestions for improvements. The 4 staff members were asked whether they thought the data were helpful for their role and, if so, how they would use it.
Surveys
Surveys were conducted after the project was completed with willing and available providers and staff members in order to assess the utility of the project as well as to ensure future improvements and sustainability of the system.
Provider surveys
Do you currently input mapped ICD-10 codes when you chart for each encounter?
Yes No
If yes, do you intend to continue inputting the ICD codes in your encounters in the future?
Yes No
If no to either question above, please explain:
Do you have any recommendations for making it easier to input ICD codes or another way to track patients’ diagnoses?
Staff surveys
Is this data helpful for your role?
Yes No
If yes, how will you use this data?
Results
During the implementation phase, hypertension was the most common diagnosis seen at TOFC, accounting for 35 of 131 (27%) top 20 diagnoses entered. Depression was second, accounting for about 20% of diagnoses. Posttraumatic stress disorder was the third most common, making up 18% of diagnoses. There were 157 encounters during the implementation phase and 128 ICD diagnoses entered into the chart during this time period, suggesting that most encounters had a corresponding diagnosis code entered. See the Table for more details.
Survey results
Provider surveys
Six providers answered the survey questions. Four answered “yes” to both questions and 2 answered “no” to both questions. Reasons cited for why they did not input the ICD codes included not remembering to enter the codes or not remembering how to enter the codes. Recommendations for making it easier included incorporating the diagnosis in the assessment section of the EHR instead of standing alone as its own section, replacing ICD-9 codes with ICD-10 codes on the maps, making more specific codes for options, like typing more mental health diagnoses, and implementing more training on how to enter the codes.
Staff surveys
Three of 4 staff members responded to the survey. All 3 indicated that the data collected from this project assisted in their role. Stated uses for this data included grant applications and funding; community education, such as presentations and outreach; program development and monitoring; quality improvement; supply purchasing (eg, medications in stock to treat most commonly seen conditions), scheduling clinics and providers; allocating resources and supplies; and accepting or rejecting medical supply donations.
Discussion
Before this project, 668 entries of the top 20 most common diagnosis codes had been made between TOFC’s introduction of the EHR in 2017 and the beginning of the implementation phase of this project in March 2021. During the 3 months of the implementation phase, 131 diagnoses were entered, representing almost 20% of the number entered over the preceding 3.5 years. Pulling the reports for these 20 diagnoses took less than 1 hour. By contrast, during the needs assessment phase of this project, extracting 3 months of diagnoses from the EHR by combing through provider notes took 11 hours.
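To make the efficiency gain concrete, the sketch below compares monthly entry rates before and during implementation. Both totals come from the report counts above; the roughly 42-month length of the baseline window (September 2017 to March 2021) is our approximation.

# Comparing ICD entry rates before and during implementation.
# The totals (668 and 131) are from the reports described above; the
# ~42-month baseline window (Sep 2017 - Mar 2021) is an approximation.
baseline_entries, baseline_months = 668, 42
impl_entries, impl_months = 131, 3

print(f"Baseline rate: {baseline_entries / baseline_months:.1f} entries/month")       # ~15.9
print(f"Implementation rate: {impl_entries / impl_months:.1f} entries/month")         # ~43.7
print(f"3-month total as share of baseline total: {impl_entries / baseline_entries:.0%}")  # 20%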
Knowledge of diagnoses and the reasons for clinic attendance helps the clinic make decisions about staffing, resources, and services. The TOFC board of directors used these data to assist with the decision of whether to change the clinic’s mission to include primary care as an official clinic function. The original purpose of the clinic was to address acute issues for people who lacked the resources for medical care. For example, a homeless person with an abscess could come to the clinic and have the abscess drained and treated. The results of this project illustrate that, in reality, most of the diagnoses seen in the clinic are chronic in nature and require consistent, ongoing care. For instance, the project identified 52 clinic patients receiving consistent diabetic care. This type of data can help the clinic determine whether it should accept diabetes-associated donations and whether it needs to recruit a volunteer diabetes educator. More generally, these data can guide other decisions as well, such as which medications should be kept in the pharmacy, whether there are certain specialists the clinic should seek to partner with, and whether the clinic should embark on any particular education campaigns. By inputting ICD codes, diagnosis data are easily obtained to assist with future decisions.
A limitation of this project was that the reports could only be pulled within a certain time frame if the start date of the diagnosis was specified. As most providers did not indicate a start date with their entered diagnosis code, the only way to compare the before and after was to count the total before and the total after the implementation time frame. In other words, comparison reports could not be pulled retroactively, so some data on the less common diagnosis codes are missing from this paper, as reports for the comprehensive map were not pulled ahead of time. Providers may have omitted the start date because many of these patients had carried their diagnoses for years, seeing different providers each time, so dating the diagnosis to that particular encounter did not make sense. Additionally, although entering the start date was demonstrated during training, the emphasis and priority were placed on actually entering the ICD code, in an effort to keep the process simple and increase participation.
Conclusion
Evidence-based care and informed decision-making require data. In a free clinic, such data can be difficult to obtain due to limited staffing and the absence of billing and insurance requirements. ICD codes and EHRs are powerful tools for collecting data and information about clinic needs. This project improved TOFC’s knowledge of the patients and diagnoses it sees.
Corresponding author: Sarah M. Shanahan, MSN, RN, Pacific Lutheran University School of Nursing, Ramstad, Room 214, Tacoma, WA 98447; [email protected].
Financial disclosures: None.
The Use of Nasogastric Tube Bridle Kits in COVID-19 Intensive Care Unit Patients
From Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham, United Kingdom.
Objective: To ascertain the extent of nasogastric tube (NGT) dislodgment in COVID-19 intensive care unit (ICU) patients after the introduction of NGT bridle kits as a standard of practice, and to determine whether this would reduce the number of NGT insertions, patient irradiation, missed feeds, and overall cost.
Background: Nasogastric feeding is the mainstay of enteral feeding for ICU patients. The usual standard of practice is to secure the tube using adhesive tape. Studies show this method has a 40% to 48% dislodgment rate. The COVID-19 ICU patient population may be at even greater risk due to the need for proning, long duration of invasive ventilation, and emergence delirium.
Design: This was a 2-cycle quality improvement project. The first cycle was done retrospectively, looking at the contemporaneous standard of practice where bridle kits were not used. This gave an objective measure of the extent of NGT displacement, associated costs, and missed feeds. The second cycle was carried out prospectively, with the use of NGT bridle kits as the new standard of practice.
Setting: A large United Kingdom teaching hospital with a 100-bed, single-floor ICU.
Participants: Patients admitted to the ICU with COVID-19 who subsequently required sedation and invasive ventilation.
Measurements: Measurements included days of feeding required, hours of feeding missed due to NGT dislodgment, total number of nasogastric tubes required per ICU stay, and number of chest radiographs for NGT position confirmation. NGT-related pressure sores were also recorded.
Results: When compared to the bridled group, the unbridled group required a higher number of NGTs (2.5 vs 1.3; P < .001) and chest radiographs (3.4 vs 1.6; P < .001), had more hours of feeding missed (11.8 vs 5.0), and accumulated a slightly higher total cost (cost of NGTs and chest radiographs, +/- bridle kit: £211.67 vs £210 [US $284.25 vs US $282.01]).
Conclusions: The use of NGT bridle kits reduces the number of NGT insertions patients require and subsequently reduces the number of chest radiographs for each patient. These patients also miss fewer feeds, with no appreciable increase in cost.
Keywords: nasogastric, bridle, enteral, COVID-19, intensive care, quality improvement, safety.
The COVID-19 pandemic has led to a large influx of patients to critical care units in the United Kingdom (UK) and across the world. Figures from the Intensive Care National Audit & Research Centre in May 2020 show that the median length of stay for COVID-19 survivors requiring invasive ventilatory support while on the intensive care unit (ICU) was 15 days.1 For these days at the very least, patients are completely reliant on enteral feeding in order to meet their nutritional requirements. The standard method of enteral feeding when a patient is sedated and ventilated is via a nasogastric tube (NGT). Incorrect placement of an NGT can have devastating consequences, including pneumothorax, fistula formation, ulceration, sepsis, and death. Between September 2011 and March 2016, the National Patient Safety Agency in the UK recorded 95 incidents of feeding into the respiratory tract as a result of incorrect NGT placement.2 With the onset of the pandemic, the prevalence of NGT misplacement increased, with the NHS Improvement team reporting 7 cases of misplaced NGTs within just 3 months (April 1, 2020, through June 30, 2020).3 With over 3 million nasogastric or orogastric tubes inserted each year in the UK, the risk of adverse events is very real.
NGT dislodgment is common, with 1 study putting this figure at 40%.4 Recurrent dislodgment of NGTs disrupts nutrition and may lead to the patient missing a feed at a time when nutrition is vital to recovery from acute illness. Research has shown that NGT bridling reduces the rate of dislodgment significantly (from 40% to 14%).5 Moreover, a 2018 systematic review looking specifically at NGT dislodgment found that 10 of 11 studies showed a significant reduction in dislodgment following use of a bridle kit.6 Bridling an NGT has been shown to significantly reduce the need for percutaneous endoscopic gastrostomy insertion.7 NGT bridle kits have already been used successfully in ICU burn patients, where sloughed skin makes securement particularly difficult with traditional methods.8 With each repeated insertion comes the risk of incorrect placement. COVID-19 ICU patients had specific risk factors for their NGTs becoming dislodged: duration of NGT feeding (in the ICU and on the ward), requirement for proning and de-proning, and post-emergence confusion related to long duration of sedation. Repeated NGT insertion comes with potential risks to the patient and staff, as well as a financial cost. Patient-specific risks include potential for incorrect placement, missed feedings, irradiation (from the patient’s own chest radiographs and from those of others), and discomfort from manual handling and repeated reinsertions. Staff risks include radiation scatter from portable radiographs (especially when dealing with more than 1 patient per bed space), manual handling, and increased pressure on radiographers. Finally, financial costs are related to the NGTs themselves as well as the portable chest radiographs, each of which our Superintendent Radiographer estimates to cost £55 (US $73.86).
The objective of this study was to ascertain the extent of NGT dislodgment in COVID-19 ICU patients after the introduction of NGT bridle kits as a standard of practice and to determine whether this would reduce the number of NGT insertions, patient irradiation, missed feedings, and overall costs. With the introduction of bridle kits, the incidence of pressure sores related to the bridle kit was also recorded.
Methods
Data were collected over 2 cycles, the first retrospectively and the second prospectively, once NGT bridle kits were introduced as an intervention.
Cycle 1. Analyzing the current standard of practice: regular NGT insertion with no use of bridle kit
Cycle 1 was done retrospectively, reviewing the notes of 30 COVID-19 patients admitted to the critical care unit (CCU) between March 11, 2020, and April 20, 2020, at Queen Elizabeth Hospital Birmingham, Birmingham, UK. All patients admitted to the ICU with COVID-19 requiring invasive ventilation were eligible for inclusion in the study. A total of 32 patients were admitted during this time; however, 2 patients were excluded because their NGTs had been inserted prior to ICU admission.
Individual patient notes were searched for:
- days of feeding required during their inpatient stay (this included NGT feeding on the ward post-ICU discharge).
- hours of feeding missed while waiting for NGT reinsertion or chest radiograph due to dislodged or displaced NGTs (during the entire period of enteral feeding, ICU, and ward).
- number of NGT insertions.
- number of chest radiographs purely for NGT position.
Each patient’s first day of feeding and NGT insertion were noted. Following that, the patient electronic note system, the Prescribing Information and Communication System, was used to look for any further chest radiograph requests, which were primarily for NGT position. Using the date and time, the “critical care observations” tab was used to look at fluids and to calculate how long NGT feeding was stopped while NGT position-check x-rays were being awaited. The notes were also checked at this date and time to work out whether a new NGT was inserted or whether an existing tube had been dislodged (if not evident from the x-ray request). Data collection was stopped once either of the following occurred:
- patient no longer required NGT feeding.
- patient was transferred to another hospital.
- death.
The cost of an NGT was taken as the average of the costs of size 8 and size 12 tubes, which worked out to £10 (US $13.43). As mentioned earlier, the cost of each radiograph was estimated by the Superintendent Radiographer (£55).
Cycle 2. Implementing a change: introduction of NGT bridle kit (Applied Medical Technology Bridle) as standard of practice
The case notes of 54 patients admitted to the COVID-19 CCU at Queen Elizabeth Hospital Birmingham, Birmingham, UK, between February 8, 2021, and April 17, 2021, were retrospectively reviewed. The inclusion criteria were admission to the CCU due to COVID-19, requirement for NGT feeding, and bridling on admission. Case notes were reviewed for:
- Length of CCU stay
- Days of feeding required during the hospital stay
- Hours of feeding missed while waiting for a chest radiograph due to displaced NGTs
- Number of NGT insertions
- Number of chest radiographs to confirm NGT position
- Bridling of NGTs
- Documented pressure sores related to the bridle or NGT, or referrals for wound management advice (Tissue Viability Team) as a consequence of the NGT bridle
Results
Of the 54 patients admitted, 31 had their NGTs bridled. Data were collected as in the first cycle, with individual notes analyzed on the online system (Table). Additionally, notes were reviewed for documentation of pressure sores related to NGT bridling, and the “requests” tab as well as the “noting” function were used to identify referrals for “Wound Management Advice” (Tissue Viability Review).
The average length of stay for this ICU cohort was 17.6 days, underscoring the reliance of patients admitted to the CCU on NGT feeding. The results of this project can be summarized as follows: the use of NGT bridle kits leads to a significant reduction in the total number of NGTs a patient requires during intensive care. As a result, there is a significant reduction in the number of chest radiographs required to confirm NGT position. Missed feedings can also be reduced by using a bridle kit. These advantages come at no additional cost.
On average, bridled patients required 1.3 NGTs, compared to 2.5 before bridles were introduced. The fewer NGTs inserted, the less chance of an NGT-associated injury occurring.
The number of chest radiographs required to confirm NGT position after resiting also fell, from 3.4 to 1.6. This has numerous advantages. There is a financial savings of £99 (US $133.04) per patient from the reduced number of chest x-rays. Although this does not offset the price of the bridle kit itself, there are other less easily quantifiable costs that are reduced. For instance, patients are highly catabolic during severe infection, and their predominant energy source comes from their feedings. Missed feedings are associated with longer length of stay in the ICU and in the hospital in general.9 Bridle kits have the potential to reduce the number of missed feedings by ensuring the NGT remains in the correct position.
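As a check on the figure above, the per-patient radiograph saving follows directly from the reported mean radiograph counts and the £55 unit cost. The sketch below reproduces it; the GBP-to-USD rate is inferred from the article’s own £99 to US $133.04 conversion rather than assumed independently.

# Worked check of the per-patient radiograph saving reported above.
# The mean radiograph counts (3.4 vs 1.6) and the £55 unit cost are
# from the text; the exchange rate (~1.344) is implied by the reported
# £99 -> US $133.04 conversion.
cost_per_cxr_gbp = 55.0
cxr_unbridled, cxr_bridled = 3.4, 1.6
gbp_to_usd = 133.04 / 99.0

saving_gbp = (cxr_unbridled - cxr_bridled) * cost_per_cxr_gbp
print(f"Saving per patient: £{saving_gbp:.2f} (US ${saving_gbp * gbp_to_usd:.2f})")
# Saving per patient: £99.00 (US $133.04)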
Discussion
Many of the results align with what is already known in the literature. A meta-analysis from 2014 concluded that dislodgment is reduced with the use of a bridle kit.5 This change underpins many of the advantages seen, as an NGT that stays in place means additional radiographs are not required and feeding is not delayed.
COVID-19 critical care patients are very fragile and are dependent on ventilators for the majority of their stay. They are often on very high levels of ventilator support, and moving such a patient can lead to desaturation or difficulties in ventilation. Therefore, any reduction in the manual handling required for portable chest radiographs minimizes the chances of further negative events. Furthermore, nursing staff, along with the radiographers, are often the ones who must move these patients so that the x-ray film can be placed behind the patient. This task is not easy, especially with limited personnel, and has the potential to cause injuries to both patients and staff members.
The knock-on effect of fewer NGTs and x-rays is also a reduction in work for the portable radiography team, for whom coming onto the COVID-19 CCU is a very time- and resource-consuming process. Not only does the machine itself need to be wiped down thoroughly after use, but the individual must also use personal protective equipment (PPE) each time. There is a cost associated with the PPE itself, as well as with the time it takes to don and doff it appropriately.
A reduction in chest radiographs reduces the irradiation of the patient and the potential irradiation of staff members. With bridling of the NGT, the radiation exposure is more than halved for the patient. Because the COVID ICU is often very busy, with patients in some cases being doubled up in a bed space, the scatter radiation is high. This can be reduced if fewer chest radiographs are required.
Anecdotal evidence also illustrates an additional benefit of reducing the mean number of NGT insertions per patient. Over the studied period, we identified 2 traumatic pneumothoraces related to NGT insertion on the COVID-19 CCU, highlighting the potential risks of NGT insertion and the need to reduce its frequency where possible.
One concern noted was that bridles could increase the incidence of pressure sores. Of the patients represented in this study, only 1 suffered a pressure sore (grade 2) directly related to the bridle. A subpopulation of patients who were not bridled was also noted. Although this group was considerably smaller than the main group, we noted 2 incidents of pressure sores from their standard NGT and securement devices. Some studies have alluded to the potential for increased skin complications with bridle kits; however, studies looking specifically at kits using umbilical tape (as in this study) show no significant increase in skin damage.10 This leaves us confident that there is no increased risk of pressure sores related to the bridling of patients when umbilical tape is used with the bridle kit.
NGT bridles require training to insert safely. With the introduction of bridling, our hospital’s nursing staff underwent training in order to become proficient with the bridle kits. This comes with a time commitment, and, as with other equipment, it takes time to build confidence. However, in this study, no concerns were raised by nursing staff regarding the difficulty of insertion or the time taken to do so.
Our study adds an objective measure of the benefits provided by bridle kits. Not only was there a reduction in the number of NGT insertions required, but we were also able to show a significant reduction in the number of chest radiographs required, as well as in the amount of time feeding was missed. While apprehension regarding bridle kits may focus on cost, this study has shown that the savings more than make up for the initial cost of the kit itself.
Although the patient demographics, systemic effects, and treatment of COVID-19 are similar between different ICUs, a single-center study does have limitations. One of these is the potential for an intervention in a single-center study to lead to a larger effect than that of multicenter studies.11 But as seen in previous studies, the dislodgment of NGTs is not just an issue in this ICU.12 COVID-19–specific risk factors for NGT dislodgment also apply to all patients requiring invasive ventilation and proning.
Identification of whether a new NGT was inserted, or whether the existing NGT was replaced following dislodgment, relied on accurate documentation by the relevant staff. The case notes did not always make this explicitly clear. Unlike other commonly performed procedures, NGT insertion is not formally documented under the procedures heading and, on occasion, is not documented at all. We recognize that manually searching notes yields only the NGT insertions that were formally documented, so the number recorded may be lower than the actual number of NGTs inserted. However, when x-ray requests are cross-referenced with the notes, there is a significant degree of confidence that the vast majority of insertions were picked up.
One patient identified in the study required a Ryle’s tube as part of their critical care treatment. While similar in nature to an NGT, a Ryle’s tube cannot be secured with a bridle and is at increased risk of dislodging during the patient’s critical care stay. The intended benefit of the bridle kit therefore does not extend to patients with Ryle’s tubes.
Conclusion
The COVID-19 critical care population requires significant time on invasive ventilation and remains dependent on NGT feeding during this period. The risk of NGT dislodgment can be mitigated by using a bridle kit, as the number of NGT insertions a patient requires is significantly reduced. Not only does this reduce the risk of inadvertent misplacement, but it also yields cost savings and increases safety for staff and patients. From this study, the risk of pressure injuries is not significant. The benefit of NGT bridling may extend to other non-COVID long-stay ICU patients.
Future research looking at the efficacy of bridle kits in larger patient groups will help confirm the benefits seen in this study and will also provide better information with regard to any long-term complications associated with bridles.
Corresponding author: Rajveer Atkar, MBBS, Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham B15 2GW, United Kingdom; [email protected].
Financial disclosures: None.
1. Intensive Care National Audit & Research Centre. ICNARC report on COVID-19 in critical care 15 May 2020. https://www.icnarc.org/DataServices/Attachments/Download/cbcb6217-f698-ea11-9125-00505601089b
2. NHS. Nasogastric tube misplacement: continuing risk of death and severe harm. July 22, 2016. https://www.england.nhs.uk/2016/07/nasogastric-tube-misplacement-continuing-risk-of-death-severe-harm/
3. NHS. Provisional publication of never events reported as occurring between 1 April and 30 June 2020. https://www.england.nhs.uk/wp-content/uploads/2020/08/Provisional_publication_-_NE_1_April_-_30_June_2020.pdf
4. Meer JA. Inadvertent dislodgement of nasoenteral feeding tubes: incidence and prevention. JPEN J Parenter Enteral Nutr. 1987;11(2):187-189. doi:10.1177/0148607187011002187
5. Bechtold ML, Nguyen DL, Palmer L, et al. Nasal bridles for securing nasoenteric tubes: a meta-analysis. Nutr Clin Pract. 2014;29(5):667-671. doi:10.1177/0884533614536737
6. Lynch A, Tang CS, Jeganathan LS, Rockey JG. A systematic review of the effectiveness and complications of using nasal bridles to secure nasoenteral feeding tubes. Aust J Otolaryngol. 2018;1:8. doi:10.21037/ajo.2018.01.01
7. Johnston R, O’Dell L, Patrick M, Cole OT, Cunliffe N. Outcome of patients fed via a nasogastric tube retained with a bridle loop: Do bridle loops reduce the requirement for percutaneous endoscopic gastrostomy insertion and 30-day mortality? Proc Nutr Soc. 2008;67:E116. doi:10.1017/S0029665108007489
8. Li AY, Rustad KC, Long C, et al. Reduced incidence of feeding tube dislodgement and missed feeds in burn patients with nasal bridle securement. Burns. 2018;44(5):1203-1209. doi:10.1016/j.burns.2017.05.025
9. Peev MP, Yeh DD, Quraishi SA, et al. Causes and consequences of interrupted enteral nutrition: a prospective observational study in critically ill surgical patients. JPEN J Parenter Enteral Nutr. 2015;39(1):21-27. doi:10.1177/0148607114526887
10. Seder CW, Janczyk R. The routine bridling of nasojejunal tubes is a safe and effective method of reducing dislodgement in the intensive care unit. Nutr Clin Pract. 2008;23(6):651-654.
11. Dechartres A, Boutron I, Trinquart L, Charles P, Ravaud P. Single-center trials show larger treatment effects than multicenter trials: evidence from a meta-epidemiologic study. Ann Intern Med. 2011;155:39-51. doi:10.7326/0003-4819-155-1-201107050-00006
12. Morton B, Hall R, Ridgway T, Al-Rawi O. Nasogastric tube dislodgement: a problem on our ICU. Crit Care. 2013;17(suppl 2):P242. doi:10.1186/cc12180
From Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham, United Kingdom.
Objective: To ascertain the extent of nasogastric tube (NGT) dislodgment in COVID-19 intensive care unit (ICU) patients after the introduction of NGT bridle kits as a standard of practice, to see whether this would reduce the number of NGT insertions, patient irradiation, missed feeds, and overall cost.
Background: Nasogastric feeding is the mainstay of enteral feeding for ICU patients. The usual standard of practice is to secure the tube using adhesive tape. Studies show this method has a 40% to 48% dislodgment rate. The COVID-19 ICU patient population may be at even greater risk due to the need for proning, long duration of invasive ventilation, and emergence delirium.
Design: This was a 2-cycle quality improvement project. The first cycle was done retrospectively, looking at the contemporaneous standard of practice where bridle kits were not used. This gave an objective measure of the extent of NGT displacement, associated costs, and missed feeds. The second cycle was carried out prospectively, with the use of NGT bridle kits as the new standard of practice.
Setting: A large United Kingdom teaching hospital with a 100-bed, single-floor ICU.
Participants: Patients admitted to the ICU with COVID-19 who subsequently required sedation and invasive ventilation.
Measurements: Measurements included days of feeding required, hours of feeding missed due to NGT dislodgment, total number of nasogastric tubes required per ICU stay, and number of chest radiographs for NGT position confirmation. NGT-related pressure sores were also recorded.
Results: When compared to the bridled group, the unbridled group required a higher number of NGTs (2.5 vs 1.3; P < .001) and chest radiographs (3.4 vs 1.6; P < .001), had more hours of feeding missed (11.8 vs 5.0), and accumulated a slightly higher total cost (cost of NGT, chest radiographs +/- bridle kit: £211.67 vs £210, [US $284.25 vs US $282.01]).
Conclusions: The use of NGT bridle kits reduces the number of NGT insertions patients require and subsequently reduces the number of chest radiographs for each patient. These patients also miss fewer feeds, with no appreciable increase in cost.
Keywords: nasogastric, bridle, enteral, COVID-19, intensive care, quality improvement, safety.
The COVID-19 pandemic has led to a large influx of patients to critical care units in the United Kingdom (UK) and across the world. Figures from the Intensive Care National Audit & Research Centre in May 2020 show that the median length of stay for COVID-19 survivors requiring invasive ventilatory support while on the intensive care unit (ICU) was 15 days.1 For these days at the very least, patients are completely reliant on enteral feeding in order to meet their nutritional requirements.The standard method of enteral feeding when a patient is sedated and ventilated is via a nasogastric tube (NGT). Incorrect placement of an NGT can have devastating consequences, including pneumothorax, fistula formation, ulceration, sepsis, and death. Between September 2011 and March 2016, the National Patient Safety Agency in the UK recorded 95 incidents of feeding into the respiratory tract as a result of incorrect NGT placement.2 With the onset of the pandemic, the prevalence of NGT misplacement increased, with the NHS Improvement team reporting 7 cases of misplaced NGTs within just 3 months (April 1, 2020, through June 30, 2020).3 With over 3 million nasogastric or orogastric tubes inserted each year in the UK, the risk of adverse events is very real.
NGT dislodgment is common, with 1 study putting this figure at 40%.4 Recurrent dislodgment of NGTs disrupts nutrition and may lead to the patient missing a feed in a time where nutrition is vital during acute illness. Research has showed that NGT bridling reduces the rate of dislodgment significantly (from 40% to 14%).5 Moreover, a 2018 systematic review looking specifically at NGT dislodgment found 10 out of 11 studies showed a significant reduction in dislodgment following use of a bridle kit.6 Bridling an NGT has been shown to significantly reduce the need for percutaneous endoscopic gastrostomy insertion.7 NGT bridle kits have already been used successfully in ICU burn patients, where sloughed skin makes securement particularly difficult with traditional methods.8 With each repeated insertion comes the risk of incorrect placement. COVID-19 ICU patients had specific risk factors for their NGTs becoming dislodged: duration of NGT feeding (in the ICU and on the ward), requirement for proning and de-proning, and post-emergence confusion related to long duration of sedation. Repeated NGT insertion comes with potential risks to the patient and staff, as well as a financial cost. Patient-specific risks include potential for incorrect placement, missed feedings, irradiation (from the patient’s own chest radiograph and from others), and discomfort from manual handling and repeat reinsertions. Staff risk factors include radiation scatter from portable radiographs (especially when dealing with more than 1 patient per bed space), manual handling, and increased pressure on radiographers. Finally, financial costs are related to the NGTs themselves as well as the portable chest radiograph, which our Superintendent Radiographer estimates to be £55 (US $73.86).
The objective of this study was to ascertain the extent of NGT dislodgment in COVID-19 ICU patients after the introduction of NGT bridle kits as a standard of practice and to determine whether this would reduce the number of NGT insertions, patient irradiation, missed feedings, and overall costs. With the introduction of bridle kits, incidence of pressure sores related to the bridle kit were also recorded.
Methods
Data were collected over 2 cycles, the first retrospectively and the second prospectively, once NGT bridle kits were introduced as an intervention.
Cycle 1. Analyzing the current standard of practice: regular NGT insertion with no use of bridle kit
Cycle 1 was done retrospectively, looking at 30 patient notes of COVID-19 patients admitted to the critical care unit (CCU) between March 11, 2020, and April 20, 2020, at Queen Elizabeth Hospital Birmingham, Birmingham, UK. All patients admitted to the ICU with COVID-19 requiring invasive ventilation were eligible for inclusion in the study. A total of 32 patients were admitted during this time; however, 2 patients were excluded due to NGTs being inserted prior to ICU admission.
Individual patient notes were searched for:
- days of feeding required during their inpatient stay (this included NGT feeding on the ward post-ICU discharge).
- hours of feeding missed while waiting for NGT reinsertion or chest radiograph due to dislodged or displaced NGTs (during the entire period of enteral feeding, ICU, and ward).
- number of NGT insertions.
- number of chest radiographs purely for NGT position.
Each patient’s first day of feeding and NGT insertion were noted. Following that, the patient electronic note system, the Prescribing Information and Communication System, was used to look for any further chest radiograph requests, which were primarily for NGT position. Using the date and time, the “critical care observations” tab was used to look at fluids and to calculate how long NGT feeding was stopped while NGT position-check x-rays were being awaited. The notes were also checked at this date and time to work out whether a new NGT was inserted or whether an existing tube had been dislodged (if not evident from the x-ray request). Data collection was stopped once either of the following occurred:
- patient no longer required NGT feeding.
- patient was transferred to another hospital.
- death.
The cost of the NGT was averaged between the cost of size 8 and 12, which worked out to be £10 (US $13.43). As mentioned earlier, each radiograph cost was determined by the Superintendent Radiographer (£55).
Cycle 2. Implementing a change: introduction of NGT bridle kit (Applied Medical Technology Bridle) as standard of practice
The case notes of 54 patients admitted to the COVID-19 CCU at the Queen Elizabeth Hospital Birmingham, Birmingham, UK, were retrospectively reviewed between February 8, 2021, and April 17, 2021. The inclusion criteria consisted of: admitted to the CCU due to COVID-19, required NGT feeding, and was bridled on admission. Case notes were retrospectively reviewed for:
- Length of CCU stay
- Days of feeding required during the hospital stay
- Hours of feeding missed while waiting for a chest radiograph due to displaced NGTs
- Number of NGT insertions
- Number of chest radiographs to confirm NGT position
- Bridling of NGTs
- Documented pressure sores related to the bridle or NGT, or referrals for wound management advice (Tissue Viability Team) as a consequence of the NGT bridle
Results
Of the 54 patients admitted, 31 had their NGTs bridled. Data were collected as in the first cycle, with individual notes analyzed on the online system (Table). Additionally, notes were reviewed for documentation of pressure sores related to NGT bridling, and the “requests” tab as well as the “noting” function were used to identify referrals for “Wound Management Advice” (Tissue Viability Review).
The average length of stay for this ICU cohort was 17.6 days. This reiterates the reliance on NGT feeding of patients admitted to the CCU. The results from this project can be summarized as follows: The use of NGT bridle kits leads to a significant reduction in the total number of NGTs a patient requires during intensive care. As a result, there is a significant reduction in the number of chest radiographs required to confirm NGT position. Feedings missed can also be reduced by using a bridle kit. These advantages all come with no additional cost.
On average, bridled patients required 1.3 NGTs, compared to 2.5 before bridles were introduced. The fewer NGTs inserted, the less chance of an NGT-associated injury occurring.
The number of chest radiographs required to confirm NGT position after resiting also fell, from 3.4 to 1.6. This has numerous advantages. There is a financial savings of £99 (US $133.04) per patient from the reduced number of chest x-rays. Although this does not offset the price of the bridle kit itself, there are other less easily quantifiable costs that are reduced. For instance, patients are highly catabolic during severe infection, and their predominant energy source comes from their feedings. Missed feedings are associated with longer length of stay in the ICU and in the hospital in general.9 Bridle kits have the potential to reduce the number of missed feedings by ensuring the NGT remains in the correct position.
Discussion
Many of the results are aligned with what is already known in the literature. A meta-analysis from 2014 concluded that dislodgment is reduced with the use of a bridle kit.6 This change is what underpins many of the advantages seen, as an NGT that stays in place means additional radiographs are not required and feeding is not delayed.
COVID-19 critical care patients are very fragile and are dependent on ventilators for the majority of their stay. They are often on very high levels of ventilator support and moving the patient can lead to desaturation or difficulties in ventilation. Therefore, reduction in any manual handling occurring as a result of the need for portable chest radiographs minimizes the chances of further negative events. Furthermore, nursing staff, along with the radiographers, are often the ones who must move these patients in order for the x-ray film to be placed behind the patient. This task is not easy, especially with limited personnel, and has the potential to cause injuries to both patients and staff members.
The knock-on effect of reduced NGTs and x-rays is also a reduction of work for the portable radiography team, in what is a very time- and resource-consuming process of coming onto the COVID-19 CCU. Not only does the machine itself need to be wiped down thoroughly after use, but also the individual must use personal protective equipment (PPE) each time. There is a cost associated with PPE itself, as well as the time it takes to don and doff appropriately.
A reduction in chest radiographs reduces the irradiation of the patient and the potential irradiation of staff members. With bridling of the NGT, the radiation exposure is more than halved for the patient. Because the COVID ICU is often very busy, with patients in some cases being doubled up in a bed space, the scatter radiation is high. This can be reduced if fewer chest radiographs are required.
An additional benefit of a reduction in the mean number of NGT insertions per patient is also illustrated by anecdotal evidence. Over the studied period, we identified 2 traumatic pneumothoraces related to NGT insertion on the COVID-19 CCU, highlighting the potential risks of NGT insertion and the need to reduce its frequency, if possible.
One concern noted was that bridles could cause increased incidence of pressure sores. In the patients represented in this study, only 1 suffered a pressure sore (grade 2) directly related to the bridle. A subpopulation of patients not bridled was also noted. This was significantly smaller than the main group; however, we had noted 2 incidences of pressure sores from their standard NGT and securement devices. Some studies have alluded to the potential for increased skin complications with bridle kits; however, studies looking specifically at kits using umbilical tape (as in this study) show no significant increase in skin damage.10 This leaves us confident that there is no increased risk of pressure sores related to the bridling of patients when umbilical tape is used with the bridle kit.
NGT bridles require training to insert safely. With the introduction of bridling, our hospital’s nursing staff underwent training in order to be proficient with the bridle kits. This comes with a time commitment, and, like other equipment usage, it takes time to build confidence. However, in this study, there were no concerns raised from nursing staff regarding difficulty of insertion or the time taken to do so.
Our study adds an objective measure of the benefits provided by bridle kits. Not only was there a reduction in the number of NGT insertions required, but we were also able to show a significant reduction in the number of chest radiographs required as well in the amount of time feeding is missed. While apprehension regarding bridle kits may be focused on cost, this study has shown that the savings more than make up for the initial cost of the kit itself.
Although the patient demographics, systemic effects, and treatment of COVID-19 are similar between different ICUs, a single-center study does have limitations. One of these is the potential for an intervention in a single-center study to lead to a larger effect than that of multicenter studies.11 But as seen in previous studies, the dislodgment of NGTs is not just an issue in this ICU.12 COVID-19–specific risk factors for NGT dislodgment also apply to all patients requiring invasive ventilation and proning.
Identification of whether a new NGT was inserted, or whether the existing NGT was replaced following dislodging of an NGT, relied on accurate documentation by the relevant staff. The case notes did not always make this explicitly clear. Unlike other procedures commonly performed, documentation of NGT insertion is not formally done under the procedures heading, and, on occasion is not done at all. We recognize that manually searching notes only yields NGT insertions that have been formally documented. There is a potential for the number recorded to be lower than the actual number of NGTs inserted. However, when x-ray requests are cross-referenced with the notes, there is a significant degree of confidence that the vast majority of insertions are picked up.
One patient identified in the study required a Ryle’s tube as part of their critical care treatment. While similar in nature to an NGT, these are unable to fit into a bridle and are at increased risk of dislodging during the patient’s critical care stay. The intended benefit of the bridle kit does not therefore extend to patients with Ryle’s tubes.
Conclusion
The COVID-19 critical care population requires significant time on invasive ventilation and remains dependent on NGT feeding during this process. The risk of NGT dislodgment can be mitigated by using a bridle kit, as the number of NGT insertions a patient requires is significantly reduced. Not only does this reduce the risk of inadvertent misplacement but also has a cost savings, as well as increasing safety for staff and patients. From this study, the risk of pressure injuries is not significant. The benefit of NGT bridling may be extended to other non-COVID long-stay ICU patients.
Future research looking at the efficacy of bridle kits in larger patient groups will help confirm the benefits seen in this study and will also provide better information with regard to any long-term complications associated with bridles.
Corresponding author: Rajveer Atkar, MBBS, Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham B15 2GW, United Kingdom; [email protected].
Financial disclosures: None.
From Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham, United Kingdom.
Objective: To ascertain the extent of nasogastric tube (NGT) dislodgment in COVID-19 intensive care unit (ICU) patients after the introduction of NGT bridle kits as a standard of practice, to see whether this would reduce the number of NGT insertions, patient irradiation, missed feeds, and overall cost.
Background: Nasogastric feeding is the mainstay of enteral feeding for ICU patients. The usual standard of practice is to secure the tube using adhesive tape. Studies show this method has a 40% to 48% dislodgment rate. The COVID-19 ICU patient population may be at even greater risk due to the need for proning, long duration of invasive ventilation, and emergence delirium.
Design: This was a 2-cycle quality improvement project. The first cycle was done retrospectively, looking at the contemporaneous standard of practice where bridle kits were not used. This gave an objective measure of the extent of NGT displacement, associated costs, and missed feeds. The second cycle was carried out prospectively, with the use of NGT bridle kits as the new standard of practice.
Setting: A large United Kingdom teaching hospital with a 100-bed, single-floor ICU.
Participants: Patients admitted to the ICU with COVID-19 who subsequently required sedation and invasive ventilation.
Measurements: Measurements included days of feeding required, hours of feeding missed due to NGT dislodgment, total number of nasogastric tubes required per ICU stay, and number of chest radiographs for NGT position confirmation. NGT-related pressure sores were also recorded.
Results: When compared to the bridled group, the unbridled group required a higher number of NGTs (2.5 vs 1.3; P < .001) and chest radiographs (3.4 vs 1.6; P < .001), had more hours of feeding missed (11.8 vs 5.0), and accumulated a slightly higher total cost (cost of NGT, chest radiographs +/- bridle kit: £211.67 vs £210, [US $284.25 vs US $282.01]).
Conclusions: The use of NGT bridle kits reduces the number of NGT insertions patients require and subsequently reduces the number of chest radiographs for each patient. These patients also miss fewer feeds, with no appreciable increase in cost.
Keywords: nasogastric, bridle, enteral, COVID-19, intensive care, quality improvement, safety.
The COVID-19 pandemic has led to a large influx of patients to critical care units in the United Kingdom (UK) and across the world. Figures from the Intensive Care National Audit & Research Centre in May 2020 show that the median length of stay for COVID-19 survivors requiring invasive ventilatory support while on the intensive care unit (ICU) was 15 days.1 For these days at the very least, patients are completely reliant on enteral feeding in order to meet their nutritional requirements.The standard method of enteral feeding when a patient is sedated and ventilated is via a nasogastric tube (NGT). Incorrect placement of an NGT can have devastating consequences, including pneumothorax, fistula formation, ulceration, sepsis, and death. Between September 2011 and March 2016, the National Patient Safety Agency in the UK recorded 95 incidents of feeding into the respiratory tract as a result of incorrect NGT placement.2 With the onset of the pandemic, the prevalence of NGT misplacement increased, with the NHS Improvement team reporting 7 cases of misplaced NGTs within just 3 months (April 1, 2020, through June 30, 2020).3 With over 3 million nasogastric or orogastric tubes inserted each year in the UK, the risk of adverse events is very real.
NGT dislodgment is common, with 1 study putting this figure at 40%.4 Recurrent dislodgment of NGTs disrupts nutrition and may lead to the patient missing a feed in a time where nutrition is vital during acute illness. Research has showed that NGT bridling reduces the rate of dislodgment significantly (from 40% to 14%).5 Moreover, a 2018 systematic review looking specifically at NGT dislodgment found 10 out of 11 studies showed a significant reduction in dislodgment following use of a bridle kit.6 Bridling an NGT has been shown to significantly reduce the need for percutaneous endoscopic gastrostomy insertion.7 NGT bridle kits have already been used successfully in ICU burn patients, where sloughed skin makes securement particularly difficult with traditional methods.8 With each repeated insertion comes the risk of incorrect placement. COVID-19 ICU patients had specific risk factors for their NGTs becoming dislodged: duration of NGT feeding (in the ICU and on the ward), requirement for proning and de-proning, and post-emergence confusion related to long duration of sedation. Repeated NGT insertion comes with potential risks to the patient and staff, as well as a financial cost. Patient-specific risks include potential for incorrect placement, missed feedings, irradiation (from the patient’s own chest radiograph and from others), and discomfort from manual handling and repeat reinsertions. Staff risk factors include radiation scatter from portable radiographs (especially when dealing with more than 1 patient per bed space), manual handling, and increased pressure on radiographers. Finally, financial costs are related to the NGTs themselves as well as the portable chest radiograph, which our Superintendent Radiographer estimates to be £55 (US $73.86).
The objective of this study was to ascertain the extent of NGT dislodgment in COVID-19 ICU patients after the introduction of NGT bridle kits as a standard of practice and to determine whether this would reduce the number of NGT insertions, patient irradiation, missed feedings, and overall costs. With the introduction of bridle kits, incidence of pressure sores related to the bridle kit were also recorded.
Methods
Data were collected over 2 cycles, the first retrospectively and the second prospectively, once NGT bridle kits were introduced as an intervention.
Cycle 1. Analyzing the current standard of practice: regular NGT insertion with no use of bridle kit
Cycle 1 was done retrospectively, looking at 30 patient notes of COVID-19 patients admitted to the critical care unit (CCU) between March 11, 2020, and April 20, 2020, at Queen Elizabeth Hospital Birmingham, Birmingham, UK. All patients admitted to the ICU with COVID-19 requiring invasive ventilation were eligible for inclusion in the study. A total of 32 patients were admitted during this time; however, 2 patients were excluded due to NGTs being inserted prior to ICU admission.
Individual patient notes were searched for:
- days of feeding required during their inpatient stay (this included NGT feeding on the ward post-ICU discharge).
- hours of feeding missed while waiting for NGT reinsertion or chest radiograph due to dislodged or displaced NGTs (during the entire period of enteral feeding, ICU, and ward).
- number of NGT insertions.
- number of chest radiographs purely for NGT position.
Each patient’s first day of feeding and NGT insertion were noted. Following that, the patient electronic note system, the Prescribing Information and Communication System, was used to look for any further chest radiograph requests, which were primarily for NGT position. Using the date and time, the “critical care observations” tab was used to look at fluids and to calculate how long NGT feeding was stopped while NGT position-check x-rays were being awaited. The notes were also checked at this date and time to work out whether a new NGT was inserted or whether an existing tube had been dislodged (if not evident from the x-ray request). Data collection was stopped once either of the following occurred:
- patient no longer required NGT feeding.
- patient was transferred to another hospital.
- death.
The cost of the NGT was averaged between the cost of size 8 and 12, which worked out to be £10 (US $13.43). As mentioned earlier, each radiograph cost was determined by the Superintendent Radiographer (£55).
Cycle 2. Implementing a change: introduction of NGT bridle kit (Applied Medical Technology Bridle) as standard of practice
The case notes of 54 patients admitted to the COVID-19 CCU at the Queen Elizabeth Hospital Birmingham, Birmingham, UK, were retrospectively reviewed between February 8, 2021, and April 17, 2021. The inclusion criteria consisted of: admitted to the CCU due to COVID-19, required NGT feeding, and was bridled on admission. Case notes were retrospectively reviewed for:
- Length of CCU stay
- Days of feeding required during the hospital stay
- Hours of feeding missed while waiting for a chest radiograph due to displaced NGTs
- Number of NGT insertions
- Number of chest radiographs to confirm NGT position
- Bridling of NGTs
- Documented pressure sores related to the bridle or NGT, or referrals for wound management advice (Tissue Viability Team) as a consequence of the NGT bridle
Results
Of the 54 patients admitted, 31 had their NGTs bridled. Data were collected as in the first cycle, with individual notes analyzed on the online system (Table). Additionally, notes were reviewed for documentation of pressure sores related to NGT bridling, and the “requests” tab as well as the “noting” function were used to identify referrals for “Wound Management Advice” (Tissue Viability Review).
The average length of stay for this ICU cohort was 17.6 days. This reiterates the reliance on NGT feeding of patients admitted to the CCU. The results from this project can be summarized as follows: The use of NGT bridle kits leads to a significant reduction in the total number of NGTs a patient requires during intensive care. As a result, there is a significant reduction in the number of chest radiographs required to confirm NGT position. Feedings missed can also be reduced by using a bridle kit. These advantages all come with no additional cost.
On average, bridled patients required 1.3 NGTs, compared to 2.5 before bridles were introduced. The fewer NGTs inserted, the less chance of an NGT-associated injury occurring.
The number of chest radiographs required to confirm NGT position after resiting also fell, from 3.4 to 1.6. This has numerous advantages. There is a financial savings of £99 (US $133.04) per patient from the reduced number of chest x-rays. Although this does not offset the price of the bridle kit itself, there are other less easily quantifiable costs that are reduced. For instance, patients are highly catabolic during severe infection, and their predominant energy source comes from their feedings. Missed feedings are associated with longer length of stay in the ICU and in the hospital in general.9 Bridle kits have the potential to reduce the number of missed feedings by ensuring the NGT remains in the correct position.
Discussion
Many of the results are aligned with what is already known in the literature. A meta-analysis from 2014 concluded that dislodgment is reduced with the use of a bridle kit.6 This change is what underpins many of the advantages seen, as an NGT that stays in place means additional radiographs are not required and feeding is not delayed.
COVID-19 critical care patients are very fragile and are dependent on ventilators for the majority of their stay. They are often on very high levels of ventilator support and moving the patient can lead to desaturation or difficulties in ventilation. Therefore, reduction in any manual handling occurring as a result of the need for portable chest radiographs minimizes the chances of further negative events. Furthermore, nursing staff, along with the radiographers, are often the ones who must move these patients in order for the x-ray film to be placed behind the patient. This task is not easy, especially with limited personnel, and has the potential to cause injuries to both patients and staff members.
The knock-on effect of reduced NGTs and x-rays is also a reduction of work for the portable radiography team, in what is a very time- and resource-consuming process of coming onto the COVID-19 CCU. Not only does the machine itself need to be wiped down thoroughly after use, but also the individual must use personal protective equipment (PPE) each time. There is a cost associated with PPE itself, as well as the time it takes to don and doff appropriately.
A reduction in chest radiographs reduces the irradiation of the patient and the potential irradiation of staff members. With bridling of the NGT, the radiation exposure is more than halved for the patient. Because the COVID ICU is often very busy, with patients in some cases being doubled up in a bed space, the scatter radiation is high. This can be reduced if fewer chest radiographs are required.
An additional benefit of a reduction in the mean number of NGT insertions per patient is also illustrated by anecdotal evidence. Over the studied period, we identified 2 traumatic pneumothoraces related to NGT insertion on the COVID-19 CCU, highlighting the potential risks of NGT insertion and the need to reduce its frequency, if possible.
One concern noted was that bridles could cause increased incidence of pressure sores. In the patients represented in this study, only 1 suffered a pressure sore (grade 2) directly related to the bridle. A subpopulation of patients not bridled was also noted. This was significantly smaller than the main group; however, we had noted 2 incidences of pressure sores from their standard NGT and securement devices. Some studies have alluded to the potential for increased skin complications with bridle kits; however, studies looking specifically at kits using umbilical tape (as in this study) show no significant increase in skin damage.10 This leaves us confident that there is no increased risk of pressure sores related to the bridling of patients when umbilical tape is used with the bridle kit.
NGT bridles require training to insert safely. With the introduction of bridling, our hospital’s nursing staff underwent training to become proficient with the bridle kits. This carries a time commitment, and, as with any new equipment, it takes time to build confidence. However, in this study, no concerns were raised by nursing staff regarding the difficulty of insertion or the time taken to do so.
Our study adds an objective measure of the benefits provided by bridle kits. Not only was there a reduction in the number of NGT insertions required, but we were also able to show a significant reduction in the number of chest radiographs required, as well as in the amount of feeding time missed. While apprehension regarding bridle kits may focus on cost, this study has shown that the savings more than make up for the initial cost of the kit itself.
Although the patient demographics, systemic effects, and treatment of COVID-19 are similar across ICUs, a single-center study has limitations, one of which is the potential for an intervention to show a larger effect than it would in multicenter studies.11 However, as seen in previous studies, the dislodgment of NGTs is not just an issue in this ICU.12 The COVID-19–specific risk factors for NGT dislodgment also apply to all patients requiring invasive ventilation and proning.
Identification of whether a new NGT was inserted, or whether the existing NGT was replaced after dislodgment, relied on accurate documentation by the relevant staff. The case notes did not always make this explicit. Unlike other commonly performed procedures, NGT insertion is not formally documented under the procedures heading and, on occasion, is not documented at all. We recognize that manually searching notes only yields NGT insertions that were formally documented, so the number recorded may be lower than the actual number of NGTs inserted. However, because x-ray requests were cross-referenced with the notes, we are confident that the vast majority of insertions were captured.
One patient identified in the study required a Ryle’s tube as part of their critical care treatment. While similar in nature to an NGT, these tubes cannot be fitted into a bridle and are at increased risk of dislodging during the patient’s critical care stay. The intended benefit of the bridle kit therefore does not extend to patients with Ryle’s tubes.
Conclusion
The COVID-19 critical care population requires significant time on invasive ventilation and remains dependent on NGT feeding during this process. The risk of NGT dislodgment can be mitigated by using a bridle kit, as the number of NGT insertions a patient requires is significantly reduced. Not only does this reduce the risk of inadvertent misplacement, it also yields cost savings and increases safety for staff and patients. In this study, the risk of pressure injuries was not significant. The benefit of NGT bridling may extend to other long-stay ICU patients without COVID-19.
Future research looking at the efficacy of bridle kits in larger patient groups will help confirm the benefits seen in this study and will also provide better information with regard to any long-term complications associated with bridles.
Corresponding author: Rajveer Atkar, MBBS, Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham B15 2GW, United Kingdom; [email protected].
Financial disclosures: None.
1. Intensive Care National Audit & Research Centre. ICNARC report on COVID-19 in critical care 15 May 2020. https://www.icnarc.org/DataServices/Attachments/Download/cbcb6217-f698-ea11-9125-00505601089b
2. NHS. Nasogastric tube misplacement: continuing risk of death and severe harm. July 22, 2016. https://www.england.nhs.uk/2016/07/nasogastric-tube-misplacement-continuing-risk-of-death-severe-harm/
3. NHS. Provisional publication of never events reported as occurring between 1 April and 30 June 2020. https://www.england.nhs.uk/wp-content/uploads/2020/08/Provisional_publication_-_NE_1_April_-_30_June_2020.pdf
4. Meer JA. Inadvertent dislodgement of nasoenteral feeding tubes: incidence and prevention. JPEN J Parenter Enteral Nutr. 1987;11(2):187-189. doi:10.1177/0148607187011002187
5. Bechtold ML, Nguyen DL, Palmer L, et al. Nasal bridles for securing nasoenteric tubes: a meta-analysis. Nutr Clin Pract. 2014;29(5):667-671. doi:10.1177/0884533614536737
6. Lynch A, Tang CS, Jeganathan LS, Rockey JG. A systematic review of the effectiveness and complications of using nasal bridles to secure nasoenteral feeding tubes. Aust J Otolaryngol. 2018;1:8. doi:10.21037/ajo.2018.01.01
7. Johnston R, O’Dell L, Patrick M, Cole OT, Cunliffe N. Outcome of patients fed via a nasogastric tube retained with a bridle loop: Do bridle loops reduce the requirement for percutaneous endoscopic gastrostomy insertion and 30-day mortality? Proc Nutr Soc. 2008;67:E116. doi:10.1017/S0029665108007489
8. Li AY, Rustad KC, Long C, et al. Reduced incidence of feeding tube dislodgement and missed feeds in burn patients with nasal bridle securement. Burns. 2018;44(5):1203-1209. doi:10.1016/j.burns.2017.05.025
9. Peev MP, Yeh DD, Quraishi SA, et al. Causes and consequences of interrupted enteral nutrition: a prospective observational study in critically ill surgical patients. JPEN J Parenter Enteral Nutr. 2015;39(1):21-27. doi:10.1177/0148607114526887
10. Seder CW, Janczyk R. The routine bridling of nasojejunal tubes is a safe and effective method of reducing dislodgement in the intensive care unit. Nutr Clin Pract. 2008;23(6):651-654. doi:10.1177/0148607114526887
11. Dechartres A, Boutron I, Trinquart L, Charles P, Ravaud P. Single-center trials show larger treatment effects than multicenter trials: evidence from a meta-epidemiologic study. Ann Intern Med. 2011;155:39-51. doi:10.7326/0003-4819-155-1-201107050-00006
12. Morton B, Hall R, Ridgway T, Al-Rawi O. Nasogastric tube dislodgement: a problem on our ICU. Crit Care. 2013;17(suppl 2):P242. doi:10.1186/cc12180
Predicting cardiac shock mortality in the ICU
Adding echocardiographic assessment of biventricular dysfunction improved the accuracy of prognostication among patients with cardiac shock (CS) in the cardiac intensive care unit.
In patients in the cardiac ICU with CS, biventricular dysfunction (BVD), as assessed using transthoracic echocardiography, improves clinical risk stratification when combined with the Society for Cardiovascular Angiography and Interventions (SCAI) shock stage.
No improvement in risk stratification was seen in patients with left or right ventricular systolic dysfunction (LVSD or RVSD) alone, according to an article published in the journal Chest.
Ventricular systolic dysfunction, most often left-sided, is commonly seen in patients who have suffered cardiac shock. Although echocardiography is often performed in these patients during diagnosis, previous studies of ventricular dysfunction used invasive hemodynamic parameters, which made it challenging to incorporate their findings into general cardiac ICU practice.
Pinning down cardiac shock
Treatment of acute MI and heart failure has improved greatly, particularly with the implementation of primary percutaneous coronary intervention (PCI) for ST-segment elevation MI, which has reduced the rate of subsequent heart failure. However, cardiac shock can still occur before or after the procedure, with a 30-day mortality of 30%-40%, an outcome that has not improved in the past 20 years.
Efforts to improve cardiac shock outcomes through percutaneous mechanical circulatory support devices have been hindered by the fact that CS patients are heterogeneous, and prognosis may depend on a range of factors.
The SCAI shock classification was developed as a five-stage system for CS to improve communication of patient status and to better differentiate patients participating in clinical trials. It does not include measures of ventricular dysfunction.
Simple measure boosts prognosis accuracy
The new work adds an additional layer to the SCAI shock stage. “Adding echocardiography allows discrimination between levels of risk for each SCAI stage,” said David Baran, MD, who was asked for comment. Dr. Baran was the lead author on the original SCAI study and is system director of advanced heart failure at Sentara Heart Hospital, as well as a professor of medicine at Eastern Virginia Medical School, both in Norfolk.
The work also underscores the value of repeated measures of prognosis during a patient’s stay in the ICU. “If a patient is not improving, it may prompt a consideration of whether transfer or consultation with a tertiary center may be of value. Conversely, if a patient doesn’t have high-risk features and is responding to therapy, it is reassuring to have data supporting low mortality with that care plan,” said Dr. Baran.
The study may be subject to selection bias, since not every patient undergoes an echocardiogram. Still, “the authors make a convincing case that biventricular dysfunction is a powerful negative marker across the spectrum of SCAI stages,” said Dr. Baran.
Echocardiography is simple and widely available, and some devices are even portable and can be used with a smartphone. However, patient body size can interfere with echocardiography, as can the presence of a ventilator or multiple surgical dressings. “The key advantage of echo is that it is completely noninvasive and can be brought to the patient in the ICU, unlike other testing which involves moving the patient to the testing environment,” said Dr. Baran.
The researchers analyzed data from 3,158 patients admitted to the cardiac ICU at the Mayo Clinic Hospital St. Mary’s Campus in Rochester, Minn., 51.8% of whom had acute coronary syndromes. They defined LVSD as a left ventricular ejection fraction less than 40%, and RVSD as at least moderate systolic dysfunction determined by semiquantitative measurement. BVD constituted the presence of both LVSD and RVSD. They examined the association of in-hospital mortality with these parameters combined with SCAI stage.
BVD a risk factor
Overall in-hospital mortality was 10%. A total of 22.3% of patients had LVSD and 11.8% had RVSD; 16.4% had moderate or greater BVD. There was no association between LVSD or RVSD alone and in-hospital mortality after adjustment for SCAI stage, but there was a significant association for BVD (adjusted odds ratio [aOR], 1.815; P = .0023). When combined with SCAI stage, BVD improved the ability to predict hospital mortality (area under the curve [AUC], 0.784 vs. 0.766; P < .001). Adding semiquantitative RVSD and LVSD yielded further improvement (AUC, 0.794; P < .01 vs. both).
RVSD was associated with higher in-hospital mortality (aOR, 1.421; P = .02), and there was a trend toward greater mortality with LVSD (aOR, 1.336; P = .06). The association for BVD changed little when SCAI shock stage A patients were excluded (aOR, 1.840; P < .001).
Patients with BVD had greater in-hospital mortality than those without ventricular dysfunction (aOR, 1.815; P = .0023), but other between-group comparisons were not significant.
The researchers performed a classification and regression tree analysis using left ventricular ejection fraction (LVEF) and semiquantitative RVSD. It found that RVSD was a better predictor of in-hospital mortality than LVSD and that the best LVEF cutoff differed between patients with and without RVSD.
Patients with mild or greater RVSD and LVEF less than 24% were considered high risk; those with borderline or no RVSD and LVEF less than 33%, or mild or greater RVSD with LVEF of at least 24%, were considered intermediate risk. Patients with borderline or no RVSD and LVEF of at least 33% were considered low risk. Hospital mortality was 22% in the high-risk group, 12.2% in the intermediate-risk group, and 3.3% in the low-risk group (aOR vs. intermediate, 0.493; P = .0006; aOR vs. high risk, 0.357; P < .0001).
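To make the stratification rule concrete, here is a minimal Python sketch of the CART-derived grouping as read from the description above; the RVSD grade labels and the handling of boundary LVEF values are assumptions, not the authors' code.

def shock_risk_group(lvef_pct: float, rvsd_grade: str) -> str:
    """Assign the CART-derived risk group described above.
    rvsd_grade: "none", "borderline", "mild", "moderate", or "severe"
    (semiquantitative RVSD). LVEF cutoffs of 24% and 33% come from the
    text; boundary handling (>= vs >) is an assumption."""
    mild_or_greater = rvsd_grade in ("mild", "moderate", "severe")
    if mild_or_greater:
        return "high" if lvef_pct < 24 else "intermediate"
    return "low" if lvef_pct >= 33 else "intermediate"

# Observed in-hospital mortality by group: high 22%, intermediate 12.2%, low 3.3%.
print(shock_risk_group(20, "moderate"))    # -> high
print(shock_risk_group(45, "borderline"))  # -> low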
The study authors disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Adjuvant Olaparib Improves Outcomes in High-Risk, HER2-Negative Early Breast Cancer Patients With Germline BRCA1 and BRCA2 Mutations
Study Overview
Objective. To assess the efficacy and safety of olaparib as an adjuvant treatment in patients with BRCA1 or BRCA2 germline mutations who are at high risk for relapse.
Design. A randomized, double-blind, placebo-controlled, multicenter phase III study. The published results are from the prespecified interim analysis.
Intervention. Patients were randomized in a 1:1 ratio to receive either 300 mg of olaparib orally twice daily or a matching placebo. Randomization was stratified by hormone receptor status (estrogen receptor and/or progesterone receptor positive/HER2-negative vs triple negative), prior neoadjuvant vs adjuvant chemotherapy, and prior platinum use for breast cancer. Treatment was continued for 52 weeks.
Setting and participants. A total of 1836 patients were randomized in a 1:1 fashion to receive olaparib or a placebo. Eligible patients had a germline BRCA1 or BRCA2 pathogenic or likely pathogenic variant. Patients had high-risk, HER2-negative primary breast cancers, and all had received definitive local therapy and neoadjuvant or adjuvant chemotherapy. Patients were enrolled between 2 and 12 weeks after completion of all local therapy. Platinum chemotherapy was allowed. Patients received adjuvant endocrine therapy for hormone receptor–positive disease as well as adjuvant bisphosphonates per institutional guidelines. Patients with triple-negative disease who received adjuvant chemotherapy were required to be lymph node–positive or to have at least 2 cm of invasive disease; those who received neoadjuvant chemotherapy were required to have residual invasive disease. Hormone receptor–positive patients who received adjuvant chemotherapy were required to have at least 4 pathologically confirmed involved lymph nodes, and those who received neoadjuvant chemotherapy were required to have residual invasive disease.
Main outcome measures. The primary endpoint was invasive disease-free survival, defined as the time from randomization to the date of recurrence or death from any cause. The secondary endpoints included overall survival (OS), distant disease-free survival, and the safety and tolerability of olaparib.
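As a minimal illustration of how such an endpoint is derived per patient (not the trial's actual statistical code), the Python sketch below computes an event time and event indicator per the definition quoted above; the censoring-at-last-follow-up rule for event-free patients is an assumption.

from datetime import date
from typing import Optional, Tuple

def idfs_days(randomization: date,
              recurrence: Optional[date],
              death: Optional[date],
              last_follow_up: date) -> Tuple[int, bool]:
    """Time from randomization to the first of recurrence or death
    from any cause; patients with neither event are censored at last
    follow-up (an assumed rule). Returns (days, event_observed)."""
    events = [d for d in (recurrence, death) if d is not None]
    if events:
        return (min(events) - randomization).days, True
    return (last_follow_up - randomization).days, False

# Recurrence 400 days after randomization -> (400, True)
print(idfs_days(date(2018, 1, 1), date(2019, 2, 5), None, date(2021, 1, 1)))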
Main results. At the time of data cutoff, 284 events had occurred, with a median follow-up of 2.5 years in the intention-to-treat population. A total of 81% of patients had triple-negative breast cancer. Most patients (94% in the olaparib group and 92% in the placebo group) received both taxane- and anthracycline-based chemotherapy regimens. Platinum-based chemotherapy was used in 26% of patients in each group. The groups were otherwise well balanced. Germline mutations in BRCA1 were present in 72% of patients and in BRCA2 in 27%; these were balanced between groups.
At the time of this analysis, adjuvant olaparib had reduced the risk of invasive disease or death by 42% compared with placebo (P < .001). At 3 years, invasive disease-free survival was 85.9% in the olaparib group and 77.1% in the placebo group (difference, 8.8 percentage points; 95% CI, 4.5-13.0; hazard ratio [HR], 0.58; 99.5% CI, 0.41-0.82; P < .001). The 3-year distant disease-free survival was 87.5% in the olaparib group and 80.4% in the placebo group (HR, 0.57; 99.5% CI, 0.39-0.83; P < .001). Olaparib was also associated with fewer deaths than placebo (59 vs 86; HR, 0.68; 99% CI, 0.44-1.05; P = .02); however, this OS difference was not statistically significant at the boundary set for this interim analysis. Subgroup analysis showed a consistent benefit across all groups, with no difference noted by BRCA mutation, hormone receptor status, or use of neoadjuvant vs adjuvant chemotherapy.
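The quoted 42% relative reduction follows directly from the hazard ratio; a quick arithmetic check of the figures above:

hr = 0.58
print(f"Relative risk reduction: {1 - hr:.0%}")       # -> 42%
print(f"Absolute 3-year gain: {85.9 - 77.1:.1f} pp")  # -> 8.8 percentage points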
The side effects were consistent with the known safety profile of olaparib. Adverse events of grade 3 or higher that were more common with olaparib included anemia (8.7%), leukopenia (3%), and fatigue (1.8%). Early discontinuation of the trial regimen due to adverse events or disease recurrence occurred in 25.9% of the olaparib group and 20.7% of the placebo group. Blood transfusions were required in 5.8% of patients in the olaparib group. Myelodysplasia or acute myeloid leukemia was observed in 2 patients in the olaparib group and 3 patients in the placebo group. Adverse events leading to death occurred in 1 patient in the olaparib group and 2 patients in the placebo group.
Conclusion. Among patients with high-risk, HER2-negative early breast cancer and germline BRCA1 or BRCA2 pathogenic or likely pathogenic variants, adjuvant olaparib after completion of local treatment and neoadjuvant or adjuvant chemotherapy was associated with significantly longer invasive disease-free and distant disease-free survival compared with placebo.
Commentary
The results from the current OlympiA trial provide the first evidence that adjuvant therapy with a poly(adenosine diphosphate-ribose) polymerase (PARP) inhibitor can improve outcomes in high-risk, HER2-negative breast cancer in patients with pathogenic BRCA1 and BRCA2 mutations. The OS data, while favoring olaparib, were not yet mature at the time of this analysis. Nevertheless, these results represent an important step forward in improving outcomes in this patient population. The efficacy and safety of PARP inhibitors in BRCA-mutated breast cancer have previously been shown in patients with advanced disease, leading to FDA approval of both olaparib and talazoparib in this setting.1,2 With the current results, PARP inhibitors will certainly play an important role in the adjuvant setting for patients with deleterious BRCA1 or BRCA2 mutations at high risk for relapse. Importantly, the side-effect profile appears acceptable, with no unexpected events and a very low rate of secondary myeloid malignancies.
Subgroup analysis appears to indicate a benefit across all groups, including hormone receptor–positive disease and triple-negative breast cancer. Interestingly, approximately 25% of patients in both cohorts received platinum-based chemotherapy, and the efficacy of adjuvant olaparib did not appear to be affected by prior use of platinum-containing regimens. It is important to note that postneoadjuvant capecitabine in triple-negative patients, per the results of the CREATE-X trial, was not permitted in the current study, although it has been widely adopted in clinical practice.3 The CREATE-X trial did not specify the benefit of adjuvant capecitabine in the BRCA-mutated cohort, so it is not clear how this subgroup fares with that approach. Thus, as the authors point out, one cannot extrapolate the relative efficacy of olaparib compared with capecitabine, and whether to use capecitabine, olaparib, or both in triple-negative patients with residual invasive disease after neoadjuvant chemotherapy remains unclear at this time.
Nevertheless, the magnitude of benefit seen in this trial certainly provides clinically relevant and potentially practice-changing results. It will be imperative to follow these results as the survival data mature and to ensure that no further long-term toxicity, particularly secondary myeloid malignancy, develops. These results should be discussed with each patient, and informed decisions regarding the use of adjuvant olaparib should be considered for this population. Lastly, these results highlight the importance of germline testing for patients with breast cancer in accordance with national guideline recommendations, and they raise the question of whether it is time to expand current germline testing guidelines to detect all patients who may benefit from this therapy.
Application for Clinical Practice
Adjuvant olaparib in high-risk patients with germline BRCA1 or BRCA2 mutations improves invasive and distant disease-free survival and should be considered in patients who meet the enrollment criteria of the current study. Furthermore, this highlights the importance of appropriate germline genetic testing in patients with breast cancer.
Financial disclosures: None.
1. Robson M, Im SA, Senkus E, et al. Olaparib for metastatic breast cancer in patients with a germline BRCA mutation. N Engl J Med. 2017;377(6):523-533. doi:10.1056/NEJMoa1706450
2. Litton JK, Rugo HS, Ettl J, et al. Talazoparib in Patients with Advanced Breast Cancer and a Germline BRCA Mutation. N Engl J Med. 2018;379(8):753-763. doi:10.1056/NEJMoa1802905
3. Masuda N, Lee SJ, Ohtani S, et al. Adjuvant Capecitabine for Breast Cancer after Preoperative Chemotherapy. N Engl J Med. 2017;376(22):2147-2159. doi:10.1056/NEJMoa1612645
Timing of renal-replacement therapy for AKI in the ICU
Background: Acute kidney injury (AKI) in the ICU is associated with high mortality. It is hypothesized that earlier initiation of renal-replacement therapy (RRT) may benefit patients by controlling fluid overload and reducing metabolic stress caused by electrolyte and acid-base imbalances. However, prior studies have been conflicting, with the IDEAL-ICU study (2018) demonstrating no improvement in 90-day mortality with early RRT in septic shock.
Study design: Open-label randomized controlled trial.
Setting: 168 hospitals in 15 countries.
Synopsis: Of ICU patients with severe AKI, 3,019 were randomized to either early or standard initiation of RRT. Early RRT was defined as occurring within 12 hours of eligibility; in the standard-therapy group, RRT was delayed until it was specifically indicated or until there was no improvement after 72 hours. Patients needing immediate renal replacement or deemed likely to recover without RRT were excluded, in order to study only those in whom the ideal timing of dialysis was uncertain. There was no difference in 90-day mortality between the groups (43.9% vs. 43.7%; P = .92). Early initiation did not improve length of ICU stay, ventilator-free days, days out of the hospital, or quality of life. The early-initiation patients experienced more adverse events related to RRT and were more likely to remain dependent on RRT at 90 days (10.4% vs. 6.0% with standard initiation). Of note, approximately 40% of those randomized to standard initiation never required RRT.
Bottom line: This large, multicenter, well-conducted trial demonstrates no benefit for early initiation of RRT in critically ill patients.
Citation: STARRT-AKI investigators. Timing of initiation of renal-replacement therapy in acute kidney injury. N Engl J Med. 2020;383:240-51. doi: 10.1056/NEJMoa2000741.
Dr. Lee is a hospitalist at Northwestern Memorial Hospital and Lurie Children’s Hospital and assistant professor of medicine, Feinberg School of Medicine, all in Chicago.
Faster testing possible for secondary ICU infections
The SARS-CoV-2 pandemic has given added impetus for metagenomic testing using nanopore sequencing to progress from a research tool to routine clinical application. A study led by researchers from Guy’s and St. Thomas’ NHS Foundation Trust has shown the potential for clinical metagenomics to become a same-day test for identifying secondary infection in ventilated ICU patients. Getting results in hours rather than days would help to ensure rapid treatment with the correct antibiotic, minimize unnecessary prescriptions, and thus reduce the growing menace of antimicrobial resistance.
‘SARS-CoV-2 has put considerable strain on ICUs’
The researchers point out that the setting of an intensive care unit involves frequent staff-patient contact that carries a risk of secondary or nosocomial infection. In addition, invasive ventilation may introduce organisms into the lungs and lead to ventilator-associated pneumonia. This carries a high mortality and is responsible for up to 70% of antimicrobial prescribing, with current guidelines requiring empiric antibiotics pending culture results, which typically take 2-4 days.
Many of these infection problems worsened during SARS-CoV-2. Expanded critical care capacity raised the risk of nosocomial infections, with attendant increased antimicrobial prescriptions and the threat of antimicrobial resistance. In addition, treatment of COVID-19 patients with steroid therapy potentially exacerbates bacterial or fungal infections.
The researchers, from the National Institute for Health Research (NIHR) Biomedical Research Centre at Guy’s and St. Thomas’ NHS Foundation Trust and King’s College London, in collaboration with the Quadram Institute in Norwich, Oxford Nanopore Technologies, and Viapath, the U.K.’s largest independent pathology service provider, noted that the pandemic thus reinforced “a need for rapid comprehensive diagnostics to improve antimicrobial stewardship and help prevent emergence and transmission of multi-drug-resistant organisms.”
“As soon as the pandemic started, our scientists realized there would be a benefit to sequencing genomes of all bacteria and fungi causing infection in COVID-19 patients while on ICU,” said Professor Jonathan Edgeworth, who led the research team.
“Within a few weeks we showed it can diagnose secondary infection, target antibiotic treatment, and detect outbreaks much earlier than current technologies – all from a single sample.”
Proof-of-concept study
The team performed a proof-of-concept study of nanopore metagenomic sequencing – a type of DNA sequencing that allows direct, rapid, unbiased detection of all organisms present in a clinical sample – on 43 surplus respiratory samples from 34 intubated COVID-19 patients with suspected secondary bacterial or fungal pneumonia. Patients were drawn from seven ICUs at St. Thomas’ Hospital, London, over a 9-week period between April 11 and June 15, 2020, during the first wave of COVID-19.
Their median age was 52; 70% were male, 47% White, and 44% Black or from other minority ethnic groups. Median length of stay was 32 days, and mortality was 24%. Samples sent for metagenomic analysis and culture included 10 bronchoalveolar lavages, 6 tracheal aspirates, and 27 non-direct bronchoalveolar lavages.
The study, published in Genome Medicine, showed that an 8-hour metagenomics workflow was 92% sensitive (95% CI, 75% to 99%) and 82% specific (95% CI, 57% to 96%) for bacterial identification, based on culture-positive and culture-negative samples, respectively.
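Those headline figures are straightforward to reproduce from a 2x2 confusion table. The sketch below, in Python, uses hypothetical counts chosen only so the proportions match the reported 92% and 82%; the paper's exact table and interval method are not reproduced here, and a Wilson score interval stands in for the published CIs:

```python
from math import sqrt

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), from culture-positive samples;
    specificity = TN/(TN+FP), from culture-negative samples."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical counts for illustration only: 26 culture-positive samples
# (24 detected) and 17 culture-negative samples (14 correctly negative).
sens, spec = sensitivity_specificity(tp=24, fn=2, tn=14, fp=3)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")   # 92%, 82%
print("sensitivity 95% CI:", wilson_ci(24, 26))
```

Note that sensitivity is estimated only on the culture-positive samples and specificity only on the culture-negative ones, which is why the two estimates have different denominators and differently sized confidence intervals.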
The main Gram-negative bacteria identified were Klebsiella spp. (53%), Citrobacter spp. (15%), and E coli (9%). The main Gram-positive bacteria were S aureus (9%), C striatum (24%), and Enterococcus spp. (12%). In addition, C albicans, other Candida spp., and Aspergillus spp. were cultured from 38%, 15%, and 9% of patients, respectively.
In every case, the initial antibiotics prescribed according to prevailing guideline recommendations would have been modified by metagenomic sequencing demonstrating the presence or absence of β-lactam-resistant genes carried by Enterobacterales.
Next-day sequencing results also detected Aspergillus fumigatus in four samples, with results 100% concordant with quantitative PCR for both the four positive and 39 negative samples. Sequencing also identified two multi-drug–resistant outbreaks: one of K pneumoniae ST307 affecting four patients, and a C striatum outbreak involving 14 patients across three ICUs.
Thus, a single sample can provide enough genetic sequence data to compare pathogen genomes with a database and accurately identify patients carrying the same strain, enabling early detection of outbreaks. This is the first time this combined benefit of a single test has been demonstrated, the team say.
Gordon Sanghera, CEO of Oxford Nanopore, commented that “rapidly characterizing co-infections for precision prescribing is a vital next step for both COVID-19 patients and respiratory disease in general.”
Dr. Andrew Page of the Quadram Institute said: “We have been working on metagenomics technology for the last 7 years. It is great to see it applied to patient care during the COVID-19 pandemic.”
He said in an interview: “The pandemic has accelerated the transition from using sequencing purely in research labs to using it in the clinic to rapidly provide clinicians with information they can use to improve outcomes for patients.”
Potential to inform antimicrobial prescribing and infection control
“Clinical metagenomic testing provides accurate pathogen detection and antibiotic resistance prediction in a same-day laboratory workflow, with assembled genomes available the next day for genomic surveillance,” the researchers say.
The technology “could fundamentally change the multi-disciplinary team approach to managing ICU infections.” It has the potential to improve initial targeted antimicrobial treatment and infection control decisions, as well as help rapidly detect unsuspected outbreaks of multi-drug–resistant pathogens.
Professor Edgeworth told this news organization that since the study, “secondary bacterial and fungal infections have increased, perhaps due to immunomodulatory treatments or just the length of time patients spend on ICU recovering from COVID-19. This makes rapid diagnosis even more important to ensure patients get more targeted antibiotics earlier, rather than relying on generic guidelines.”
The team “are planning to move respiratory metagenomics into pilot service under our Trust’s quality improvement framework,” he revealed. This will enable them to gather data on patient benefits.
“We also need to see how clinicians use these tests to improve antibiotic treatment, to stop antibiotics when not needed or to identify outbreaks earlier, and then how that translates into tangible benefits for individual patients and the wider NHS.”
He predicts that the technique will revolutionize the approach to prevention and treatment of serious infection in ICUs, and it is now planned to offer it as a clinical service for COVID-19 and influenza patients during the coming winter.
In addition, he said: “It can be equally applied to other samples such as tissue fluids and biopsies, including those removed at operation. It therefore has potential to impact on diagnostics for many clinical services, particularly if the progress is maintained at the current pace.”
This article first appeared on Medscape UK/Univadis.
Hospitalists helped plan COVID-19 field hospitals
‘It’s a great thing to be overprepared’
At the height of the COVID-19 pandemic’s terrifying first wave in the spring of 2020, dozens of hospitals in high-incidence areas either planned or opened temporary, emergency field hospitals to cover anticipated demand for beds beyond the capacity of local permanent hospitals.
Chastened by images of overwhelmed health care systems in Northern Italy and other hard-hit areas,1 the planners used available modeling tools and estimates for projecting maximum potential need in worst-case scenarios. Some of these temporary hospitals never opened. Others opened in convention centers, parking garages, or parking lot tents, and ended up being used far less than the worst-case scenarios had projected.
But those who participated in the planning – including, in many cases, hospitalists – believe they created alternate care site manuals that could be quickly revived in the event of future COVID surges or other, similar crises. Better to plan for too much, they say, than not plan for enough.
Field hospitals or alternate care sites are defined in a recent journal article in Prehospital and Disaster Medicine as “locations that can be converted to provide either inpatient and/or outpatient health services when existing facilities are compromised by a hazard impact or the volume of patients exceeds available capacity and/or capabilities.”2
The lead author of that report, Sue Anne Bell, PhD, FNP-BC, a disaster expert and assistant professor of nursing at the University of Michigan (UM), was one of five members of the leadership team for planning UM’s field hospital. They used an organizational unit structure based on the U.S. military’s staffing structure, with their work organized around six units of planning: personnel and labor, security, clinical operations, logistics and supply, planning and training, and communications. This team planned a 519-bed step-down care facility, the Michigan Medicine Field Hospital, for a 73,000-square-foot indoor track and performance facility at the university, three miles from UM’s main hospital. The aim was to provide safe care in a resource-limited environment.
“We were prepared, but the need never materialized as the peak of COVID cases started to subside,” Dr. Bell said. The team was ready to open within days using a “T-Minus” framework of days remaining on an official countdown clock. But when the need and deadlines kept getting pushed back, that gave them more time to develop clearer procedures.
Two Michigan Medicine hospitalists, Christopher Smith, MD, and David Paje, MD, MPH, both professors at UM’s medical school, were intimately involved in the process. “I was the medical director for the respiratory care unit that was opened for COVID patients, so I was pulled in to assist in the field hospital planning,” said Dr. Smith.
Dr. Paje was director of the short-stay unit and had been a medical officer in the U.S. Army, with training in how to set up military field hospitals. He credits that background as helpful for UM’s COVID field hospital planning, along with his experience in hospital medicine operations.
“We expected that these patients would need the expertise of hospitalists, who had quickly become familiar with the peculiarities of the new disease. That played a role in the decisions we made. Hospitalists were at the front lines of COVID care and had unique clinical insights about managing those with severe disease,” Dr. Paje added.
“When we started, the projections were dire. You don’t want to believe something like that is going to happen. When COVID started to cool off, it was more of a relief to us than anything else,” Dr. Smith said. “Still, it was a very worthwhile exercise. At the end of the day, we put together a comprehensive guide, which is ready for the next crisis.”
Baltimore builds a convention center hospital
A COVID-19 field hospital was planned and executed at an exhibit hall in the Baltimore Convention Center, starting in March 2020 under the leadership of Johns Hopkins Bayview hospitalist Eric Howell, MD, MHM, who eventually handed over responsibilities as chief medical officer when he assumed the position of CEO for the Society of Hospital Medicine in July of that year.
Hopkins collaborated with the University of Maryland health system and state leaders, including the Secretary of Health, to open a 252-bed temporary facility, which at its peak carried a census of 48 patients, with no on-site mortality or cardiac arrests, before it was closed in June 2021 – ready to reopen if necessary. It also served as Baltimore’s major site for polymerase chain reaction COVID-19 testing, vaccinations, and monoclonal antibody infusions, along with medical research.
“My belief at the time we started was that my entire 20-year career as a hospitalist had prepared me for the challenge of opening a COVID field hospital,” Dr. Howell said. “I had learned how to build clinical programs. The difference was that instead of months and years to build a program, we only had a few weeks.”
His first request was to bring on an associate medical director for the field hospital, Melinda E. Kantsiper, MD, a hospitalist and director of clinical operations in the Division of Hospital Medicine at Johns Hopkins Bayview. She became the field hospital’s CMO when Dr. Howell moved to SHM. “As hospitalists, we are trained to care for the patient in front of us while at the same time creating systems that can adjust to rapidly changing circumstances,” Dr. Kantsiper said. “We did what was asked and set up a field hospital that cared for a total of 1,500 COVID patients.”
Hospitalists have the tools that are needed for this work, and shouldn’t be reluctant to contribute to field hospital planning, she said. “This was a real eye-opener for me. Eric explained to me that hospitalists really practice acute care medicine, which doesn’t have to be within the four walls of a hospital.”
The Baltimore field hospital has been a fantastic experience, Dr. Kantsiper added. “But it’s not a building designed for health care delivery.” For the right group of providers, the experience of working in a temporary facility such as this can be positive and exhilarating. “But we need to make sure we take care of our staff. It takes a toll. How we keep them safe – physically and emotionally – has to be top of mind,” she said.
The leaders at Hopkins Medicine and their collaborators truly engaged with the field hospital’s mission, Dr. Howell added.
“They gave us a lot of autonomy and helped us break down barriers. They gave us the political capital to say proper PPE was absolutely essential. As hard and devastating as the pandemic has been, one take-away is that we showed that we can be more flexible and elastic in response to actual needs than we used to think.”
Range of challenges
Among the questions that need to be answered by a field hospital’s planners, the first is ‘where to put it?’ The answer is, hopefully, someplace not too far away, large enough, with ready access to supplies and intake. The next question is ‘who is the patient?’ Clinicians must determine who goes to the field hospital versus who stays at the standing hospital. How sick should these patients be? And when do they need to go back to the permanent hospital? Can staff be trained to recognize when patients in the field hospital are starting to decompensate? The Epic Deterioration Index3 is a proprietary prediction model that was used by more than a hundred hospitals during the pandemic.
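The index itself is proprietary, so the sketch below, in Python, shows only the kind of generic escalation rule a field hospital might layer on top of any deterioration score; the threshold values are invented for illustration and are not taken from Epic or from any site's protocol:

```python
def should_escalate(scores: list[float],
                    absolute_threshold: float = 60.0,
                    rise_threshold: float = 15.0) -> bool:
    """Flag a field hospital patient for transfer back to the permanent
    hospital, given a time-ordered series of deterioration-index readings.
    Escalates on a high absolute score or a rapid rise between the two
    most recent readings. Both thresholds are invented for illustration
    and would need local validation in any real deployment."""
    if not scores:
        return False
    if scores[-1] >= absolute_threshold:
        return True
    return len(scores) >= 2 and scores[-1] - scores[-2] >= rise_threshold

# Steady low scores, then a sharp 17-point jump triggers escalation.
print(should_escalate([22.0, 25.0, 24.0, 41.0]))  # True
```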
The hospitalist team may develop specific inclusion and exclusion criteria – for example, don’t admit patients who are receiving oxygen therapy above a certain threshold or who are hemodynamically unstable. These criteria should reflect the capacity of the field hospital and the needs of the permanent hospital. At Michigan, as at other field hospital sites, the goal was to offer a step-down or postacute setting for patients with COVID-19 who were too sick to return home but didn’t need acute or ICU-level care, thereby freeing up beds at the permanent hospital for patients who were sicker.
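As a concrete illustration of such criteria, here is a minimal Python sketch of an admission screen; the field names and cutoffs are hypothetical stand-ins, not any hospital's actual protocol, which would be set jointly by the permanent and field hospital teams:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    spo2_percent: float    # oxygen saturation on current support
    o2_flow_lpm: float     # supplemental oxygen flow, liters/minute
    systolic_bp: int       # most recent systolic blood pressure, mmHg
    needs_icu_level_care: bool

# Hypothetical cutoffs, for illustration only.
MAX_O2_FLOW_LPM = 4.0
MIN_SYSTOLIC_BP = 90
MIN_SPO2_PERCENT = 92.0

def eligible_for_field_hospital(p: Patient) -> bool:
    """Step-down eligibility: stable patients too sick to go home but
    not needing acute or ICU-level care."""
    if p.needs_icu_level_care:
        return False
    if p.o2_flow_lpm > MAX_O2_FLOW_LPM:      # oxygen above threshold
        return False
    if p.spo2_percent < MIN_SPO2_PERCENT:    # inadequate oxygenation
        return False
    if p.systolic_bp < MIN_SYSTOLIC_BP:      # hemodynamically unstable
        return False
    return True

print(eligible_for_field_hospital(
    Patient(spo2_percent=95.0, o2_flow_lpm=2.0,
            systolic_bp=118, needs_icu_level_care=False)))  # True
```

Encoding the rules this explicitly, even on paper, forces planners to agree on exactly which patients belong at each site before the first transfer request arrives.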
Other questions: What is the process for admissions and discharges? How will patients be transported? What kind of staffing is needed, and what levels of care will be provided? What about rehabilitation services, or palliative care? What about patients with substance abuse or psychiatric comorbidities?
“Are we going to do paper charting? How will that work out for long-term documentation and billing?” Dr. Bell said. A clear reporting structure and communication pathways are essential. Among the other operational processes to address, outlined in Dr. Bell’s article, are orientation and training, PPE donning and doffing procedures, the code or rapid response team, patient and staff food and nutrition, infection control protocols, pharmacy services, access to radiology, rounding procedures, staff support, and the morgue.
One other issue that shouldn’t be overlooked is health equity in the field hospital. “Providing safe and equitable care should be the focus. Thinking who goes to the field hospital should be done within a health equity framework,” Dr. Bell said.4 She also wonders if field hospital planners are sharing their experience with colleagues across the country and developing more collaborative relationships with other hospitals in their communities.
“Field hospitals can be different things,” Dr. Bell said. “The important take-home is it doesn’t have to be in a tent or a parking garage, which can be suboptimal.” In many cases, it may be better to focus on finding unused space within the hospital – whether a lobby, staff lounge, or unoccupied unit – closer to personnel, supplies, pharmacy, and the like. “I think the pandemic showed us how unprepared we were as a health care system, and how much more we need to do in preparation for future crises.”
Limits to the temporary hospital
In New York City, which had the country’s worst COVID-19 outbreak during the first surge in the spring of 2020, a 1,000-bed field hospital was opened at the Jacob Javits Center in March 2020 and closed that June. “I was in the field hospital early, in March and April, when our hospitals were temporarily overrun,” said hospitalist Mona Krouss, MD, FACP, CPPS, NYC Health + Hospitals’ director of patient safety. “My role was to figure out how to get patients on our medical floors into these field hospitals, with responsibility for helping to revise admission criteria,” she said.
“No one knew how horrible it would become. This was so unanticipated, so difficult to operationalize. What they were able to create was amazing, but there were just too many barriers to have it work smoothly,” Dr. Krouss said.
“The military stepped in, and they helped us so much. We wouldn’t have been able to survive without their help.” But there is only so much a field hospital can do to provide acute medical care. Later, military medical teams shifted to roles in temporary units inside the permanent hospitals. “They came to the hospital wanting to be deployed,” she said.
“We could only send patients [to the field hospital] who were fairly stable, and choosing the right ones was difficult,” Dr. Krouss said. In the end, not many COVID-19 patients from NYC Health + Hospitals ended up going to the Javits Center, in part because the paperwork and logistics of getting someone in were a barrier, she said. Referring doctors had to call a phone number and speak with a New York City Department of Health employee to go through the criteria for admission to the field hospital.
“That could take up to 30 minutes before getting approval. Then you had to go through the same process all over again for sign-out to another physician, and then register the patient with a special bar code. Then you had to arrange ambulance transfer. Doctors didn’t want to go through all of that – everybody was too busy,” she explained. Hospitalists have since worked on streamlining the criteria. “Now we have a good process for the future. We made it more seamless,” she noted.
Susan Lee, DO, MBA, hospitalist and chief medical officer for Renown Regional Medical Center in Reno, Nev., helped to plan an alternate care site in anticipation of up to a thousand COVID patients in her community – far beyond the capacity of the existing hospitals. Hospitalists were involved the entire time in planning, design of the unit, design of staffing models, care protocols, and the like, working through an evidence-based medicine committee and a COVID-19 provider task force for the Renown Health System.
“Because of a history of fires and earthquakes in this region, we had an emergency planning infrastructure in place. We put the field hospital on the first and second floors of a parking garage, with built-in negative pressure capacity. We also built space for staff break rooms and desk space. It took 10 days to build the hospital, thanks to some very talented people in management and facility design,” Dr. Lee said.
Then, the hospital was locked up and sat empty for 7 months, until the surge in December 2020, when Reno was hit by a bigger wave – this time exceeding the hospitals’ capacity. Through mid-January of 2021, clinicians cared for approximately 240 COVID-19 patients, up to 47 at a time, in the field hospital. A third wave in the autumn of 2021 plateaued at a level lower than the previous fall, so the field hospital is not currently needed.
Replicating hospital work flows
“We ensured that everybody who needed to be within the walls of the permanent hospitals was able to stay there,” said Dr. Lee’s colleague, hospitalist Adnan (Eddy) Akbar, MD. “The postacute system we ordinarily rely on was no longer accepting patients. Other hospitals in the area were able to manage within their capacity because Renown’s field hospital could admit excess patients. We tried to replicate in the field hospital, as much as possible, the work flows and systems of our main hospital.”
When the field hospital finally opened, Dr. Akbar said, “we had a good feeling. We were ready. If something more catastrophic had come down, we were ready to care for more patients. In the field hospital you have to keep monitoring your work flow – almost on a daily basis. But we felt privileged to be working for a system where you knew you can go and care for everyone who needed care.”
One upside of the field hospital experience for participating clinicians, Dr. Lee added, is the opportunity to practice creatively. “The downside is it’s extremely expensive, and has consequences for the mental health of staff. Like so many of these things, it wore on people over time – such as all the time spent donning and doffing protective equipment. And recently the patients have become a lot less gracious.”
Amy Baughman, MD, a hospitalist at Massachusetts General Hospital in Boston, was co-medical director of the postacute care section of a 1,000-bed field hospital, Boston Hope Medical Center, opened in April 2020 at the Boston Convention and Exhibition Center. The other half of the facility was dedicated to undomiciled COVID-19 patients who had no place else to go. Peak census was around 100 patients, housed on four units, each with a clinical team led by a physician.
Dr. Baughman’s field hospital experience has taught her the importance of “staying within your domain of expertise. Physicians are attracted to difficult problems and want to do everything themselves. Next time I won’t be the one installing hand sanitizer dispensers.” A big part of running a field hospital is logistics, she said, and physicians are trained clinicians, not necessarily logistics engineers.
“So it’s important to partner with logistics experts. A huge part of our success in building a facility in 9 days of almost continuous construction was the involvement of the National Guard,” she said. An incident command system was led by an experienced military general as incident commander, with two clinical codirectors. The Army also sent in full teams of health professionals.
The facility admitted far fewer patients than the worst-case projections anticipated before it closed in June 2020. “But at the end of the day, we provided a lot of excellent care,” Dr. Baughman said. “This was about preparing for a disaster. It was all hands on deck, and the hands were health professionals. We spent a lot of money for the patients we took care of, but we had no choice, based on what we believed could happen. At that time, so many nursing facilities and homeless shelters were closed to us. It was impossible to predict what utilization would be.”
Subsequent experience has taught that a lot of even seriously ill COVID-19 patients can be managed safely at home, for example, using accelerated home oxygen monitoring with telelinked pulse oximeters. But in the beginning, Dr. Baughman said, “it was a new situation for us. We had seen what happened in Europe and China. It’s a great thing to be overprepared.”
References
1. Horowitz J. Italy’s health care system groans under coronavirus – a warning to the world. New York Times. 2020 Mar 12.
2. Bell SA et al. T-Minus 10 days: The role of an academic medical institution in field hospital planning. Prehosp Disaster Med. 2021 Feb 18:1-6. doi: 10.1017/S1049023X21000224.
3. Singh K et al. Evaluating a widely implemented proprietary deterioration index model among hospitalized patients with COVID-19. Ann Am Thorac Soc. 2021 Jul;18(7):1129-37. doi: 10.1513/AnnalsATS.202006-698OC.
4. Bell SA et al. Alternate care sites during COVID-19 pandemic: Policy implications for pandemic surge planning. Disaster Med Public Health Prep. 2021 Jul 23;1-3. doi: 10.1017/dmp.2021.241.
‘It’s a great thing to be overprepared’
‘It’s a great thing to be overprepared’
At the height of the COVID-19 pandemic’s terrifying first wave in the spring of 2020, dozens of hospitals in high-incidence areas either planned or opened temporary, emergency field hospitals to cover anticipated demand for beds beyond the capacity of local permanent hospitals.
Chastened by images of overwhelmed health care systems in Northern Italy and other hard-hit areas,1 the planners used available modeling tools and estimates for projecting maximum potential need in worst-case scenarios. Some of these temporary hospitals never opened. Others opened in convention centers, parking garages, or parking lot tents, and ended up being used to a lesser degree than the worst-case scenarios.
But those who participated in the planning – including, in many cases, hospitalists – believe they created alternate care site manuals that could be quickly revived in the event of future COVID surges or other, similar crises. Better to plan for too much, they say, than not plan for enough.
Field hospitals or alternate care sites are defined in a recent journal article in Prehospital Disaster Medicine as “locations that can be converted to provide either inpatient and/or outpatient health services when existing facilities are compromised by a hazard impact or the volume of patients exceeds available capacity and/or capabilities.”2
The lead author of that report, Sue Anne Bell, PhD, FNP-BC, a disaster expert and assistant professor of nursing at the University of Michigan (UM), was one of five members of the leadership team for planning UM’s field hospital. They used an organizational unit structure based on the U.S. military’s staffing structure, with their work organized around six units of planning: personnel and labor, security, clinical operations, logistics and supply, planning and training, and communications. This team planned a 519-bed step-down care facility, the Michigan Medicine Field Hospital, for a 73,000-foot indoor track and performance facility at the university, three miles from UM’s main hospital. The aim was to provide safe care in a resource-limited environment.
“We were prepared, but the need never materialized as the peak of COVID cases started to subside,” Dr. Bell said. The team was ready to open within days using a “T-Minus” framework of days remaining on an official countdown clock. But when the need and deadlines kept getting pushed back, that gave them more time to develop clearer procedures.
Two Michigan Medicine hospitalists, Christopher Smith, MD, and David Paje, MD, MPH, both professors at UM’s medical school, were intimately involved in the process. “I was the medical director for the respiratory care unit that was opened for COVID patients, so I was pulled in to assist in the field hospital planning,” said Dr. Smith.
Dr. Paje was director of the short-stay unit and had been a medical officer in the U.S. Army, with training in how to set up military field hospitals. He credits that background as helpful for UM’s COVID field hospital planning, along with his experience in hospital medicine operations.
“We expected that these patients would need the expertise of hospitalists, who had quickly become familiar with the peculiarities of the new disease. That played a role in the decisions we made. Hospitalists were at the front lines of COVID care and had unique clinical insights about managing those with severe disease,” Dr. Paje added.
“When we started, the projections were dire. You don’t want to believe something like that is going to happen. When COVID started to cool off, it was more of a relief to us than anything else,” Dr. Smith said. “Still, it was a very worthwhile exercise. At the end of the day, we put together a comprehensive guide, which is ready for the next crisis.”
Baltimore builds a convention center hospital
A COVID-19 field hospital was planned and executed at an exhibit hall in the Baltimore Convention Center, starting in March 2020 under the leadership of Johns Hopkins Bayview hospitalist Eric Howell, MD, MHM, who eventually handed over responsibilities as chief medical officer when he assumed the position of CEO for the Society of Hospital Medicine in July of that year.
Hopkins collaborated with the University of Maryland health system and state leaders, including the Secretary of Health, to open a 252-bed temporary facility, which at its peak carried a census of 48 patients, with no on-site mortality or cardiac arrests, before it was closed in June 2021 – ready to reopen if necessary. It also served as Baltimore’s major site for polymerase chain reaction COVID-19 testing, vaccinations, and monoclonal antibody infusions, along with medical research.
“My belief at the time we started was that my entire 20-year career as a hospitalist had prepared me for the challenge of opening a COVID field hospital,” Dr. Howell said. “I had learned how to build clinical programs. The difference was that instead of months and years to build a program, we only had a few weeks.”
His first request was to bring on an associate medical director for the field hospital, Melinda E. Kantsiper, MD, a hospitalist and director of clinical operations in the Division of Hospital Medicine at Johns Hopkins Bayview. She became the field hospital’s CMO when Dr. Howell moved to SHM. “As hospitalists, we are trained to care for the patient in front of us while at the same time creating systems that can adjust to rapidly changing circumstances,” Dr. Kantsiper said. “We did what was asked and set up a field hospital that cared for a total of 1,500 COVID patients.”
Hospitalists have the tools that are needed for this work, and shouldn’t be reluctant to contribute to field hospital planning, she said. “This was a real eye-opener for me. Eric explained to me that hospitalists really practice acute care medicine, which doesn’t have to be within the four walls of a hospital.”
The Baltimore field hospital has been a fantastic experience, Dr. Kantsiper added. “But it’s not a building designed for health care delivery.” For the right group of providers, the experience of working in a temporary facility such as this can be positive and exhilarating. “But we need to make sure we take care of our staff. It takes a toll. How we keep them safe – physically and emotionally – has to be top of mind,” she said.
The leaders at Hopkins Medicine and their collaborators truly engaged with the field hospital’s mission, Dr. Howell added.
“They gave us a lot of autonomy and helped us break down barriers. They gave us the political capital to say proper PPE was absolutely essential. As hard and devastating as the pandemic has been, one take-away is that we showed that we can be more flexible and elastic in response to actual needs than we used to think.”
Range of challenges
Among the questions that need to be answered by a field hospital’s planners, the first is ‘where to put it?’ The answer is, hopefully, someplace not too far away, large enough, with ready access to supplies and intake. The next question is ‘who is the patient?’ Clinicians must determine who goes to the field hospital versus who stays at the standing hospital. How sick should these patients be? And when do they need to go back to the permanent hospital? Can staff be trained to recognize when patients in the field hospital are starting to decompensate? The EPIC Deterioration Index3 is a proprietary prediction model that was used by more than a hundred hospitals during the pandemic.
The hospitalist team may develop specific inclusion and exclusion criteria – for example, don’t admit patients who are receiving oxygen therapy above a certain threshold or who are hemodynamically unstable. These criteria should reflect the capacity of the field hospital and the needs of the permanent hospital. At Michigan, as at other field hospital sites, the goal was to offer a step-down or postacute setting for patients with COVID-19 who were too sick to return home but didn’t need acute or ICU-level care, thereby freeing up beds at the permanent hospital for patients who were sicker.
Other questions: What is the process for admissions and discharges? How will patients be transported? What kind of staffing is needed, and what levels of care will be provided? What about rehabilitation services, or palliative care? What about patients with substance abuse or psychiatric comorbidities?
“Are we going to do paper charting? How will that work out for long-term documentation and billing?” Dr. Bell said. A clear reporting structure and communication pathways are essential. Among the other operational processes to address, outlined in Dr. Bell’s article, are orientation and training, PPE donning and doffing procedures, the code or rapid response team, patient and staff food and nutrition, infection control protocols, pharmacy services, access to radiology, rounding procedures, staff support, and the morgue.
One other issue that shouldn’t be overlooked is health equity in the field hospital. “Providing safe and equitable care should be the focus. Thinking who goes to the field hospital should be done within a health equity framework,” Dr. Bell said.4 She also wonders if field hospital planners are sharing their experience with colleagues across the country and developing more collaborative relationships with other hospitals in their communities.
“Field hospitals can be different things,” Dr. Bell said. “The important take-home is it doesn’t have to be in a tent or a parking garage, which can be suboptimal.” In many cases, it may be better to focus on finding unused space within the hospital – whether a lobby, staff lounge, or unoccupied unit – closer to personnel, supplies, pharmacy, and the like. “I think the pandemic showed us how unprepared we were as a health care system, and how much more we need to do in preparation for future crises.”
Limits to the temporary hospital
In New York City, which had the country’s worst COVID-19 outbreak during the first surge in the spring of 2020, a 1,000-bed field hospital was opened at the Jacob Javits Center in March 2020 and closed that June. “I was in the field hospital early, in March and April, when our hospitals were temporarily overrun,” said hospitalist Mona Krouss, MD, FACP, CPPS, NYC Health + Hospitals’ director of patient safety. “My role was to figure out how to get patients on our medical floors into these field hospitals, with responsibility for helping to revise admission criteria,” she said.
“No one knew how horrible it would become. This was so unanticipated, so difficult to operationalize. What they were able to create was amazing, but there were just too many barriers to have it work smoothly,” Dr. Krouss said.
“The military stepped in, and they helped us so much. We wouldn’t have been able to survive without their help.” But there is only so much a field hospital can do to provide acute medical care. Later, military medical teams shifted to roles in temporary units inside the permanent hospitals. “They came to the hospital wanting to be deployed,” she said.
“We could only send patients [to the field hospital] who were fairly stable, and choosing the right ones was difficult.” Dr. Krouss said. In the end, not a lot of COVID-19 patients from NYC Health + Hospitals ended up going to the Javits Center, in part because the paperwork and logistics of getting someone in was a barrier, Dr. Krouss said. A process was established for referring doctors to call a phone number and speak with a New York City Department of Health employee to go through the criteria for admission to the field hospital.
At the height of the COVID-19 pandemic’s terrifying first wave in the spring of 2020, dozens of hospitals in high-incidence areas either planned or opened temporary, emergency field hospitals to cover anticipated demand for beds beyond the capacity of local permanent hospitals.
Chastened by images of overwhelmed health care systems in Northern Italy and other hard-hit areas,1 the planners used available modeling tools and estimates for projecting maximum potential need in worst-case scenarios. Some of these temporary hospitals never opened. Others opened in convention centers, parking garages, or parking lot tents, and ended up being used to a lesser degree than the worst-case scenarios.
But those who participated in the planning – including, in many cases, hospitalists – believe they created alternate care site manuals that could be quickly revived in the event of future COVID surges or other, similar crises. Better to plan for too much, they say, than not plan for enough.
Field hospitals or alternate care sites are defined in a recent article in Prehospital and Disaster Medicine as “locations that can be converted to provide either inpatient and/or outpatient health services when existing facilities are compromised by a hazard impact or the volume of patients exceeds available capacity and/or capabilities.”2
The lead author of that report, Sue Anne Bell, PhD, FNP-BC, a disaster expert and assistant professor of nursing at the University of Michigan (UM), was one of five members of the leadership team for planning UM’s field hospital. They used an organizational unit structure based on the U.S. military’s staffing structure, with their work organized around six units of planning: personnel and labor, security, clinical operations, logistics and supply, planning and training, and communications. This team planned a 519-bed step-down care facility, the Michigan Medicine Field Hospital, in a 73,000-square-foot indoor track and performance facility at the university, three miles from UM’s main hospital. The aim was to provide safe care in a resource-limited environment.
“We were prepared, but the need never materialized as the peak of COVID cases started to subside,” Dr. Bell said. The team was ready to open within days, using a “T-Minus” framework of days remaining on an official countdown clock. As the need and the deadlines kept getting pushed back, the team gained more time to develop clearer procedures.
Two Michigan Medicine hospitalists, Christopher Smith, MD, and David Paje, MD, MPH, both professors at UM’s medical school, were intimately involved in the process. “I was the medical director for the respiratory care unit that was opened for COVID patients, so I was pulled in to assist in the field hospital planning,” said Dr. Smith.
Dr. Paje was director of the short-stay unit and had been a medical officer in the U.S. Army, with training in how to set up military field hospitals. He credits that background as helpful for UM’s COVID field hospital planning, along with his experience in hospital medicine operations.
“We expected that these patients would need the expertise of hospitalists, who had quickly become familiar with the peculiarities of the new disease. That played a role in the decisions we made. Hospitalists were at the front lines of COVID care and had unique clinical insights about managing those with severe disease,” Dr. Paje added.
“When we started, the projections were dire. You don’t want to believe something like that is going to happen. When COVID started to cool off, it was more of a relief to us than anything else,” Dr. Smith said. “Still, it was a very worthwhile exercise. At the end of the day, we put together a comprehensive guide, which is ready for the next crisis.”
Baltimore builds a convention center hospital
A COVID-19 field hospital was planned and executed at an exhibit hall in the Baltimore Convention Center, starting in March 2020 under the leadership of Johns Hopkins Bayview hospitalist Eric Howell, MD, MHM, who eventually handed over responsibilities as chief medical officer when he assumed the position of CEO for the Society of Hospital Medicine in July of that year.
Hopkins collaborated with the University of Maryland health system and state leaders, including the Secretary of Health, to open a 252-bed temporary facility, which at its peak carried a census of 48 patients, with no on-site mortality or cardiac arrests, before it was closed in June 2021 – ready to reopen if necessary. It also served as Baltimore’s major site for polymerase chain reaction COVID-19 testing, vaccinations, and monoclonal antibody infusions, along with medical research.
“My belief at the time we started was that my entire 20-year career as a hospitalist had prepared me for the challenge of opening a COVID field hospital,” Dr. Howell said. “I had learned how to build clinical programs. The difference was that instead of months and years to build a program, we only had a few weeks.”
His first request was to bring on an associate medical director for the field hospital, Melinda E. Kantsiper, MD, a hospitalist and director of clinical operations in the Division of Hospital Medicine at Johns Hopkins Bayview. She became the field hospital’s CMO when Dr. Howell moved to SHM. “As hospitalists, we are trained to care for the patient in front of us while at the same time creating systems that can adjust to rapidly changing circumstances,” Dr. Kantsiper said. “We did what was asked and set up a field hospital that cared for a total of 1,500 COVID patients.”
Hospitalists have the tools that are needed for this work, and shouldn’t be reluctant to contribute to field hospital planning, she said. “This was a real eye-opener for me. Eric explained to me that hospitalists really practice acute care medicine, which doesn’t have to be within the four walls of a hospital.”
The Baltimore field hospital has been a fantastic experience, Dr. Kantsiper added. “But it’s not a building designed for health care delivery.” For the right group of providers, the experience of working in a temporary facility such as this can be positive and exhilarating. “But we need to make sure we take care of our staff. It takes a toll. How we keep them safe – physically and emotionally – has to be top of mind,” she said.
The leaders at Hopkins Medicine and their collaborators truly engaged with the field hospital’s mission, Dr. Howell added.
“They gave us a lot of autonomy and helped us break down barriers. They gave us the political capital to say proper PPE was absolutely essential. As hard and devastating as the pandemic has been, one take-away is that we showed that we can be more flexible and elastic in response to actual needs than we used to think.”
Range of challenges
Among the questions that need to be answered by a field hospital’s planners, the first is ‘where to put it?’ The answer is, hopefully, someplace not too far away, large enough, with ready access to supplies and intake. The next question is ‘who is the patient?’ Clinicians must determine who goes to the field hospital versus who stays at the standing hospital. How sick should these patients be? And when do they need to go back to the permanent hospital? Can staff be trained to recognize when patients in the field hospital are starting to decompensate? The Epic Deterioration Index3 is a proprietary prediction model that was used by more than a hundred hospitals during the pandemic.
The hospitalist team may develop specific inclusion and exclusion criteria – for example, don’t admit patients who are receiving oxygen therapy above a certain threshold or who are hemodynamically unstable. These criteria should reflect the capacity of the field hospital and the needs of the permanent hospital. At Michigan, as at other field hospital sites, the goal was to offer a step-down or postacute setting for patients with COVID-19 who were too sick to return home but didn’t need acute or ICU-level care, thereby freeing up beds at the permanent hospital for patients who were sicker.
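Such criteria lend themselves to a simple screening checklist. Below is a minimal sketch of how a rule set along these lines might be encoded; the oxygen ceiling and hemodynamic bounds are illustrative assumptions, not the actual thresholds any of these programs used.

```python
# Illustrative only: every threshold below is hypothetical, not any
# program's actual admission criteria.
from dataclasses import dataclass

@dataclass
class Candidate:
    o2_flow_lpm: float         # supplemental oxygen flow, liters/minute
    systolic_bp: int           # mmHg
    heart_rate: int            # beats/minute
    needs_icu_level_care: bool

MAX_O2_FLOW_LPM = 4            # hypothetical ceiling for a step-down setting
SBP_FLOOR, HR_CEILING = 90, 120  # hypothetical hemodynamic bounds

def eligible_for_field_hospital(c: Candidate) -> bool:
    """Screen a patient for transfer to a step-down field hospital.

    Mirrors the kinds of exclusions named in the article: too much
    supplemental oxygen, hemodynamic instability, or ICU-level needs
    keep the patient at the permanent hospital.
    """
    if c.needs_icu_level_care:
        return False
    if c.o2_flow_lpm > MAX_O2_FLOW_LPM:
        return False
    if c.systolic_bp < SBP_FLOOR or c.heart_rate > HR_CEILING:
        return False
    return True

# A stable patient on 2 L/min qualifies; one on 6 L/min stays put.
print(eligible_for_field_hospital(Candidate(2, 118, 88, False)))  # True
print(eligible_for_field_hospital(Candidate(6, 118, 88, False)))  # False
```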
Other questions: What is the process for admissions and discharges? How will patients be transported? What kind of staffing is needed, and what levels of care will be provided? What about rehabilitation services, or palliative care? What about patients with substance abuse or psychiatric comorbidities?
“Are we going to do paper charting? How will that work out for long-term documentation and billing?” Dr. Bell said. A clear reporting structure and communication pathways are essential. Among the other operational processes to address, outlined in Dr. Bell’s article, are orientation and training, PPE donning and doffing procedures, the code or rapid response team, patient and staff food and nutrition, infection control protocols, pharmacy services, access to radiology, rounding procedures, staff support, and the morgue.
One other issue that shouldn’t be overlooked is health equity in the field hospital. “Providing safe and equitable care should be the focus. Thinking about who goes to the field hospital should be done within a health equity framework,” Dr. Bell said.4 She also wonders whether field hospital planners are sharing their experience with colleagues across the country and developing more collaborative relationships with other hospitals in their communities.
“Field hospitals can be different things,” Dr. Bell said. “The important take-home is it doesn’t have to be in a tent or a parking garage, which can be suboptimal.” In many cases, it may be better to focus on finding unused space within the hospital – whether a lobby, staff lounge, or unoccupied unit – closer to personnel, supplies, pharmacy, and the like. “I think the pandemic showed us how unprepared we were as a health care system, and how much more we need to do in preparation for future crises.”
Limits to the temporary hospital
In New York City, which had the country’s worst COVID-19 outbreak during the first surge in the spring of 2020, a 1,000-bed field hospital was opened at the Jacob Javits Center in March 2020 and closed that June. “I was in the field hospital early, in March and April, when our hospitals were temporarily overrun,” said hospitalist Mona Krouss, MD, FACP, CPPS, NYC Health + Hospitals’ director of patient safety. “My role was to figure out how to get patients on our medical floors into these field hospitals, with responsibility for helping to revise admission criteria,” she said.
“No one knew how horrible it would become. This was so unanticipated, so difficult to operationalize. What they were able to create was amazing, but there were just too many barriers to have it work smoothly,” Dr. Krouss said.
“The military stepped in, and they helped us so much. We wouldn’t have been able to survive without their help.” But there is only so much a field hospital can do to provide acute medical care. Later, military medical teams shifted to roles in temporary units inside the permanent hospitals. “They came to the hospital wanting to be deployed,” she said.
“We could only send patients [to the field hospital] who were fairly stable, and choosing the right ones was difficult,” Dr. Krouss said. In the end, not many COVID-19 patients from NYC Health + Hospitals ended up going to the Javits Center, in part because the paperwork and logistics of getting someone in were a barrier, she said. A process was established for referring doctors to call a phone number and speak with a New York City Department of Health employee to go through the criteria for admission to the field hospital.
“That could take up to 30 minutes before getting approval. Then you had to go through the same process all over again for sign-out to another physician, and then register the patient with a special bar code. Then you had to arrange ambulance transfer. Doctors didn’t want to go through all of that – everybody was too busy,” she explained. Hospitalists have since worked on streamlining the criteria. “Now we have a good process for the future. We made it more seamless,” she noted.
Susan Lee, DO, MBA, hospitalist and chief medical officer for Renown Regional Medical Center in Reno, Nev., helped to plan an alternate care site in anticipation of up to a thousand COVID patients in her community – far beyond the scope of the existing hospitals. Hospitalists were involved the entire time in planning, design of the unit, design of staffing models, care protocols, and the like, working through an evidence-based medical committee and a COVID-19 provider task force for the Renown Health System.
“Because of a history of fires and earthquakes in this region, we had an emergency planning infrastructure in place. We put the field hospital on the first and second floors of a parking garage, with built-in negative pressure capacity. We also built space for staff break rooms and desk space. It took 10 days to build the hospital, thanks to some very talented people in management and facility design,” Dr. Lee said.
Then, the hospital was locked up and sat empty for 7 months, until the surge in December 2020, when Reno was hit by a bigger wave – this time exceeding the hospitals’ capacity. Through mid-January of 2021, clinicians cared for approximately 240 COVID-19 patients, up to 47 at a time, in the field hospital. A third wave in the autumn of 2021 plateaued at a level lower than the previous fall, so the field hospital is not currently needed.
Replicating hospital work flows
“We ensured that everybody who needed to be within the walls of the permanent hospitals was able to stay there,” said Dr. Lee’s colleague, hospitalist Adnan (Eddy) Akbar, MD. “The postacute system we ordinarily rely on was no longer accepting patients. Other hospitals in the area were able to manage within their capacity because Renown’s field hospital could admit excess patients. We tried to replicate in the field hospital, as much as possible, the work flows and systems of our main hospital.”
When the field hospital finally opened, Dr. Akbar said, “we had a good feeling. We were ready. If something more catastrophic had come down, we were ready to care for more patients. In the field hospital you have to keep monitoring your work flow – almost on a daily basis. But we felt privileged to be working for a system where you knew you could go and care for everyone who needed care.”
One upside of the field hospital experience for participating clinicians, Dr. Lee added, is the opportunity to practice creatively. “The downside is it’s extremely expensive, and has consequences for the mental health of staff. Like so many of these things, it wore on people over time – such as all the time spent donning and doffing protective equipment. And recently the patients have become a lot less gracious.”
Amy Baughman, MD, a hospitalist at Massachusetts General Hospital in Boston, was co-medical director of the postacute care section of a 1,000-bed field hospital, Boston Hope Medical Center, opened in April 2020 at the Boston Convention and Exhibition Center. The other half of the facility was dedicated to undomiciled COVID-19 patients who had no place else to go. Peak census was around 100 patients, housed on four units, each with a clinical team led by a physician.
Dr. Baughman’s field hospital experience has taught her the importance of “staying within your domain of expertise. Physicians are attracted to difficult problems and want to do everything themselves. Next time I won’t be the one installing hand sanitizer dispensers.” A big part of running a field hospital is logistics, she said, and physicians are trained clinicians, not necessarily logistics engineers.
“So it’s important to partner with logistics experts. A huge part of our success in building a facility in 9 days of almost continuous construction was the involvement of the National Guard,” she said. An incident command system was led by an experienced military general serving as incident commander, alongside two clinical codirectors. The Army also sent in full teams of health professionals.
The facility admitted far fewer patients than the worst-case projections before it closed in June 2020. “But at the end of the day, we provided a lot of excellent care,” Dr. Baughman said. “This was about preparing for a disaster. It was all hands on deck, and the hands were health professionals. We spent a lot of money for the patients we took care of, but we had no choice, based on what we believed could happen. At that time, so many nursing facilities and homeless shelters were closed to us. It was impossible to predict what utilization would be.”
Subsequent experience has taught that many COVID-19 patients, even seriously ill ones, can be managed safely at home – for example, with accelerated home oxygen monitoring using telelinked pulse oximeters. But in the beginning, Dr. Baughman said, “it was a new situation for us. We had seen what happened in Europe and China. It’s a great thing to be overprepared.”
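The home-monitoring approach rests on a simple escalation rule: flag the patient when telelinked pulse-oximeter readings fall below a threshold. A minimal sketch follows; the 92% cutoff and the two-consecutive-readings debounce are illustrative assumptions, not a published protocol.

```python
# Sketch of a home SpO2 escalation rule; the 92% threshold and the
# two-consecutive-low-readings debounce are illustrative assumptions.
from typing import Iterable

SPO2_ALERT_THRESHOLD = 92      # percent; hypothetical escalation cutoff
CONSECUTIVE_LOW_REQUIRED = 2   # debounce transient sensor dips

def needs_escalation(readings: Iterable[int]) -> bool:
    """Return True once the required number of consecutive readings
    fall below the alert threshold."""
    consecutive_low = 0
    for spo2 in readings:
        if spo2 < SPO2_ALERT_THRESHOLD:
            consecutive_low += 1
            if consecutive_low >= CONSECUTIVE_LOW_REQUIRED:
                return True
        else:
            consecutive_low = 0
    return False

print(needs_escalation([96, 95, 91, 94]))  # False: isolated dip
print(needs_escalation([95, 91, 90, 93]))  # True: sustained desaturation
```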
References
1. Horowitz J. Italy’s health care system groans under coronavirus – a warning to the world. New York Times. 2020 Mar 12.
2. Bell SA et al. T-Minus 10 days: The role of an academic medical institution in field hospital planning. Prehosp Disaster Med. 2021 Feb 18:1-6. doi: 10.1017/S1049023X21000224.
3. Singh K et al. Evaluating a widely implemented proprietary deterioration index model among hospitalized patients with COVID-19. Ann Am Thorac Soc. 2021 Jul;18(7):1129-37. doi: 10.1513/AnnalsATS.202006-698OC.
4. Bell SA et al. Alternate care sites during COVID-19 pandemic: Policy implications for pandemic surge planning. Disaster Med Public Health Prep. 2021 Jul 23;1-3. doi: 10.1017/dmp.2021.241.
Droperidol/midazolam combo curbs agitation in ED patients
Droperidol plus midazolam curbed acute agitation more rapidly than haloperidol plus lorazepam in a study involving 86 adult patients at a single tertiary medical care center.
Patients with acute agitation present significant safety concerns in the emergency department, according to Jessica Javed, MD, of the University of Louisville (Ky.) and colleagues.
A combination of haloperidol and lorazepam has been widely used to curb agitation in these patients, but droperidol and midazolam could be more effective, owing to faster onset of action, Dr. Javed noted in a presentation at the annual meeting of the American College of Emergency Physicians.
Dr. Javed and colleagues conducted a prospective study to compare time to adequate sedation in agitated patients in the ED. In the trial, 43 patients received droperidol 5 mg plus midazolam 5 mg, and 43 patients received haloperidol plus lorazepam 2 mg. The average age of the patients in the droperidol/midazolam group was 34 years; the average age of the patients in the haloperidol/lorazepam group was 38 years. Baseline demographics, including height, weight, body mass index, and baseline Sedation Assessment Tool (SAT) scores, were similar between the groups.
The SAT score scale ranges from +3 (combative, violent, or out of control) to –3 (no response to stimulation); zero indicates being awake and calm/cooperative. The median baseline SAT score was 3 for both treatment groups.
The primary outcome was the proportion of patients with adequate sedation (defined as SAT scores of ≤0) 10 min after treatment.
Significantly more patients in the droperidol/midazolam group met this outcome, compared with the patients in the haloperidol/lorazepam group (51.2% vs. 7%). Also, significantly more patients in the droperidol/midazolam group achieved adequate sedation at 5, 10, 15, and 30 min than in the haloperidol/lorazepam group.
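As a rough check on that primary outcome: 51.2% and 7% of 43 patients correspond to about 22 and 3 patients, respectively. A minimal sketch using Fisher’s exact test on those inferred counts (an assumption, since only percentages were reported) shows the difference is highly unlikely to be due to chance.

```python
# Inferred counts: 51.2% and 7.0% of 43 patients correspond to roughly
# 22/43 and 3/43 -- an assumption, since only percentages were reported.
from scipy.stats import fisher_exact

sedated = [22, 3]   # adequately sedated (SAT <= 0) at 10 min
total = [43, 43]    # droperidol/midazolam, haloperidol/lorazepam

# 2x2 contingency table: [sedated, not sedated] per treatment arm
table = [
    [sedated[0], total[0] - sedated[0]],   # [22, 21]
    [sedated[1], total[1] - sedated[1]],   # [3, 40]
]
odds_ratio, p_value = fisher_exact(table)

for arm, k, n in zip(("droperidol/midazolam", "haloperidol/lorazepam"),
                     sedated, total):
    print(f"{arm}: {k}/{n} = {k / n:.1%} adequately sedated")
print(f"Fisher's exact p = {p_value:.2e}")  # well below 0.05
```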
More patients in the droperidol/midazolam group required supplemental oxygen than in the haloperidol/lorazepam group (25.6% vs. 9.3%). However, none of the droperidol/midazolam patients required rescue sedation, compared with 16.3% of the haloperidol/lorazepam patients, Dr. Javed noted. None of the patients required endotracheal intubation or experienced extrapyramidal symptoms, she said.
The study was limited by the small sample size and inclusion of data from only a single center.
The results suggest that droperidol/midazolam is superior to intramuscular haloperidol/lorazepam for producing adequate sedation after 10 min in agitated patients, Dr. Javed concluded.
The study received no outside funding. The researchers have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.