Positive Outcomes Following a Multidisciplinary Approach in the Diagnosis and Prevention of Hospital Delirium
From the Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA (Drs. Ching, Darwish, Li, Wong, Simpson, and Funk), the Department of Anesthesia, Cedars-Sinai Medical Center, Los Angeles, CA (Keith Siegel), and the Department of Psychiatry, Cedars-Sinai Medical Center, Los Angeles, CA (Dr. Bamgbose).
Objectives: To reduce the incidence and duration of delirium among patients in a hospital ward through standardized delirium screening tools and nonpharmacologic interventions. To advance nursing-focused education on delirium-prevention strategies. To measure the efficacy of the interventions with the aim of reproducing best practices.
Background: Delirium is associated with poor patient outcomes but may be preventable in a significant percentage of hospitalized patients.
Methods: Following nursing-focused education to prevent delirium, we prospectively evaluated patient care outcomes in a consecutive series of patients who were admitted to a hospital medical-surgical ward within a 25-week period. All patients who had at least 1 Confusion Assessment Method (CAM) documented by a nurse during hospitalization met our inclusion criteria (N = 353). Standards for Quality Improvement Reporting Excellence guidelines were adhered to.
Results: There were 187 patients in the control group, and 166 in the postintervention group. Compared to the control group, the postintervention group had a significant decrease in the incidence of delirium during hospitalization (14.4% vs 4.2%) and a significant decrease in the mean percentage of tested nursing shifts with 1 or more positive CAM (4.9% vs 1.1%). Significant differences in secondary outcomes between the control and postintervention groups included median length of stay (6 days vs 4 days), mean length of stay (8.5 days vs 5.9 days), and use of an indwelling urinary catheter (9.1% vs 2.4%).
Conclusion: A multimodal strategy involving nursing-focused training and nonpharmacologic interventions to address hospital delirium is associated with improved patient care outcomes and nursing confidence. Nurses play an integral role in the early recognition and prevention of hospital delirium, which directly translates to reducing burdens in both patient functionality and health care costs.
Delirium is a disorder characterized by inattention and acute changes in cognition. The fifth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders defines it as a disturbance in attention, awareness, and cognition that develops over hours to a few days and is not better explained by another preexisting, established, or evolving neurocognitive disorder.1 Delirium is common yet often under-recognized among hospitalized patients, particularly in the elderly. The incidence of delirium in elderly patients on admission is estimated to be 11% to 25%, and an additional 29% to 31% of elderly patients develop delirium during hospitalization.2 Delirium costs the health care system an estimated $38 billion to $152 billion per year.3 It is associated with negative outcomes, such as increased new placements to nursing homes, increased mortality, increased risk of dementia, and further cognitive deterioration among patients with dementia.4-6
Despite its prevalence, delirium may be preventable in a significant percentage of hospitalized patients. Targeted intervention strategies, such as frequent reorientation, maximizing sleep, early mobilization, restricting use of psychoactive medications, and addressing hearing or vision impairment, have been demonstrated to significantly reduce the incidence of hospital delirium.7,8 To achieve these goals, we explored the use of a multimodal strategy centered on nursing education. We integrated consistent, standardized delirium screening and nonpharmacologic interventions as part of a preventative protocol to reduce the incidence of delirium in the hospital ward.
Methods
We evaluated a consecutive series of patients who were admitted to a designated hospital medical-surgical ward within a 25-week period between October 2019 and April 2020. All patients during this period who had at least 1 Confusion Assessment Method (CAM) documented by a nurse during hospitalization met our inclusion criteria. Patients who did not have a CAM documented were excluded from the analysis. Delirium was defined according to the CAM diagnostic algorithm.9
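For clarity, the CAM diagnostic algorithm can be summarized as a simple logical rule: delirium requires acute onset with a fluctuating course and inattention, plus either disorganized thinking or an altered level of consciousness.9 The short R sketch below encodes this rule for illustration only; it is not part of the study protocol and is not a substitute for the validated bedside instrument.

```r
# Illustrative encoding of the CAM diagnostic algorithm (Inouye et al, 1990):
# features 1 AND 2 are required, plus EITHER feature 3 OR feature 4.
cam_positive <- function(acute_onset_fluctuating,   # feature 1
                         inattention,               # feature 2
                         disorganized_thinking,     # feature 3
                         altered_consciousness) {   # feature 4
  acute_onset_fluctuating & inattention &
    (disorganized_thinking | altered_consciousness)
}

cam_positive(TRUE, TRUE, FALSE, TRUE)  # TRUE: meets CAM criteria for delirium
```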
Core nursing staff regularly assigned to the ward completed a multimodal training program designed to improve recognition, documentation, and prevention of hospital delirium. Prior to the training, the nurses completed a 5-point Likert scale survey assessing their level of confidence with recognizing delirium risk factors, preventing delirium, addressing delirium, utilizing the CAM tool, and educating others about delirium. Nurses completed the same survey after the study period ended.
The training curriculum for nurses began with an online module reviewing the epidemiology and risk factors for delirium. Nurses then participated in a series of in-service training sessions led by a team of physicians, during which the CAM and nonpharmacologic delirium prevention measures were reviewed and then practiced first-hand. Nursing staff attended an in-person lecture reviewing the current body of literature on delirium risk factors and effective nursing interventions. After formal training was completed, nurses were instructed to perform and document CAM screens for each patient during every shift.
Patients admitted to the hospital unit from the start of the training program (week 1) until the order set was made available (week 15) constituted our control group. The postintervention study group consisted of patients admitted for 10 weeks after the completion of the interventions (weeks 16-25). A timeline of the study events is shown in Figure 1.
Patient demographics and hospital-stay metrics determined a priori were obtained from the Cedars-Sinai Enterprise Information Services core. Age, sex, medical history, and incidence of surgery with anesthesia during hospitalization were recorded. The Charlson Comorbidity Index was calculated from patients’ listed diagnoses following discharge. Primary outcomes included incidence of patients with delirium during hospitalization, percentage of tested shifts with positive CAM screens, length of hospital stay, and survival. Secondary outcomes were measures associated with delirium, including the use of chemical restraints, physical restraints, sitters, and indwelling urinary catheters, as well as new psychiatry and neurology consults. Chemical restraints were defined as administration of a new antipsychotic medication or benzodiazepine for the specific indication of hyperactive delirium or agitation.
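To illustrate how the shift-level CAM outcome can be derived, the brief R sketch below aggregates hypothetical per-CAM records into each patient’s percentage of tested shifts with at least 1 positive CAM; the data frame and column names are assumptions for demonstration, not the study database.

```r
# Toy example (hypothetical data): one row per documented CAM
cams <- data.frame(
  patient_id   = c(1, 1, 1, 2, 2),
  shift_id     = c("d1-day", "d1-night", "d2-day", "d1-day", "d1-night"),
  cam_positive = c(FALSE, TRUE, TRUE, FALSE, FALSE)
)

# Collapse to one row per tested shift (positive if any CAM that shift was positive)
shift_level <- aggregate(cam_positive ~ patient_id + shift_id, data = cams, FUN = any)

# Per-patient percentage of tested shifts with >= 1 positive CAM
pct_positive <- aggregate(cam_positive ~ patient_id, data = shift_level,
                          FUN = function(x) 100 * mean(x))

# Mean across patients, analogous to the group-level figure reported in the Results
mean(pct_positive$cam_positive)
```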
Statistical analysis was conducted by a statistician, using R version 3.6.3.10 P values of < .05 were considered significant. Categorical variables were analyzed using Fisher’s exact test. Continuous variables were analyzed with Welch’s t-test or, for highly skewed continuous variables, with Wilcoxon rank-sum test or Mood’s median test. All patient data were anonymized and stored securely in accordance with institutional guidelines.
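A minimal sketch of these test choices is shown below, using simulated length-of-stay values as a stand-in for the study data (this is not the authors’ analysis script). Base R has no dedicated Mood’s median test function (stats::mood.test is a different test of scale), so an exact median-test construction on counts above versus at or below the pooled median is shown instead.

```r
# Simulated, right-skewed length-of-stay vectors (days) for illustration only
set.seed(1)
control <- rexp(50, rate = 1/8)
post    <- rexp(50, rate = 1/6)

t.test(control, post)        # Welch's t-test (unequal variances by default)
wilcox.test(control, post)   # Wilcoxon rank-sum test for skewed variables

# Exact median-test construction: counts above vs at-or-below the pooled median
pooled_median <- median(c(control, post))
above <- c(control, post) > pooled_median
group <- rep(c("control", "post"), times = c(length(control), length(post)))
fisher.test(table(group, above))
```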
Our project was deemed to represent nonhuman subject research and therefore did not require Institutional Review Board (IRB) approval upon review by our institution’s IRB committee and Office of Research Compliance and Quality Improvement. Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) guidelines were adhered to (Supplementary File can be found at mdedge.com/jcomjournal).
Results
We evaluated 353 patients who met our inclusion criteria: 187 in the control group, and 166 in the postintervention group. Ten patients were readmitted to the ward after their initial discharge; only the initial admission encounters were included in our analysis. Median age, sex, median Charlson Comorbidity Index, and incidence of surgery with anesthesia during hospitalization were comparable between the control and postintervention groups and are summarized in Table 2.
In the control group, 1572 CAMs were performed, with 74 positive CAMs recorded among 27 patients with delirium. In the postintervention group, 1298 CAMs were performed, with 12 positive CAMs recorded among 7 patients with delirium (Figure 2). Primary and secondary outcomes, as well as CAM compliance measures, are summarized in Table 3.
Compared to the control group, the postintervention group had a significant decrease in the incidence of delirium during hospitalization (14.4% vs 4.2%, P = .002) and a significant decrease in the mean percentage of tested nursing shifts with 1 or more positive CAM (4.9% vs 1.1%, P = .002). Significant differences in secondary outcomes between the control and postintervention groups included median length of stay (6 days vs 4 days, P = .004), mean length of stay (8.5 days vs 5.9 days, P = .003), and use of an indwelling urinary catheter (9.1% vs 2.4%, P = .012). There was a trend towards decreased incidence of chemical restraints and psychiatry consults, which did not reach statistical significance. Differences in mortality during hospitalization, physical restraint use, and sitter use could not be assessed due to low incidence.
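The incidence comparison can be checked directly from the reported counts; the R sketch below is a rough reanalysis from the published numbers (27/187 vs 7/166 patients with delirium), not the authors’ analysis code.

```r
# 2x2 table built from the counts reported above
delirium_table <- matrix(c(27, 187 - 27,   # control: delirium, no delirium
                           7, 166 - 7),    # postintervention: delirium, no delirium
                         nrow = 2, byrow = TRUE)

prop.table(delirium_table, margin = 1)[, 1]  # approximately 0.144 and 0.042, matching the text
fisher.test(delirium_table)                  # two-sided P value near the reported .002
```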
Compliance with nursing CAM assessments was evaluated. Compared to the control group, the postintervention group saw a significant increase in the percentage of shifts with a CAM performed (54.7% vs 69.1%, P < .001). The median and mean number of CAMs performed per patient were similar between the control and postintervention groups.
Results of nursing surveys completed before and after the training program are listed in Table 4. After training, nurses had a greater level of confidence with recognizing delirium risk factors, preventing delirium, addressing delirium, utilizing the CAM tool, and educating others about delirium.
Discussion
Our study utilized a standardized delirium assessment tool to compare patient cohorts before and after nurse-targeted training interventions on delirium recognition and prevention. Our interventions emphasized nonpharmacologic strategies, which are recommended as first-line in the management of patients with delirium.11 Patients were not excluded from the analysis based on preexisting medical conditions or recent surgery with anesthesia, to reflect conditions representative of community hospitals. We also did not use an inclusion criterion based on age; however, the majority of our patients were older than 70 years, representing those at highest risk for delirium.2 Significant outcomes among patients in the postintervention group included decreased incidence of delirium, lower average length of stay, decreased indwelling urinary catheter use, and increased compliance with delirium screening by nursing staff.
While the study’s focus was primarily on delirium prevention rather than treatment, these strategies may also have conferred the benefit of reversing delirium symptoms. In addition to measuring incidence of delirium, our primary outcome of percentage of tested shifts with 1 or more positive CAM was intended to assess the overall duration in which patients had delirium during their hospitalization. The reduction in shifts with positive CAMs observed in the postintervention group is notable, given that a significant percentage of patients with hospital delirium have the potential for symptom reversibility.12
Multiple studies have shown that hospitalized patients who develop delirium experience prolonged hospital stays, often 5 to 10 days longer.12-14 The decreased incidence and duration of delirium in our postintervention group offer a reasonable explanation for the observed decrease in average length of stay. Our study is consistent with previously documented initiatives showing that nonpharmacologic interventions can effectively address the downstream health and fiscal sequelae of hospital delirium. For example, the Hospital Elder Life Program, a volunteer-based initiative after which elements of our order set were modeled, demonstrated significant reductions in delirium incidence, length of stay, and health care costs.14-16 Other initiatives that focused on educational training for nurses to assess and prevent delirium have demonstrated similar positive results.17-19 Our study provides a model for effective nursing-focused education that can be reproduced in the hospital setting.
Unlike some other studies, which identified delirium based only on physician assessments, our initiative utilized the CAM performed by floor nurses to identify delirium. While this method
Our study demonstrated an increase in overall compliance with CAM screening during the postintervention period, which is significant given the under-recognition of delirium by health care professionals.20 We attribute this increase to nursing staff’s greater appreciation of the importance of delirium screening and higher confidence in recognizing and addressing delirium, as supported by the survey data. While the increased screening of patients should be considered a positive outcome, it also raises the possibility that the observed decrease in delirium incidence in the postintervention group was in fact due to more CAMs being performed on patients without delirium. Likewise, nurses may have become more adept at recognizing true delirium, as opposed to delirium mimics, in the latter period of the study.
Perhaps the greatest limitation of our study is the variability in performing and recording CAMs, as some patients had multiple CAMs recorded while others had none. This may have been affected in part by the increase in COVID-19 cases in our hospital towards the latter half of the study, which changed nursing assignments as well as patient comorbidities in ways that cannot be easily quantified. Given the limited size of our patient cohorts, certain outcomes, such as the use of sitters, physical restraints, and in-hospital mortality, could not be assessed statistically. Inferences about causative relationships between our interventions and the associated outcome measures are necessarily limited in a binary comparison between control and postintervention groups.
Within these limitations, our study demonstrates promising results in core dimensions of patient care. We anticipate further quality improvement initiatives involving greater numbers of nursing staff and patients to better quantify the impact of nonpharmacologic nursing-centered interventions for preventing hospital delirium.
Conclusion
A multimodal strategy involving nursing-focused training and nonpharmacologic interventions to address hospital delirium is associated with improved patient care outcomes and nursing confidence. Nurses play an integral role in the early recognition and prevention of hospital delirium, which directly translates to reducing burdens in both patient functionality and health care costs. Education and tools to equip nurses to perform standardized delirium screening and interventions should be prioritized.
Acknowledgment: The authors thank Olena Svetlov, NP, Oscar Abarca, Jose Chavez, and Jenita Gutierrez.
Corresponding author: Jason Ching, MD, Department of Neurology, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Los Angeles, CA 90048; [email protected].
Financial disclosures: None.
Funding: This research was supported by NIH National Center for Advancing Translational Sciences (NCATS) UCLA CTSI Grant Number UL1TR001881.
1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th edition. American Psychiatric Association; 2013.
2. Vasilevskis EE, Han JH, Hughes CG, et al. Epidemiology and risk factors for delirium across hospital settings. Best Pract Res Clin Anaesthesiol. 2012;26(3):277-287. doi:10.1016/j.bpa.2012.07.003
3. Leslie DL, Marcantonio ER, Zhang Y, et al. One-year health care costs associated with delirium in the elderly population. Arch Intern Med. 2008;168(1):27-32. doi:10.1001/archinternmed.2007.4
4. McCusker J, Cole M, Abrahamowicz M, et al. Delirium predicts 12-month mortality. Arch Intern Med. 2002;162(4):457-463. doi:10.1001/archinte.162.4.457
5. Witlox J, Eurelings LS, de Jonghe JF, et al. Delirium in elderly patients and the risk of postdischarge mortality, institutionalization, and dementia: a meta-analysis. JAMA. 2010;304(4):443-451. doi:10.1001/jama.2010.1013
6. Gross AL, Jones RN, Habtemariam DA, et al. Delirium and long-term cognitive trajectory among persons with dementia. Arch Intern Med. 2012;172(17):1324-1331. doi:10.1001/archinternmed.2012.3203
7. Inouye SK. Prevention of delirium in hospitalized older patients: risk factors and targeted intervention strategies. Ann Med. 2000;32(4):257-263. doi:10.3109/07853890009011770
8. Siddiqi N, Harrison JK, Clegg A, et al. Interventions for preventing delirium in hospitalised non-ICU patients. Cochrane Database Syst Rev. 2016;3:CD005563. doi:10.1002/14651858.CD005563.pub3
9. Inouye SK, van Dyck CH, Alessi CA, et al. Clarifying confusion: the confusion assessment method. A new method for detection of delirium. Ann Intern Med. 1990;113(12):941-948. doi:10.7326/0003-4819-113-12-941
10. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; 2017.
11. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24
12. Siddiqi N, House AO, Holmes JD. Occurrence and outcome of delirium in medical in-patients: a systematic literature review. Age Ageing. 2006;35(4):350-364. doi:10.1093/ageing/afl005
13. Ely EW, Shintani A, Truman B, et al. Delirium as a predictor of mortality in mechanically ventilated patients in the intensive care unit. JAMA. 2004;291(14):1753-1762. doi:10.1001/jama.291.14.1753
14. Chen CC, Lin MT, Tien YW, et al. Modified Hospital Elder Life Program: effects on abdominal surgery patients. J Am Coll Surg. 2011;213(2):245-252. doi:10.1016/j.jamcollsurg.2011.05.004
15. Zaubler TS, Murphy K, Rizzuto L, et al. Quality improvement and cost savings with multicomponent delirium interventions: replication of the Hospital Elder Life Program in a community hospital. Psychosomatics. 2013;54(3):219-226. doi:10.1016/j.psym.2013.01.010
16. Rubin FH, Neal K, Fenlon K, et al. Sustainability and scalability of the Hospital Elder Life Program at a community hospital. J Am Geriatr Soc. 2011;59(2):359-365. doi:10.1111/j.1532-5415.2010.03243.x
17. Milisen K, Foreman MD, Abraham IL, et al. A nurse-led interdisciplinary intervention program for delirium in elderly hip-fracture patients. J Am Geriatr Soc. 2001;49(5):523-532. doi:10.1046/j.1532-5415.2001.49109.x
18. Lundström M, Edlund A, Karlsson S, et al. A multifactorial intervention program reduces the duration of delirium, length of hospitalization, and mortality in delirious patients. J Am Geriatr Soc. 2005;53(4):622-628. doi:10.1111/j.1532-5415.2005.53210.x
19. Tabet N, Hudson S, Sweeney V, et al. An educational intervention can prevent delirium on acute medical wards. Age Ageing. 2005;34(2):152-156. doi:10.1093/ageing/afi031
20. Han JH, Zimmerman EE, Cutler N, et al. Delirium in older emergency department patients: recognition, risk factors, and psychomotor subtypes. Acad Emerg Med. 2009;16(3):193-200. doi:10.1111/j.1553-2712.2008.00339.x
From the Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA (Drs. Ching, Darwish, Li, Wong, Simpson, and Funk), the Department of Anesthesia, Cedars-Sinai Medical Center, Los Angeles, CA (Keith Siegel), and the Department of Psychiatry, Cedars-Sinai Medical Center, Los Angeles, CA (Dr. Bamgbose).
Objectives: To reduce the incidence and duration of delirium among patients in a hospital ward through standardized delirium screening tools and nonpharmacologic interventions. To advance nursing-focused education on delirium-prevention strategies. To measure the efficacy of the interventions with the aim of reproducing best practices.
Background: Delirium is associated with poor patient outcomes but may be preventable in a significant percentage of hospitalized patients.
Methods: Following nursing-focused education to prevent delirium, we prospectively evaluated patient care outcomes in a consecutive series of patients who were admitted to a hospital medical-surgical ward within a 25-week period. All patients who had at least 1 Confusion Assessment Method (CAM) documented by a nurse during hospitalization met our inclusion criteria (N = 353). Standards for Quality Improvement Reporting Excellence guidelines were adhered to.
Results: There were 187 patients in the control group, and 166 in the postintervention group. Compared to the control group, the postintervention group had a significant decrease in the incidence of delirium during hospitalization (14.4% vs 4.2%) and a significant decrease in the mean percentage of tested nursing shifts with 1 or more positive CAM (4.9% vs 1.1%). Significant differences in secondary outcomes between the control and postintervention groups included median length of stay (6 days vs 4 days), mean length of stay (8.5 days vs 5.9 days), and use of an indwelling urinary catheter (9.1% vs 2.4%).
Conclusion: A multimodal strategy involving nursing-focused training and nonpharmacologic interventions to address hospital delirium is associated with improved patient care outcomes and nursing confidence. Nurses play an integral role in the early recognition and prevention of hospital delirium, which directly translates to reducing burdens in both patient functionality and health care costs.
Delirium is a disorder characterized by inattention and acute changes in cognition. It is defined by the American Psychiatric Association’s fifth edition of the Diagnostic and Statistical Manual of Mental Disorders as a disturbance in attention, awareness, and cognition over hours to a few days that is not better explained by a preexisting, established, or other evolving neurocognitive disorder.1 Delirium is common yet often under-recognized among hospitalized patients, particularly in the elderly. The incidence of delirium in elderly patients on admission is estimated to be 11% to 25%, and an additional 29% to 31% of elderly patients will develop delirium during the hospitalization.2 Delirium costs the health care system an estimated $38 billion to $152 billion per year.3 It is associated with negative outcomes, such as increased new placements to nursing homes, increased mortality, increased risk of dementia, and further cognitive deterioration among patients with dementia.4-6
Despite its prevalence, delirium may be preventable in a significant percentage of hospitalized patients. Targeted intervention strategies, such as frequent reorientation, maximizing sleep, early mobilization, restricting use of psychoactive medications, and addressing hearing or vision impairment, have been demonstrated to significantly reduce the incidence of hospital delirium.7,8 To achieve these goals, we explored the use of a multimodal strategy centered on nursing education. We integrated consistent, standardized delirium screening and nonpharmacologic interventions as part of a preventative protocol to reduce the incidence of delirium in the hospital ward.
Methods
We evaluated a consecutive series of patients who were admitted to a designated hospital medical-surgical ward within a 25-week period between October 2019 and April 2020. All patients during this period who had at least 1 Confusion Assessment Method (CAM) documented by a nurse during hospitalization met our inclusion criteria. Patients who did not have a CAM documented were excluded from the analysis. Delirium was defined according to the CAM diagnostic algorithm.9
Core nursing staff regularly assigned to the ward completed a multimodal training program designed to improve recognition, documentation, and prevention of hospital delirium. Prior to the training, the nurses completed a 5-point Likert scale survey assessing their level of confidence with recognizing delirium risk factors, preventing delirium, addressing delirium, utilizing the CAM tool, and educating others about delirium. Nurses completed the same survey after the study period ended.
The training curriculum for nurses began with an online module reviewing the epidemiology and risk factors for delirium. Nurses then participated in a series of in-service training sessions led by a team of physicians, during which the CAM and nonpharmacologic delirium prevention measures were reviewed then practiced first-hand. Nursing staff attended an in-person lecture reviewing the current body of literature on delirium risk factors and effective nursing interventions. After formal training was completed, nurses were instructed to document CAM screens
Patients admitted to the hospital unit from the start of the training program (week 1) until the order set was made available (week 15) constituted our control group. The postintervention study group consisted of patients admitted for 10 weeks after the completion of the interventions (weeks 16-25). A timeline of the study events is shown in Figure 1.
Patient demographics and hospital-stay metrics determined a priori were attained via the Cedars-Sinai Enterprise Information Services core. Age, sex, medical history, and incidence of surgery with anesthesia during hospitalization were recorded. The Charlson Comorbidity Index was calculated from patients’ listed diagnoses following discharge. Primary outcomes included incidence of patients with delirium during hospitalization, percentage of tested shifts with positive CAM screens, length of hospital stay, and survival. Secondary outcomes included measures associated with delirium, including the use of chemical restraints, physical restraints, sitters, indwelling urinary catheters, and new psychiatry and neurology consults. Chemical restraints were defined as administration of a new antipsychotic medication or benzodiazepine for the specific indication of hyperactive delirium or agitation.
Statistical analysis was conducted by a statistician, using R version 3.6.3.10P values of < .05 were considered significant. Categorical variables were analyzed using Fisher’s exact test. Continuous variables were analyzed with Welch’s t-test or, for highly skewed continuous variables, with Wilcoxon rank-sum test or Mood’s median test. All patient data were anonymized and stored securely in accordance with institutional guidelines.
Our project was deemed to represent nonhuman subject research and therefore did not require Institutional Review Board (IRB) approval upon review by our institution’s IRB committee and Office of Research Compliance and Quality Improvement. Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) guidelines were adhered to (Supplementary File can be found at mdedge.com/jcomjournal).
Results
We evaluated 353 patients who met our inclusion criteria: 187 in the control group, and 166 in the postintervention group. Ten patients were readmitted to the ward after their initial discharge; only the initial admission encounters were included in our analysis. Median age, sex, median Charlson Comorbidity Index, and incidence of surgery with anesthesia during hospitalization were comparable between the control and postintervention groups and are summarized in Table 2.
In the control group, 1572 CAMs were performed, with 74 positive CAMs recorded among 27 patients with delirium. In the postintervention group, 1298 CAMs were performed, with 12 positive CAMs recorded among 7 patients with delirium (Figure 2). Primary and secondary outcomes, as well as CAM compliance measures, are summarized in Table 3.
Compared to the control group, the postintervention group had a significant decrease in the incidence of delirium during hospitalization (14.4% vs 4.2%, P = .002) and a significant decrease in the mean percentage of tested nursing shifts with 1 or more positive CAM (4.9% vs 1.1%, P = .002). Significant differences in secondary outcomes between the control and postintervention groups included median length of stay (6 days vs 4 days, P = .004), mean length of stay (8.5 days vs 5.9 days, P = .003), and use of an indwelling urinary catheter (9.1% vs 2.4%, P = .012). There was a trend towards decreased incidence of chemical restraints and psychiatry consults, which did not reach statistical significance. Differences in mortality during hospitalization, physical restraint use, and sitter use could not be assessed due to low incidence.
Compliance with nursing CAM assessments was evaluated. Compared to the control group, the postintervention group saw a significant increase in the percentage of shifts with a CAM performed (54.7% vs 69.1%, P < .001). The median and mean number of CAMs performed per patient were similar between the control and postintervention groups.
Results of nursing surveys completed before and after the training program are listed in Table 4. After training, nurses had a greater level of confidence with recognizing delirium risk factors, preventing delirium, addressing delirium, utilizing the CAM tool, and educating others about delirium.
Discussion
Our study utilized a standardized delirium assessment tool to compare patient cohorts before and after nurse-targeted training interventions on delirium recognition and prevention. Our interventions emphasized nonpharmacologic intervention strategies, which are recommended as first-line in the management of patients with delirium.11 Patients were not excluded from the analysis based on preexisting medical conditions or recent surgery with anesthesia, to allow for conditions that are representative of community hospitals. We also did not use an inclusion criterion based on age; however, the majority of our patients were greater than 70 years old, representing those at highest risk for delirium.2 Significant outcomes among patients in the postintervention group include decreased incidence of delirium, lower average length of stay, decreased indwelling urinary catheter use, and increased compliance with delirium screening by nursing staff.
While the study’s focus was primarily on delirium prevention rather than treatment, these strategies may also have conferred the benefit of reversing delirium symptoms. In addition to measuring incidence of delirium, our primary outcome of percentage of tested shifts with 1 or more positive CAM was intended to assess the overall duration in which patients had delirium during their hospitalization. The reduction in shifts with positive CAMs observed in the postintervention group is notable, given that a significant percentage of patients with hospital delirium have the potential for symptom reversibility.12
Multiple studies have shown that admitted patients who develop delirium experience prolonged hospital stays, often up to 5 to 10 days longer.12-14 The decreased incidence and duration of delirium in our postintervention group is a reasonable explanation for the observed decrease in average length of stay. Our study is in line with previously documented initiatives that show that nonpharmacologic interventions can effectively address downstream health and fiscal sequelae of hospital delirium. For example, a volunteer-based initiative named the Hospital Elder Life Program, from which elements in our order set were modeled after, demonstrated significant reductions in delirium incidence, length of stay, and health care costs.14-16 Other initiatives that focused on educational training for nurses to assess and prevent delirium have also demonstrated similar positive results.17-19 Our study provides a model for effective nursing-focused education that can be reproduced in the hospital setting.
Unlike some other studies, which identified delirium based only on physician assessments, our initiative utilized the CAM performed by floor nurses to identify delirium. While this method
Our study demonstrated an increase in the overall compliance with the CAM screening during the postintervention period, which is significant given the under-recognition of delirium by health care professionals.20 We attribute this increase to greater realized importance and a higher level of confidence from nursing staff in recognizing and addressing delirium, as supported by survey data. While the increased screening of patients should be considered a positive outcome, it also poses the possibility that the observed decrease in delirium incidence in the postintervention group was in fact due to more CAMs performed on patients without delirium. Likewise, nurses may have become more adept at recognizing true delirium, as opposed to delirium mimics, in the latter period of the study.
Perhaps the greatest limitation of our study is the variability in performing and recording CAMs, as some patients had multiple CAMs recorded while others did not have any CAMs recorded. This may have been affected in part by the increase in COVID-19 cases in our hospital towards the latter half of the study, which resulted in changes in nursing assignments as well as patient comorbidities in ways that cannot be easily quantified. Given the limited size of our patient cohorts, certain outcomes, such as the use of sitters, physical restraints, and in-hospital mortality, were unable to be assessed for changes statistically. Causative relationships between our interventions and associated outcome measures are necessarily limited in a binary comparison between control and postintervention groups.
Within these limitations, our study demonstrates promising results in core dimensions of patient care. We anticipate further quality improvement initiatives involving greater numbers of nursing staff and patients to better quantify the impact of nonpharmacologic nursing-centered interventions for preventing hospital delirium.
Conclusion
A multimodal strategy involving nursing-focused training and nonpharmacologic interventions to address hospital delirium is associated with improved patient care outcomes and nursing confidence. Nurses play an integral role in the early recognition and prevention of hospital delirium, which directly translates to reducing burdens in both patient functionality and health care costs. Education and tools to equip nurses to perform standardized delirium screening and interventions should be prioritized.
Acknowledgment: The authors thanks Olena Svetlov, NP, Oscar Abarca, Jose Chavez, and Jenita Gutierrez.
Corresponding author: Jason Ching, MD, Department of Neurology, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Los Angeles, CA 90048; [email protected].
Financial disclosures: None.
Funding: This research was supported by NIH National Center for Advancing Translational Science (NCATS) UCLA CTSI Grant Number UL1TR001881.
From the Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA (Drs. Ching, Darwish, Li, Wong, Simpson, and Funk), the Department of Anesthesia, Cedars-Sinai Medical Center, Los Angeles, CA (Keith Siegel), and the Department of Psychiatry, Cedars-Sinai Medical Center, Los Angeles, CA (Dr. Bamgbose).
Objectives: To reduce the incidence and duration of delirium among patients in a hospital ward through standardized delirium screening tools and nonpharmacologic interventions. To advance nursing-focused education on delirium-prevention strategies. To measure the efficacy of the interventions with the aim of reproducing best practices.
Background: Delirium is associated with poor patient outcomes but may be preventable in a significant percentage of hospitalized patients.
Methods: Following nursing-focused education to prevent delirium, we prospectively evaluated patient care outcomes in a consecutive series of patients who were admitted to a hospital medical-surgical ward within a 25-week period. All patients who had at least 1 Confusion Assessment Method (CAM) documented by a nurse during hospitalization met our inclusion criteria (N = 353). Standards for Quality Improvement Reporting Excellence guidelines were adhered to.
Results: There were 187 patients in the control group, and 166 in the postintervention group. Compared to the control group, the postintervention group had a significant decrease in the incidence of delirium during hospitalization (14.4% vs 4.2%) and a significant decrease in the mean percentage of tested nursing shifts with 1 or more positive CAM (4.9% vs 1.1%). Significant differences in secondary outcomes between the control and postintervention groups included median length of stay (6 days vs 4 days), mean length of stay (8.5 days vs 5.9 days), and use of an indwelling urinary catheter (9.1% vs 2.4%).
Conclusion: A multimodal strategy involving nursing-focused training and nonpharmacologic interventions to address hospital delirium is associated with improved patient care outcomes and nursing confidence. Nurses play an integral role in the early recognition and prevention of hospital delirium, which directly translates to reducing burdens in both patient functionality and health care costs.
Delirium is a disorder characterized by inattention and acute changes in cognition. It is defined by the American Psychiatric Association’s fifth edition of the Diagnostic and Statistical Manual of Mental Disorders as a disturbance in attention, awareness, and cognition over hours to a few days that is not better explained by a preexisting, established, or other evolving neurocognitive disorder.1 Delirium is common yet often under-recognized among hospitalized patients, particularly in the elderly. The incidence of delirium in elderly patients on admission is estimated to be 11% to 25%, and an additional 29% to 31% of elderly patients will develop delirium during the hospitalization.2 Delirium costs the health care system an estimated $38 billion to $152 billion per year.3 It is associated with negative outcomes, such as increased new placements to nursing homes, increased mortality, increased risk of dementia, and further cognitive deterioration among patients with dementia.4-6
Despite its prevalence, delirium may be preventable in a significant percentage of hospitalized patients. Targeted intervention strategies, such as frequent reorientation, maximizing sleep, early mobilization, restricting use of psychoactive medications, and addressing hearing or vision impairment, have been demonstrated to significantly reduce the incidence of hospital delirium.7,8 To achieve these goals, we explored the use of a multimodal strategy centered on nursing education. We integrated consistent, standardized delirium screening and nonpharmacologic interventions as part of a preventative protocol to reduce the incidence of delirium in the hospital ward.
Methods
We evaluated a consecutive series of patients who were admitted to a designated hospital medical-surgical ward within a 25-week period between October 2019 and April 2020. All patients during this period who had at least 1 Confusion Assessment Method (CAM) documented by a nurse during hospitalization met our inclusion criteria. Patients who did not have a CAM documented were excluded from the analysis. Delirium was defined according to the CAM diagnostic algorithm.9
Core nursing staff regularly assigned to the ward completed a multimodal training program designed to improve recognition, documentation, and prevention of hospital delirium. Prior to the training, the nurses completed a 5-point Likert scale survey assessing their level of confidence with recognizing delirium risk factors, preventing delirium, addressing delirium, utilizing the CAM tool, and educating others about delirium. Nurses completed the same survey after the study period ended.
The training curriculum for nurses began with an online module reviewing the epidemiology and risk factors for delirium. Nurses then participated in a series of in-service training sessions led by a team of physicians, during which the CAM and nonpharmacologic delirium prevention measures were reviewed then practiced first-hand. Nursing staff attended an in-person lecture reviewing the current body of literature on delirium risk factors and effective nursing interventions. After formal training was completed, nurses were instructed to document CAM screens
Patients admitted to the hospital unit from the start of the training program (week 1) until the order set was made available (week 15) constituted our control group. The postintervention study group consisted of patients admitted for 10 weeks after the completion of the interventions (weeks 16-25). A timeline of the study events is shown in Figure 1.
Patient demographics and hospital-stay metrics determined a priori were attained via the Cedars-Sinai Enterprise Information Services core. Age, sex, medical history, and incidence of surgery with anesthesia during hospitalization were recorded. The Charlson Comorbidity Index was calculated from patients’ listed diagnoses following discharge. Primary outcomes included incidence of patients with delirium during hospitalization, percentage of tested shifts with positive CAM screens, length of hospital stay, and survival. Secondary outcomes included measures associated with delirium, including the use of chemical restraints, physical restraints, sitters, indwelling urinary catheters, and new psychiatry and neurology consults. Chemical restraints were defined as administration of a new antipsychotic medication or benzodiazepine for the specific indication of hyperactive delirium or agitation.
Statistical analysis was conducted by a statistician, using R version 3.6.3.10P values of < .05 were considered significant. Categorical variables were analyzed using Fisher’s exact test. Continuous variables were analyzed with Welch’s t-test or, for highly skewed continuous variables, with Wilcoxon rank-sum test or Mood’s median test. All patient data were anonymized and stored securely in accordance with institutional guidelines.
Our project was deemed to represent nonhuman subject research and therefore did not require Institutional Review Board (IRB) approval upon review by our institution’s IRB committee and Office of Research Compliance and Quality Improvement. Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) guidelines were adhered to (Supplementary File can be found at mdedge.com/jcomjournal).
Results
We evaluated 353 patients who met our inclusion criteria: 187 in the control group, and 166 in the postintervention group. Ten patients were readmitted to the ward after their initial discharge; only the initial admission encounters were included in our analysis. Median age, sex, median Charlson Comorbidity Index, and incidence of surgery with anesthesia during hospitalization were comparable between the control and postintervention groups and are summarized in Table 2.
In the control group, 1572 CAMs were performed, with 74 positive CAMs recorded among 27 patients with delirium. In the postintervention group, 1298 CAMs were performed, with 12 positive CAMs recorded among 7 patients with delirium (Figure 2). Primary and secondary outcomes, as well as CAM compliance measures, are summarized in Table 3.
Compared to the control group, the postintervention group had a significant decrease in the incidence of delirium during hospitalization (14.4% vs 4.2%, P = .002) and a significant decrease in the mean percentage of tested nursing shifts with 1 or more positive CAM (4.9% vs 1.1%, P = .002). Significant differences in secondary outcomes between the control and postintervention groups included median length of stay (6 days vs 4 days, P = .004), mean length of stay (8.5 days vs 5.9 days, P = .003), and use of an indwelling urinary catheter (9.1% vs 2.4%, P = .012). There was a trend towards decreased incidence of chemical restraints and psychiatry consults, which did not reach statistical significance. Differences in mortality during hospitalization, physical restraint use, and sitter use could not be assessed due to low incidence.
Compliance with nursing CAM assessments was evaluated. Compared to the control group, the postintervention group saw a significant increase in the percentage of shifts with a CAM performed (54.7% vs 69.1%, P < .001). The median and mean number of CAMs performed per patient were similar between the control and postintervention groups.
Results of nursing surveys completed before and after the training program are listed in Table 4. After training, nurses had a greater level of confidence with recognizing delirium risk factors, preventing delirium, addressing delirium, utilizing the CAM tool, and educating others about delirium.
Discussion
Our study utilized a standardized delirium assessment tool to compare patient cohorts before and after nurse-targeted training interventions on delirium recognition and prevention. Our interventions emphasized nonpharmacologic intervention strategies, which are recommended as first-line in the management of patients with delirium.11 Patients were not excluded from the analysis based on preexisting medical conditions or recent surgery with anesthesia, to allow for conditions that are representative of community hospitals. We also did not use an inclusion criterion based on age; however, the majority of our patients were greater than 70 years old, representing those at highest risk for delirium.2 Significant outcomes among patients in the postintervention group include decreased incidence of delirium, lower average length of stay, decreased indwelling urinary catheter use, and increased compliance with delirium screening by nursing staff.
While the study’s focus was primarily on delirium prevention rather than treatment, these strategies may also have conferred the benefit of reversing delirium symptoms. In addition to measuring incidence of delirium, our primary outcome of percentage of tested shifts with 1 or more positive CAM was intended to assess the overall duration in which patients had delirium during their hospitalization. The reduction in shifts with positive CAMs observed in the postintervention group is notable, given that a significant percentage of patients with hospital delirium have the potential for symptom reversibility.12
Multiple studies have shown that admitted patients who develop delirium experience prolonged hospital stays, often up to 5 to 10 days longer.12-14 The decreased incidence and duration of delirium in our postintervention group is a reasonable explanation for the observed decrease in average length of stay. Our study is in line with previously documented initiatives that show that nonpharmacologic interventions can effectively address downstream health and fiscal sequelae of hospital delirium. For example, a volunteer-based initiative named the Hospital Elder Life Program, from which elements in our order set were modeled after, demonstrated significant reductions in delirium incidence, length of stay, and health care costs.14-16 Other initiatives that focused on educational training for nurses to assess and prevent delirium have also demonstrated similar positive results.17-19 Our study provides a model for effective nursing-focused education that can be reproduced in the hospital setting.
Unlike some other studies, which identified delirium based only on physician assessments, our initiative utilized the CAM performed by floor nurses to identify delirium. While this method
Our study demonstrated an increase in the overall compliance with the CAM screening during the postintervention period, which is significant given the under-recognition of delirium by health care professionals.20 We attribute this increase to greater realized importance and a higher level of confidence from nursing staff in recognizing and addressing delirium, as supported by survey data. While the increased screening of patients should be considered a positive outcome, it also poses the possibility that the observed decrease in delirium incidence in the postintervention group was in fact due to more CAMs performed on patients without delirium. Likewise, nurses may have become more adept at recognizing true delirium, as opposed to delirium mimics, in the latter period of the study.
Perhaps the greatest limitation of our study is the variability in performing and recording CAMs, as some patients had multiple CAMs recorded while others did not have any CAMs recorded. This may have been affected in part by the increase in COVID-19 cases in our hospital towards the latter half of the study, which resulted in changes in nursing assignments as well as patient comorbidities in ways that cannot be easily quantified. Given the limited size of our patient cohorts, certain outcomes, such as the use of sitters, physical restraints, and in-hospital mortality, were unable to be assessed for changes statistically. Causative relationships between our interventions and associated outcome measures are necessarily limited in a binary comparison between control and postintervention groups.
Within these limitations, our study demonstrates promising results in core dimensions of patient care. We anticipate further quality improvement initiatives involving greater numbers of nursing staff and patients to better quantify the impact of nonpharmacologic nursing-centered interventions for preventing hospital delirium.
Conclusion
A multimodal strategy involving nursing-focused training and nonpharmacologic interventions to address hospital delirium is associated with improved patient care outcomes and nursing confidence. Nurses play an integral role in the early recognition and prevention of hospital delirium, which directly translates to reducing burdens in both patient functionality and health care costs. Education and tools to equip nurses to perform standardized delirium screening and interventions should be prioritized.
Acknowledgment: The authors thanks Olena Svetlov, NP, Oscar Abarca, Jose Chavez, and Jenita Gutierrez.
Corresponding author: Jason Ching, MD, Department of Neurology, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Los Angeles, CA 90048; [email protected].
Financial disclosures: None.
Funding: This research was supported by NIH National Center for Advancing Translational Science (NCATS) UCLA CTSI Grant Number UL1TR001881.
1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th edition. American Psychiatric Association; 2013.
2. Vasilevskis EE, Han JH, Hughes CG, et al. Epidemiology and risk factors for delirium across hospital settings. Best Pract Res Clin Anaesthesiol. 2012;26(3):277-287. doi:10.1016/j.bpa.2012.07003
3. Leslie DL, Marcantonio ER, Zhang Y, et al. One-year health care costs associated with delirium in the elderly population. Arch Intern Med. 2008;168(1):27-32. doi:10.1001/archinternmed.2007.4
4. McCusker J, Cole M, Abrahamowicz M, et al. Delirium predicts 12-month mortality. Arch Intern Med. 2002;162(4):457-463. doi:10.1001/archinte.162.4.457
5. Witlox J, Eurelings LS, de Jonghe JF, et al. Delirium in elderly patients and the risk of postdischarge mortality, institutionalization, and dementia: a meta-analysis. JAMA. 2010;304(4):443-451. doi:10.1001/jama.2010.1013
6. Gross AL, Jones RN, Habtemariam DA, et al. Delirium and long-term cognitive trajectory among persons with dementia. Arch Intern Med. 2012;172(17):1324-1331. doi:10.1001/archinternmed.2012.3203
7. Inouye SK. Prevention of delirium in hospitalized older patients: risk factors and targeted intervention strategies. Ann Med. 2000;32(4):257-263. doi:10.3109/07853890009011770
8. Siddiqi N, Harrison JK, Clegg A, et al. Interventions for preventing delirium in hospitalised non-ICU patients. Cochrane Database Syst Rev. 2016;3:CD005563. doi:10.1002/14651858.CD005563.pub3
9. Inouye SK, van Dyck CH, Alessi CA, et al. Clarifying confusion: the confusion assessment method. A new method for detection of delirium. Ann Intern Med. 1990;113(12):941-948. doi:10.7326/0003-4819-113-12-941
10. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; 2017.
11. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24
12. Siddiqi N, House AO, Holmes JD. Occurrence and outcome of delirium in medical in-patients: a systematic literature review. Age Ageing. 2006;35(4):350-364. doi:10.1093/ageing/afl005
13. Ely EW, Shintani A, Truman B, et al. Delirium as a predictor of mortality in mechanically ventilated patients in the intensive care unit. JAMA. 2004;291(14):1753-1762. doi:10.1001/jama.291.14.1753
14. Chen CC, Lin MT, Tien YW, et al. Modified Hospital Elder Life Program: effects on abdominal surgery patients. J Am Coll Surg. 2011;213(2):245-252. doi:10.1016/j.jamcollsurg.2011.05.004
15. Zaubler TS, Murphy K, Rizzuto L, et al. Quality improvement and cost savings with multicomponent delirium interventions: replication of the Hospital Elder Life Program in a community hospital. Psychosomatics. 2013;54(3):219-226. doi:10.1016/j.psym.2013.01.010
16. Rubin FH, Neal K, Fenlon K, et al. Sustainability and scalability of the Hospital Elder Life Program at a community hospital. J Am Geriatr Soc. 2011;59(2):359-365. doi:10.1111/j.1532-5415.2010.03243.x
17. Milisen K, Foreman MD, Abraham IL, et al. A nurse-led interdisciplinary intervention program for delirium in elderly hip-fracture patients. J Am Geriatr Soc. 2001;49(5):523-532. doi:10.1046/j.1532-5415.2001.49109.x
18. Lundström M, Edlund A, Karlsson S, et al. A multifactorial intervention program reduces the duration of delirium, length of hospitalization, and mortality in delirious patients. J Am Geriatr Soc. 2005;53(4):622-628. doi:10.1111/j.1532-5415.2005.53210.x
19. Tabet N, Hudson S, Sweeney V, et al. An educational intervention can prevent delirium on acute medical wards. Age Ageing. 2005;34(2):152-156. doi:10.1093/ageing/afi0320. Han JH, Zimmerman EE, Cutler N, et al. Delirium in older emergency department patients: recognition, risk factors, and psychomotor subtypes. Acad Emerg Med. 2009;16(3):193-200. doi:10.1111/j.1553-2712.2008.00339.x
1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th edition. American Psychiatric Association; 2013.
2. Vasilevskis EE, Han JH, Hughes CG, et al. Epidemiology and risk factors for delirium across hospital settings. Best Pract Res Clin Anaesthesiol. 2012;26(3):277-287. doi:10.1016/j.bpa.2012.07003
3. Leslie DL, Marcantonio ER, Zhang Y, et al. One-year health care costs associated with delirium in the elderly population. Arch Intern Med. 2008;168(1):27-32. doi:10.1001/archinternmed.2007.4
4. McCusker J, Cole M, Abrahamowicz M, et al. Delirium predicts 12-month mortality. Arch Intern Med. 2002;162(4):457-463. doi:10.1001/archinte.162.4.457
5. Witlox J, Eurelings LS, de Jonghe JF, et al. Delirium in elderly patients and the risk of postdischarge mortality, institutionalization, and dementia: a meta-analysis. JAMA. 2010;304(4):443-451. doi:10.1001/jama.2010.1013
6. Gross AL, Jones RN, Habtemariam DA, et al. Delirium and long-term cognitive trajectory among persons with dementia. Arch Intern Med. 2012;172(17):1324-1331. doi:10.1001/archinternmed.2012.3203
7. Inouye SK. Prevention of delirium in hospitalized older patients: risk factors and targeted intervention strategies. Ann Med. 2000;32(4):257-263. doi:10.3109/07853890009011770
8. Siddiqi N, Harrison JK, Clegg A, et al. Interventions for preventing delirium in hospitalised non-ICU patients. Cochrane Database Syst Rev. 2016;3:CD005563. doi:10.1002/14651858.CD005563.pub3
9. Inouye SK, van Dyck CH, Alessi CA, et al. Clarifying confusion: the confusion assessment method. A new method for detection of delirium. Ann Intern Med. 1990;113(12):941-948. doi:10.7326/0003-4819-113-12-941
10. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; 2017.
11. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24
12. Siddiqi N, House AO, Holmes JD. Occurrence and outcome of delirium in medical in-patients: a systematic literature review. Age Ageing. 2006;35(4):350-364. doi:10.1093/ageing/afl005
13. Ely EW, Shintani A, Truman B, et al. Delirium as a predictor of mortality in mechanically ventilated patients in the intensive care unit. JAMA. 2004;291(14):1753-1762. doi:10.1001/jama.291.14.1753
14. Chen CC, Lin MT, Tien YW, et al. Modified Hospital Elder Life Program: effects on abdominal surgery patients. J Am Coll Surg. 2011;213(2):245-252. doi:10.1016/j.jamcollsurg.2011.05.004
15. Zaubler TS, Murphy K, Rizzuto L, et al. Quality improvement and cost savings with multicomponent delirium interventions: replication of the Hospital Elder Life Program in a community hospital. Psychosomatics. 2013;54(3):219-226. doi:10.1016/j.psym.2013.01.010
16. Rubin FH, Neal K, Fenlon K, et al. Sustainability and scalability of the Hospital Elder Life Program at a community hospital. J Am Geriatr Soc. 2011;59(2):359-365. doi:10.1111/j.1532-5415.2010.03243.x
17. Milisen K, Foreman MD, Abraham IL, et al. A nurse-led interdisciplinary intervention program for delirium in elderly hip-fracture patients. J Am Geriatr Soc. 2001;49(5):523-532. doi:10.1046/j.1532-5415.2001.49109.x
18. Lundström M, Edlund A, Karlsson S, et al. A multifactorial intervention program reduces the duration of delirium, length of hospitalization, and mortality in delirious patients. J Am Geriatr Soc. 2005;53(4):622-628. doi:10.1111/j.1532-5415.2005.53210.x
19. Tabet N, Hudson S, Sweeney V, et al. An educational intervention can prevent delirium on acute medical wards. Age Ageing. 2005;34(2):152-156. doi:10.1093/ageing/afi031
20. Han JH, Zimmerman EE, Cutler N, et al. Delirium in older emergency department patients: recognition, risk factors, and psychomotor subtypes. Acad Emerg Med. 2009;16(3):193-200. doi:10.1111/j.1553-2712.2008.00339.x
Social media use associated with depression in adults
Use of social media has been linked to increased anxiety and depression, as well as reduced well-being in adolescents and young adults, but similar associations in older adults have not been well studied, and longitudinal data are lacking, Ron H. Perlis, MD, of Massachusetts General Hospital, Boston, and colleagues wrote in their paper, which was published in JAMA Network Open.
To examine the association between social media use and depressive symptoms in older adults, the researchers reviewed data from 13 waves of an internet survey conducted each month between May 2020 and May 2021. They analyzed responses from 5,395 individuals aged 18 years and older (mean age, 56 years) who had minimal or no depressive symptoms at baseline, according to scores on the nine-item Patient Health Questionnaire (PHQ-9).
Overall, 8.9% of the respondents reported a worsening of 5 points or more on the PHQ-9 score on a follow-up survey, which was the primary outcome. Participants who reported using social media platforms Snapchat, Facebook, or TikTok were significantly more likely to report increased depressive symptoms, compared with those who did not report use of social media. The fully adjusted odds ratio was largest for Snapchat (aOR, 1.53), followed by Facebook (aOR, 1.42), and TikTok (aOR, 1.39).
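As a concrete illustration of the outcome definition, the following is a minimal sketch, assuming hypothetical PHQ-9 values and a hypothetical 2x2 table, of how the 5-or-more-point worsening flag and a crude (unadjusted) odds ratio could be computed; the study's fully adjusted odds ratios came from regression models and are not reproduced here.

```python
# Illustrative sketch only: all inputs below are hypothetical, not study data.

def meets_primary_outcome(baseline_phq9: int, followup_phq9: int, threshold: int = 5) -> bool:
    """Primary outcome: worsening of 5 points or more on the 9-item PHQ-9."""
    return (followup_phq9 - baseline_phq9) >= threshold

def crude_odds_ratio(exposed_cases: int, exposed_noncases: int,
                     unexposed_cases: int, unexposed_noncases: int) -> float:
    """Unadjusted odds ratio from a 2x2 table (no covariate adjustment)."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# A baseline PHQ-9 of 3 rising to 9 at follow-up crosses the 5-point threshold.
print(meets_primary_outcome(3, 9))                     # True
# Hypothetical table: platform users vs. non-users by worsening status.
print(round(crude_odds_ratio(120, 880, 90, 910), 2))   # 1.38 (illustration only)
```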
Adjusting for use of television and internet news sources covering topics such as COVID-19 changed the association for Snapchat: the aOR decreased from 1.53 to 1.12 when the news source terms were included in the analysis. The TikTok and Facebook associations remained similar.
When the results were further stratified by age, use of TikTok and Snapchat was associated with depressive symptoms in those aged 35 years and older, but not in those younger than 35 years. However, the opposite pattern emerged for Facebook; use was associated with depressive symptoms for individuals younger than 35 years, but not in those aged 35 years and older (aOR, 2.60 vs. aOR, 1.12).
The association between increased self-reported depressive symptoms and use of certain social media platforms was not impacted by baseline social support or face-to-face interactions, the researchers noted.
Family physician was surprised results weren’t more significant
In the current study, “I was honestly surprised the results weren’t more significant,” Mary Ann Dakkak, MD, of Boston University, said in an interview. “That said, social media uses during the COVID pandemic may have been a necessary social outlet and form of connection for many people who were otherwise isolated.”
To still see a significant increase in depression when social media could have been a positive force may suggest a heavier impact during “normal” times, she added.
“It is not surprising that what we see in youth is shown among adults,” noted Dr. Dakkak, who was not involved with this study. “I always tell my patients that what is good for their children is good for the adults too, and vice versa.
“We expect to see outcomes of this on youth and adults who have been more isolated, who have used more screen time for learning, work, connection and boredom, in the near future,” she said. “The complex nature of why social media may have been used more heavily for connection during a time when in-person meetings were not possible may be a heavy confounder as the typical profile of heavy social media users may have differed during the COVID shutdowns.”
Psychiatrist: Balance benefits of social media with mental health risks
The current study was likely conducted before the recent news on “hidden” Facebook data and the implications that Facebook knew it was contributing to worsened mental health in teens, particularly around self-esteem, Jessica “Jessi” Gold, MD, a psychiatrist at Washington University, St. Louis, said in an interview.
“If you look more specifically at other studies, however, the data around social media and mental health is constantly varied, with some showing benefits and some showing negatives, and none conclusively suggesting either way,” said Dr. Gold, who also was not involved with the new research. “More data are needed, especially longitudinally and on a broader age group, to understand social media’s impact on mental health over time.
“It is also even more important in the wake of COVID-19, as so many people have turned to social media as a primary source of social support and connection, and are using it even more than before,” she emphasized.
In the current study, “I think the most interesting information is that, for TikTok and Snapchat, the effects seemed to be more pronounced in those older than 35 years who used social media,” said Dr. Gold.
What this study leaves unanswered is “whether people who might develop depression are simply more prone to use social media in the first place, such as to seek out social support,” Dr. Gold said. “Also, we don’t know anything about how long they are using social media or what they are using it for, which to me is important for understanding more about the nuance of the relationship with mental health and social media.”
Experts advise clinicians to discuss social media with patients
This new research suggests that clinicians should be talking to their patients about how social media impacts their emotional reactions, as well as their sleep, Dr. Gold said.
“Patients should be asking themselves how they are feeling when they are on social media and not using it before sleep. They should also be considering time limits and how to effectively use social media while taking care of their mental health,” she said. This conversation should be had with any patient, of any age, who uses social media, not only with teenagers.
“This is also a conversation about moderation, and knowing that individuals may feel they benefit from social media, that they should balance these benefits with potential mental health risks,” she said.
“Studies such as this one shed light onto why social media consumption should be at least a point of discussion with our patients,” said Dr. Dakkak.
She advised clinicians to ask and listen to patients and their families when it comes to screen time habits. “Whenever I see a patient with mood symptoms, I ask about their habits – eating, sleeping, socializing, screen time – including phone time. I ask about the family dynamics around screen time.
“I’ve added screen time to my adolescent assessment. Discussing safe use of cell phones and social media can have a significant impact on adolescent behavior and wellbeing, and parents are very thankful for the help,” she said. “This study encourages us to add screen time to the assessments we do at all adult ages, especially if mood symptoms exist,” Dr. Dakkak emphasized.
Suggestions for future research
Dr. Dakkak added that more areas for research include the differences in the impact of social media use on content creators versus content consumers. Also, “I would like to see research using the real data of use, the times of use, interruptions in sleep and use, possible confounding variables to include exercise, presence of intimate relationship and school/job performance.”
Given the many confounding variables, more controlled studies are needed to examine mental health outcomes associated with social media use, how long people use it, and the impact of interventions such as time limits, Dr. Gold said.
“We can’t ignore the benefits of social media, such as helping those with social anxiety, finding peer support, and normalizing mental health, and those factors need to be studied and measured more effectively as well,” she said.
Take-home message
It is important to recognize that the current study represents a correlation, not causality, said Dr. Gold. In addressing the issues of how social media impact mental health, “as always, the hardest thing is that many people get their news from social media, and often get social support from social media, so there has to be a balance of not removing social media completely, but of helping people see how it affects their mental health and how to find balance.”
The study findings were limited by several factors, including the inability to control for all potential confounders, the inability to assess the nature of social media use, and the lack of dose-response data, the researchers noted. Although the surveys in the current study were not specific to COVID-19, the effects of social media on depression may be specific to the content, and the findings may not generalize beyond the COVID-19 pandemic period.
Approximately two-thirds (66%) of the study participants identified as female, and 76% as White; 11% as Black; 6% as Asian; 5% as Hispanic; and 2% as American Indian or Alaska Native, Pacific Islander or Native Hawaiian, or other.
The National Institute of Mental Health provided a grant for the study to Dr. Perlis, who disclosed consulting fees from various companies and equity in Psy Therapeutics. The study’s lead author also serves as associate editor for JAMA Network Open, but was not involved in the decision process for publication of this study. Dr. Gold disclosed conducting a conference for Johnson & Johnson about social media and health care workers, and was on the advisory council.
FROM JAMA NETWORK OPEN
Oakland score identifies patients with lower GI bleed at low risk for adverse events
Background: The Oakland score was initially designed to be used in patients presenting with lower gastrointestinal bleeding (LGIB) in the urgent, emergent, or primary care setting to help predict the risk of readmission and determine whether outpatient management is feasible. National guidelines in the United Kingdom have recommended use of the Oakland score for the triage of patients with acute LGIB despite limited external validation. This study aimed to externally validate the Oakland score in a large U.S. population and to compare its performance at two thresholds.
Study design: Retrospective observational study.
Setting: 140 hospitals across the United States.
Synopsis: In this prognostic study, 38,067 patients admitted to the hospital were identified retrospectively using ICD-10 codes consistent with a diagnosis of LGIB. The Oakland score consists of seven variables: age, sex, prior hospitalization with LGIB, digital rectal exam results, heart rate, systolic blood pressure, and hemoglobin concentration. The primary outcome was safe discharge from the hospital, defined as the absence of in-hospital rebleeding, RBC transfusion, therapeutic colonoscopy, mesenteric embolization or laparotomy for bleeding, in-hospital death, or readmission with subsequent LGIB within 28 days. In total, 47.9% of the identified patients experienced no adverse outcomes and were classified as meeting criteria for safe discharge. In addition, 8.7% of patients scored 8 points or fewer, with a sensitivity of 98.4% and a specificity of 16.0% for safe discharge. A sensitivity of 96% was maintained when the threshold was increased to 10 points or fewer, with a specificity of 31.9%, suggesting the threshold can be raised while still maintaining adequate sensitivity. The study suggests that, by using an Oakland score threshold of 8, hospital admission may be avoided in low-risk patients, leading to savings of at least $44.5 million, and even more if the threshold is increased to 10. The low specificity is a limitation of the score, as some patients considered to be at risk for adverse events might have been safely discharged and managed as outpatients, avoiding hospitalization.
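To show how a threshold rule like this works in practice, here is a minimal sketch, assuming the total Oakland score has already been computed from its seven components (the individual point assignments are not reproduced here), that classifies a patient as a candidate for safe discharge at either cutoff; the function name is illustrative, not the study's.

```python
# Minimal triage sketch. The total Oakland score is assumed to be computed elsewhere
# from its seven components (age, sex, prior LGIB admission, digital rectal exam
# findings, heart rate, systolic blood pressure, hemoglobin); point values not shown.

def low_risk_for_safe_discharge(oakland_score: int, threshold: int = 8) -> bool:
    """True when the total score falls at or below the chosen triage cutoff.

    The validation reported sensitivity 98.4% / specificity 16.0% at a cutoff of 8,
    and sensitivity 96% / specificity 31.9% at a cutoff of 10.
    """
    return oakland_score <= threshold

print(low_risk_for_safe_discharge(7))                  # True at the <=8 cutoff
print(low_risk_for_safe_discharge(9))                  # False at the <=8 cutoff
print(low_risk_for_safe_discharge(9, threshold=10))    # True if the cutoff is raised to 10
```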
Bottom line: The Oakland score was externally validated for use in assessing risk of adverse outcomes in patients with LGIB and had a high sensitivity but low specificity for identifying low-risk patients.
Citation: Oakland K et al. External validation of the Oakland score to assess safe hospital discharge among adult patients with acute lower gastrointestinal bleeding in the US. JAMA Netw Open. 2020 Jul 1;3:e209630. doi:
Dr. Steker is a hospitalist at Northwestern Memorial Hospital and instructor of medicine, Feinberg School of Medicine, both in Chicago.
Prevalence of undiagnosed vitiligo is ‘remarkably high’
A new population-based survey study estimates the prevalence of vitiligo among U.S. adults at 0.76% to 1.11% and suggests that a substantial proportion of cases have never been diagnosed.
“The remarkably high number of participants with undiagnosed vitiligo” indicates a need for “the development and validation of teledermatology apps that allow for potential diagnosis,” Kavita Gandhi, MS, of the patient and health impact group at Pfizer in Collegeville, Pa., and associates said in JAMA Dermatology.
The estimated prevalence range of 0.76%-1.11% represents 1.9 million to 2.8 million adults with vitiligo in the general population, based on responses from 40,888 participants surveyed between Dec. 30, 2019, and March 11, 2020, and further physician evaluation of photos uploaded by 113 respondents, they explained. The investigators surveyed a representative sample of the U.S. population aged 18-85 years.
A prior vitiligo diagnosis was reported by 314 participants, and another 249 screened positive through the survey, for a self-reported overall prevalence of 1.38% in the adult population and a previously undiagnosed prevalence of 0.61%. The physician adjudication brought the overall prevalence down to 0.76% and the undiagnosed prevalence to 0.29%. “These findings suggest that up to 40% of adults with vitiligo in the U.S. may be undiagnosed,” the investigators wrote.
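The self-reported figures follow directly from the counts above; the short worked example below repeats that arithmetic (the counts come from the article, and the final ratio is a rough restatement of the “up to 40%” estimate).

```python
# Worked arithmetic for the self-reported prevalence estimates (counts from the article).
n_surveyed = 40_888
previously_diagnosed = 314
screened_positive_undiagnosed = 249

overall = (previously_diagnosed + screened_positive_undiagnosed) / n_surveyed
undiagnosed = screened_positive_undiagnosed / n_surveyed
print(f"{overall:.2%}")       # ~1.38% overall self-reported prevalence
print(f"{undiagnosed:.2%}")   # ~0.61% previously undiagnosed

# After physician adjudication the estimates fell to 0.76% overall and 0.29% undiagnosed,
# so the undiagnosed share is roughly 0.29 / 0.76.
print(f"{0.29 / 0.76:.0%}")   # ~38%, consistent with 'up to 40% undiagnosed'
```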
Survey questions covering the laterality of lesions broke the 1.38% overall prevalence down to 0.77% nonsegmental vitiligo (self-reported as bilateral) and 0.61% segmental (unilateral). The 0.76% overall prevalence provided by the three dermatologist reviewers worked out to 0.58% classified as nonsegmental and 0.18% as segmental, Ms. Gandhi and associates said.
“The distinction between segmental and nonsegmental vitiligo is of prime importance [since] patients are usually concerned by the spreading of the disease and its unpredictable course, which is the hallmark of nonsegmental vitiligo,” the researchers noted.
The analysis was the first, to the authors’ knowledge, to identify several trends among the undiagnosed population. The proportion of nonwhite adults was higher in the undiagnosed group (40.2%) than among those with a diagnosis (31.5%), as was Hispanic, Latino, or Spanish origin (21.3% vs. 15.3%). Unilateral presentation was seen in 54.2% of the undiagnosed adults and 37.3% of those with diagnosed vitiligo, they reported.
The study was sponsored by Pfizer, which employs several of the investigators. Two of the investigators disclosed multiple conflicts of interest involving other companies.
FROM JAMA DERMATOLOGY
Predicting cardiac shock mortality in the ICU
Adding echocardiographic assessment of biventricular dysfunction improved the accuracy of prognostication among patients with cardiac shock (CS) in the cardiac intensive care unit.
In patients in the cardiac ICU with CS, biventricular dysfunction (BVD), as assessed using transthoracic echocardiography, improves clinical risk stratification when combined with the Society for Cardiovascular Angiography and Interventions shock stage.
No improvement in risk stratification was seen in patients with left or right ventricular systolic dysfunction (LVSD or RVSD) alone, according to an article published in the journal Chest.
Ventricular systolic dysfunction is commonly seen in patients who have suffered cardiac shock, most often on the left side. Although echocardiography is often performed on these patients during diagnosis, previous studies looking at ventricular dysfunction used invasive hemodynamic parameters, which made it challenging to incorporate their findings into general cardiac ICU practice.
Pinning down cardiac shock
Treatment of acute MI and heart failure has improved greatly, particularly with the implementation of primary percutaneous coronary intervention (PCI) for ST-segment elevation MI. This has reduced the rate of subsequent heart failure, but cardiac shock can still occur before or after the procedure, with a 30-day mortality of 30%-40%, an outcome that has not improved in the last 20 years.
Efforts to improve cardiac shock outcomes through percutaneous mechanical circulatory support devices have been hindered by the fact that CS patients are heterogeneous, and prognosis may depend on a range of factors.
The SCAI classification was developed as a five-stage system for CS to improve communication of patient status and to better differentiate among patients participating in clinical trials. It does not include measures of ventricular dysfunction.
Simple measure boosts prognosis accuracy
The new work adds an additional layer to the SCAI shock stage. “Adding echocardiography allows discrimination between levels of risk for each SCAI stage,” said David Baran, MD, who was asked for comment. Dr. Baran was the lead author on the original SCAI study and is system director of advanced heart failure at Sentara Heart Hospital, as well as a professor of medicine at Eastern Virginia Medical School, both in Norfolk.
The work also underscores the value of repeated measures of prognosis during a patient’s stay in the ICU. “If a patient is not improving, it may prompt a consideration of whether transfer or consultation with a tertiary center may be of value. Conversely, if a patient doesn’t have high-risk features and is responding to therapy, it is reassuring to have data supporting low mortality with that care plan,” said Dr. Baran.
The study may be biased, since not every patient undergoes an echocardiogram. Still, “the authors make a convincing case that biventricular dysfunction is a powerful negative marker across the spectrum of SCAI stages,” said Dr. Baran.
Echocardiography is simple and generally available, and some devices are even portable and can be used with a smartphone. But patient body size can interfere with echocardiography, as can the presence of a ventilator or multiple surgical dressings. “The key advantage of echo is that it is completely noninvasive and can be brought to the patient in the ICU, unlike other testing which involves moving the patient to the testing environment,” said Dr. Baran.
The researchers analyzed data from 3,158 patients admitted to the cardiac ICU at the Mayo Clinic Hospital St. Mary’s Campus in Rochester, Minn., 51.8% of whom had acute coronary syndromes. They defined LVSD as a left ventricular ejection fraction less than 40%, and RVSD as at least moderate systolic dysfunction determined by semiquantitative measurement. BVD constituted the presence of both LVSD and RVSD. They examined the association of in-hospital mortality with these parameters combined with SCAI stage.
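Restating those definitions in code, the sketch below flags LVSD, RVSD, and BVD from an ejection fraction and a semiquantitative right ventricular grade; the grade labels are an assumed encoding, not the study's variable names.

```python
# Definitions as described above: LVSD = LVEF < 40%; RVSD = at least moderate systolic
# dysfunction on semiquantitative grading; BVD = both present.
# The grade labels ('none', 'borderline', 'mild', 'moderate', 'severe') are assumed.

def classify_dysfunction(lvef_percent: float, rv_grade: str) -> dict:
    lvsd = lvef_percent < 40
    rvsd = rv_grade in ("moderate", "severe")
    return {"LVSD": lvsd, "RVSD": rvsd, "BVD": lvsd and rvsd}

print(classify_dysfunction(35, "moderate"))  # {'LVSD': True, 'RVSD': True, 'BVD': True}
print(classify_dysfunction(55, "mild"))      # {'LVSD': False, 'RVSD': False, 'BVD': False}
```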
BVD a risk factor
Overall in-hospital mortality was 10%. A total of 22.3% of patients had LVSD and 11.8% had RVSD; 16.4% had moderate or greater BVD. There was no association between LVSD or RVSD alone and in-hospital mortality after adjustment for SCAI stage, but there was a significant association for BVD (adjusted odds ratio, 1.815; P = .0023). When combined with the SCAI stage, BVD led to an improved ability to predict hospital mortality (area under the curve, 0.784 vs. 0.766; P < .001). Adding semiquantitative RVSD and LVSD led to further improvement (AUC, 0.794; P < .01 vs. both).
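To make the idea of incremental discrimination concrete, here is a minimal sketch, using simulated data rather than the study's, that fits logistic models with and without a BVD flag and compares their areas under the ROC curve with scikit-learn; the coefficients and prevalences are arbitrary assumptions.

```python
# Simulated illustration of comparing discrimination (AUC) with and without a BVD flag.
# Data and effect sizes are synthetic and do not reproduce the study's results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
scai_stage = rng.integers(0, 5, size=n)         # shock stages A-E coded 0-4
bvd = rng.binomial(1, 0.16, size=n)             # ~16% with biventricular dysfunction
logit = -3.5 + 0.8 * scai_stage + 0.6 * bvd     # assumed mortality model
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

x_base = scai_stage.reshape(-1, 1)              # SCAI stage alone
x_full = np.column_stack([scai_stage, bvd])     # SCAI stage plus BVD flag
auc_base = roc_auc_score(died, LogisticRegression().fit(x_base, died).predict_proba(x_base)[:, 1])
auc_full = roc_auc_score(died, LogisticRegression().fit(x_full, died).predict_proba(x_full)[:, 1])
print(f"SCAI stage alone: AUC {auc_base:.3f}; SCAI stage + BVD: AUC {auc_full:.3f}")
```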
RVSD was associated with higher in-hospital mortality (adjusted odds ratio, 1.421; P = .02), and there was a trend toward greater mortality with LVSD (aOR, 1.336; P = .06). There was little change when SCAI shock stage A patients were excluded (aOR, 1.840; P < .001).
Patients with BVD had greater in-hospital mortality than those without ventricular dysfunction (aOR, 1.815; P = .0023), but other between-group comparisons were not significant.
The researchers performed a classification and regression tree analysis using left ventricular ejection fraction (LVEF) and semiquantitative RVSD. It found that RVSD was a better predictor of in-hospital mortality than LVSD, and the best cutoff for LVSD was different among patients with RVSD and patients without RVSD.
Patients with mild or greater RVSD and LVEF less than 24% were considered high risk; those with borderline or no RVSD and LVEF less than 33%, or mild or greater RVSD with LVEF of at least 24%, were considered intermediate risk. Patients with borderline or no RVSD and LVEF of at least 33% were considered low risk. Hospital mortality was 22% in the high-risk group, 12.2% in the intermediate group, and 3.3% in the low-risk group (aOR vs. intermediate, 0.493; P = .0006; aOR vs. high risk, 0.357; P < .0001).
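Read as code, those decision-tree groupings amount to nested cutoffs on RVSD severity and LVEF; the sketch below encodes them with the high-risk branch interpreted as LVEF below 24% in the presence of at least mild RVSD, so treat the exact boundaries as an assumption rather than a reproduction of the published tree.

```python
# Sketch of the reported risk tiers (LVEF in percent; RVSD graded 'none', 'borderline',
# 'mild', 'moderate', or 'severe'). Boundaries follow the text as interpreted above and
# are an assumption, not the published classification tree.

def shock_risk_tier(lvef: float, rvsd: str) -> str:
    at_least_mild_rvsd = rvsd in ("mild", "moderate", "severe")
    if at_least_mild_rvsd:
        return "high" if lvef < 24 else "intermediate"   # 24% LVEF cutoff with RVSD
    return "low" if lvef >= 33 else "intermediate"       # 33% LVEF cutoff without RVSD

# Reported in-hospital mortality by tier: high 22%, intermediate 12.2%, low 3.3%.
print(shock_risk_tier(20, "moderate"))   # high
print(shock_risk_tier(30, "mild"))       # intermediate
print(shock_risk_tier(45, "none"))       # low
```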
The study authors disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Addition of echocardiogram measurement of biventricular dysfunction improved the accuracy of prognosis among patients with cardiac shock (CS) in the cardiac intensive care unit.
In patients in the cardiac ICU with CS, biventricular dysfunction (BVD), as assessed using transthoracic echocardiography, improves clinical risk stratification when combined with the Society for Cardiovascular Angiography and Interventions shock stage.
No improvements in risk stratification was seen with patients with left or right ventricular systolic dysfunction (LVSD or RVSD) alone, according to an article published in the journal Chest.
Ventricular systolic dysfunction is commonly seen in patients who have suffered cardiac shock, most often on the left side. Although echocardiography is often performed on these patients during diagnosis, previous studies looking at ventricular dysfunction used invasive hemodynamic parameters, which made it challenging to incorporate their findings into general cardiac ICU practice.
Pinning down cardiac shock
Although treatment of acute MI and heart failure has improved greatly, particularly with the implementation of percutaneous coronary intervention (primary PCI) for ST-segment elevation MI. This has reduced the rate of future heart failure, but cardiac shock can occur before or after the procedure, with a 30-day mortality of 30%-40%. This outcome hasn’t improved in the last 20 years.
Efforts to improve cardiac shock outcomes through percutaneous mechanical circulatory support devices have been hindered by the fact that CS patients are heterogeneous, and prognosis may depend on a range of factors.
SCAI was developed as a five-stage classification system for CS to improve communication of patient status, as well as to improve differentiation among patients participation in clinical trials. It does not include measures of ventricular dysfunction.
Simple measure boosts prognosis accuracy
Adding echocardiographic assessment of biventricular dysfunction improved the accuracy of prognosis among patients with cardiogenic shock (CS) in the cardiac intensive care unit.
In patients in the cardiac ICU with CS, biventricular dysfunction (BVD), as assessed using transthoracic echocardiography, improves clinical risk stratification when combined with the Society for Cardiovascular Angiography and Interventions (SCAI) shock stage.
No improvement in risk stratification was seen in patients with left or right ventricular systolic dysfunction (LVSD or RVSD) alone, according to an article published in the journal Chest.
Ventricular systolic dysfunction, most often on the left side, is commonly seen in patients with cardiogenic shock. Although echocardiography is often performed in these patients during diagnosis, previous studies of ventricular dysfunction used invasive hemodynamic parameters, which made it challenging to incorporate their findings into general cardiac ICU practice.
Pinning down cardiogenic shock
Treatment of acute MI and heart failure has improved greatly, particularly with the implementation of primary percutaneous coronary intervention (PCI) for ST-segment elevation MI. This has reduced the rate of subsequent heart failure, but cardiogenic shock can occur before or after the procedure, with a 30-day mortality of 30%-40%, an outcome that has not improved in the past 20 years.
Efforts to improve cardiogenic shock outcomes through percutaneous mechanical circulatory support devices have been hindered by the fact that CS patients are heterogeneous and prognosis may depend on a range of factors.
The SCAI staging system was developed as a five-stage classification for CS to improve communication of patient status, as well as to improve differentiation among patients participating in clinical trials. It does not include measures of ventricular dysfunction.
Simple measure boosts prognosis accuracy
The new work adds an additional layer to the SCAI shock stage. “Adding echocardiography allows discrimination between levels of risk for each SCAI stage,” said David Baran, MD, who was asked for comment. Dr. Baran was the lead author on the original SCAI study and is system director of advanced heart failure at Sentara Heart Hospital, as well as a professor of medicine at Eastern Virginia Medical School, both in Norfolk.
The work also underscores the value of repeated measures of prognosis during a patient’s stay in the ICU. “If a patient is not improving, it may prompt a consideration of whether transfer or consultation with a tertiary center may be of value. Conversely, if a patient doesn’t have high-risk features and is responding to therapy, it is reassuring to have data supporting low mortality with that care plan,” said Dr. Baran.
The study may be biased, since not every patient undergoes an echocardiogram. Still, “the authors make a convincing case that biventricular dysfunction is a powerful negative marker across the spectrum of SCAI stages,” said Dr. Baran.
Echocardiography is simple and widely available, and some devices are even portable and can be used with a smartphone. But patient body size can interfere with echocardiography, as can the presence of a ventilator or multiple surgical dressings. “The key advantage of echo is that it is completely noninvasive and can be brought to the patient in the ICU, unlike other testing which involves moving the patient to the testing environment,” said Dr. Baran.
The researchers analyzed data from 3,158 patients admitted to the cardiac ICU at the Mayo Clinic Hospital St. Mary’s Campus in Rochester, Minn., 51.8% of whom had acute coronary syndromes. They defined LVSD as a left ventricular ejection fraction less than 40%, and RVSD as at least moderate systolic dysfunction determined by semiquantitative measurement. BVD constituted the presence of both LVSD and RVSD. They examined the association of in-hospital mortality with these parameters combined with SCAI stage.
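To make those definitions concrete, here is a minimal sketch encoding them; the semiquantitative grade labels and the function name are illustrative assumptions, not taken from the study.

# Illustrative encoding of the study's echo definitions: LVSD = LVEF < 40%,
# RVSD = at least moderate right ventricular systolic dysfunction on a
# semiquantitative scale, and BVD = both present. Grade labels are assumptions.
RVSD_GRADES = ["none", "borderline", "mild", "moderate", "severe"]

def classify_ventricular_dysfunction(lvef: float, rvsd_grade: str) -> dict:
    lvsd = lvef < 40
    rvsd = RVSD_GRADES.index(rvsd_grade) >= RVSD_GRADES.index("moderate")
    return {"LVSD": lvsd, "RVSD": rvsd, "BVD": lvsd and rvsd}

print(classify_ventricular_dysfunction(lvef=35, rvsd_grade="moderate"))
# {'LVSD': True, 'RVSD': True, 'BVD': True}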
BVD a risk factor
Overall in-hospital mortality was 10%. A total of 22.3% of patients had LVSD and 11.8% had RVSD; 16.4% had moderate or greater BVD. There was no association between LVSD or RVSD and in-hospital mortality after adjustment for SCAI stage, but there was a significant association for BVD (adjusted odds ratio, 1.815; P = .0023). When combined with SCAI stage, BVD led to an improved ability to predict hospital mortality (area under the curve, 0.784 vs. 0.766; P < .001). Adding semiquantitative RVSD and LVSD led to further improvement (AUC, 0.794; P < .01 vs. both).
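To make that kind of comparison concrete, here is a minimal sketch (not the authors' analysis) of how discrimination for in-hospital mortality could be compared between a model using SCAI stage alone and one that adds a BVD indicator, using logistic regression and the area under the ROC curve; the simulated data frame and all column names are assumptions for illustration only.

# Illustrative sketch only: compares discrimination (AUC) of SCAI stage alone
# vs. SCAI stage plus a biventricular-dysfunction (BVD) flag for in-hospital
# mortality. The data frame and its columns are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "scai_stage": rng.integers(0, 5, n),   # 0-4 standing in for SCAI stages A-E
    "bvd": rng.integers(0, 2, n),          # 1 = biventricular dysfunction present
})
logit = -3.5 + 0.8 * df["scai_stage"] + 0.6 * df["bvd"]
df["died"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated outcome

base = LogisticRegression().fit(df[["scai_stage"]], df["died"])
full = LogisticRegression().fit(df[["scai_stage", "bvd"]], df["died"])

auc_base = roc_auc_score(df["died"], base.predict_proba(df[["scai_stage"]])[:, 1])
auc_full = roc_auc_score(df["died"], full.predict_proba(df[["scai_stage", "bvd"]])[:, 1])
print(f"AUC, SCAI alone: {auc_base:.3f}; AUC, SCAI + BVD: {auc_full:.3f}")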
RVSD was associated with higher in-hospital mortality (adjusted odds ratio, 1.421; P = .02), and there was a trend toward greater mortality with LVSD (aOR, 1.336; P = .06). There was little change when SCAI shock stage A patients were excluded (aOR, 1.840; P < .001).
Patients with BVD had greater in-hospital mortality than those without ventricular dysfunction (aOR, 1.815; P = .0023), but other between-group comparisons were not significant.
The researchers performed a classification and regression tree analysis using left ventricular ejection fraction (LVEF) and semiquantitative RVSD. It found that RVSD was a better predictor of in-hospital mortality than LVSD, and that the optimal LVEF cutoff differed between patients with and without RVSD.
Patients with mild or greater RVSD and LVEF less than 24% were considered high risk; those with borderline or no RVSD and LVEF less than 33%, or mild or greater RVSD with LVEF of at least 24%, were considered intermediate risk. Patients with borderline or no RVSD and LVEF of at least 33% were considered low risk. Hospital mortality was 22% in the high-risk group, 12.2% in the intermediate-risk group, and 3.3% in the low-risk group (aOR vs. intermediate, 0.493; P = .0006; aOR vs. high risk, 0.357; P < .0001).
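A minimal sketch of that three-tier scheme, assuming the cutoffs as described above (an LVEF threshold of 24% when at least mild RVSD is present and 33% otherwise); the function and variable names are illustrative, not the authors' code.

# Illustrative sketch of the echocardiographic risk tiers described above.
# Cutoffs follow the text: LVEF 24% when at least mild RVSD is present,
# LVEF 33% otherwise. Names are assumptions, not from the study.
def echo_risk_tier(lvef: float, rvsd_at_least_mild: bool) -> str:
    if rvsd_at_least_mild:
        # With at least mild RVSD, LVEF below 24% marks the highest-risk tier.
        return "high" if lvef < 24 else "intermediate"
    # With no or borderline RVSD, LVEF below 33% marks intermediate risk.
    return "intermediate" if lvef < 33 else "low"

# Reported in-hospital mortality by tier: high 22%, intermediate 12.2%, low 3.3%.
print(echo_risk_tier(lvef=20, rvsd_at_least_mild=True))   # high
print(echo_risk_tier(lvef=30, rvsd_at_least_mild=False))  # intermediate
print(echo_risk_tier(lvef=45, rvsd_at_least_mild=False))  # low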
The study authors disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Rhinosinusitis without nasal polyps lowers QoL in COPD
Concomitant rhinosinusitis without nasal polyps (RSsNP) in patients with chronic obstructive pulmonary disease (COPD) is associated with poorer disease-specific, health-related quality of life (HRQoL), a Norwegian study shows.
“Chronic rhinosinusitis has an impact on patients’ HRQoL,” lead author Marte Rystad Øie, Trondheim (Norway) University Hospital, said in an interview.
“We found that RSsNP in COPD was associated with more psychological issues, higher COPD symptom burden, and overall COPD-related HRQoL after adjusting for lung function, so RSsNP does have clinical relevance and [our findings] support previous studies that have suggested that rhinosinusitis should be recognized as a comorbidity in COPD,” she emphasized.
The study was published in the Nov. 1 issue of Respiratory Medicine.
Study sample
The study sample consisted of 90 patients with COPD and 93 control subjects, all aged 40-80 years. “Generic HRQoL was measured with the Norwegian version of the SF-36v2 Health Survey Standard questionnaire,” the authors wrote, and responses were compared between patients with COPD and controls as well as between subgroups of patients who had COPD with and without RSsNP.
Disease-specific HRQoL was assessed by the Sinonasal Outcome Test-22 (SNOT-22), the St. George’s Respiratory Questionnaire (SGRQ), and the COPD Assessment Test (CAT), and responses were compared between patients who had COPD with and without RSsNP. In the COPD group, “severe” or “very severe” airflow obstruction was present in 56.5% of patients with RSsNP compared with 38.6% of patients without RSsNP, Ms. Øie reported.
Furthermore, total SNOT-22 and psychological subscale scores were both significantly higher in patients who had COPD with RSsNP than in those without RSsNP. Among those with RSsNP, the mean total SNOT-22 score was 36.8 and the mean psychological subscale score was 22.6; the corresponding values among patients who had COPD without RSsNP were 9.5 and 6.5 (P < .05).
Total SGRQ scores were also significantly greater in patients who had COPD with RSsNP, at a mean of 43.3, compared with a mean of 34 in those without RSsNP, the investigators observed. Similarly, scores for the SGRQ symptom and activity domains were significantly greater for patients who had COPD with RSsNP than for those without nasal polyps. The total CAT score was likewise significantly higher in patients who had COPD with RSsNP, at a mean of 18.8, compared with a mean of 13.5 in those without RSsNP (P < .05).
Indeed, patients with RSsNP were four times more likely to have CAT scores indicating the condition was having a high or very high impact on their HRQoL compared with patients without RSsNP (P < .001). As the authors pointed out, having a high impact on HRQoL translates into patients having to stop their desired activities and having no good days in the week.
“This suggests that having RSsNP substantially adds to the activity limitation experienced by patients with COPD,” they emphasized. The authors also found that RSsNP was significantly associated with poorer physical functioning after adjusting for COPD as reflected by SF-36v2 findings, again suggesting that patients who had COPD with concomitant RSsNP have an additional limitation in activity and a heavier symptom burden.
As Ms. Øie explained, rhinosinusitis has two clinical phenotypes: that with nasal polyps and that without nasal polyps, the latter being twice as prevalent. In fact, rhinosinusitis with nasal polyps is associated with asthma, as she pointed out. Given, however, that rhinosinusitis without polyps is amenable to treatment with daily use of nasal steroids, it is possible to reduce the burden of symptoms and psychological stress associated with RSsNP in COPD.
Limitations of the study include the fact that investigators did not assess patients for the presence of any comorbidities that could contribute to poorer HRQoL in this patient population.
The study was funded by the Liaison Committee between the Central Norway Regional Health Authority and the Norwegian University of Science and Technology. The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Penicillin slows latent rheumatic heart disease progression
In a randomized controlled trial of close to 1,000 Ugandan children and youth with latent rheumatic heart disease (RHD), those who received monthly injections of penicillin G benzathine for 2 years had less disease progression than those who did not.
RHD, a valvular heart disease caused by rheumatic fever that develops after untreated Streptococcus pyogenes infection, is the most common acquired cardiovascular disease among children and young adults.
“It is clear that secondary antibiotic prophylaxis can improve outcomes for children with echo-detected rheumatic RHD,” co–lead author of the study, Andrea Z. Beaton, MD, said in an interview.
“There is huge potential here, but we are not quite ready to advocate for this strategy as a broad public health approach,” said Dr. Beaton, a pediatric cardiologist at Cincinnati Children’s Hospital Medical Center.
“We need to understand more the practical translation of this strategy to a low-resourced public health system at scale, improve [penicillin G benzathine] supply, and improve community and health care worker knowledge of this disease.”
Dr. Beaton presented the findings at the American Heart Association scientific sessions, and the study was simultaneously published in the New England Journal of Medicine on Nov. 13, 2021.
The GOAL trial – or the Gwoko Adunu pa Lutino trial, meaning “protect the heart of a child” – screened 102,200 children and adolescents aged 5-17. Of these, 926 (0.9%) were diagnosed with latent RHD based on a confirmatory echocardiogram.
“For now, I would say, if you are screening, then kids found to have latent RHD should be put on prophylaxis,” Dr. Beaton said.
“I think this is also a powerful call for more research [severely lacking in RHD],” to improve risk stratification, determine how to implement screening and prophylaxis programs, and develop new and better approaches for RHD prevention and care.
“This essential trial partially addresses the clinical equipoise that has developed regarding penicillin administration in latent RHD,” said Gabriele Rossi, MD, MPH, who was not involved with this research.
It showed that, out of the final 818 participants included in the modified intention-to-treat analysis, a total of 3 (0.8%) in the prophylaxis group had echocardiographic progression at 2 years, compared with 33 participants (8.2%) in the control group (risk difference, −7.5 percentage points; 95% confidence interval, −10.2 to −4.7; P < .001).
“This is a significant difference,” Dr. Rossi, from Médecins Sans Frontières (Doctors Without Borders), Brussels, said in an interview, noting, however, that it is not known what happens after 2 years.
The authors estimated that 13 children or adolescents with latent rheumatic heart disease would need to be treated to prevent disease progression in one person at 2 years, which is “acceptable,” he continued.
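That figure follows directly from the reported event rates, since the number needed to treat is the reciprocal of the absolute risk reduction; a quick check:

# Number needed to treat from the reported 2-year progression rates
# (0.8% with prophylaxis vs 8.2% in the control group).
risk_control = 0.082
risk_treated = 0.008
arr = risk_control - risk_treated   # absolute risk reduction, about 0.074
nnt = 1 / arr                       # about 13.5, consistent with the reported 13
print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}")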
However, “screening, diagnosis, clinical follow-up, treatment, and program management [would] require substantial strengthening of health systems and the workforce, which is still far from being realizable in many African and low-income country settings,” Dr. Rossi noted.
Related study in Italy
Previously, Dr. Rossi and colleagues conducted a trial, published in 2019, that showed it was feasible to screen for asymptomatic RHD among refugee/migrant children and youths in Rome.
From February 2016 to January 2018, they screened more than 650 refugee/migrant children and adolescents who were younger than 18. They came largely from Egypt (65%) but also from 22 other countries and were often unaccompanied or with just one parent.
The number needed to screen was 5 to identify a child/youth with borderline RHD and around 40 to identify a child/youth with definite RHD.
Dr. Rossi noted that local resurgences of RHD have also been documented in high-income settings such as Europe, Australia, New Zealand, Canada, and the United States, often among disadvantaged indigenous people, as described in a 2018 letter to the editor in the New England Journal of Medicine.
Dr. Beaton noted that a review of 10-year data (2008-2018) from 22 U.S. pediatric institutions showed that in the United States the prevalence of RHD “is higher in immigrant children from RHD endemic areas, but because of total numbers, more RHD cases than not are domestic.” Children living in more deprived communities are at risk for more severe disease, and the burden in U.S. territories is also quite high.
Screening and secondary prophylaxis
The aim of the current GOAL study was to evaluate if screening and treatment with penicillin G benzathine could detect and prevent progression of latent rheumatic heart disease in 5- to 17-year-olds living in Gulu, Uganda. The trial was conducted from July 2018 to October 2020.
“School education and community sensitization was done prior to the trial,” through radio shows or school-based education, Dr. Beaton explained. About 99% of the children/adolescents/families agreed to be screened.
The group has been conducting echo screening research in Uganda for 10 years, she noted. They have developed peer group and case manager strategies to aid participant retention, as they describe in an article about the study protocol.
The screening echocardiograms were interpreted by about 30 providers, and four cardiologists reviewed the confirmatory echocardiograms.
Two participants in the prophylaxis group had serious adverse events that were attributable to receipt of prophylaxis, including one episode of a mild anaphylactic reaction (representing <0.1% of all administered doses of prophylaxis).
Once children and adolescents have moderate/severe RHD, there is not much that can be done in lower- and middle-income countries, where surgery for this is uncommon, Dr. Beaton explained. Around 30% of children and adolescents with this condition who come to clinical attention in Uganda die within 9 months.
Further research
Dr. Beaton and colleagues have just started a trial to investigate the burden of RHD among Native American youth, which has not been studied since the 1970s.
They also have an ongoing study looking at the efficacy of a pragmatic, community-based sore throat program to prevent RHD.
“Unfortunately, this strategy has not worked well in low-to-middle income countries, for a variety of reasons so far,” Dr. Beaton noted, and the cost-effectiveness of this preventive strategy is questionable.
The trial was supported by the Thrasher Research Fund, Gift of Life International, Children’s National Hospital Foundation (Zachary Blumenfeld Fund and Race for Every Child [Team Jocelyn]), the Elias-Ginsburg Family, Wiley Rein, Philips Foundation, AT&T Foundation, Heart Healers International, the Karp Family Foundation, Huron Philanthropies, and the Cincinnati Children’s Hospital Heart Institute Research Core. Dr. Beaton and Dr. Rossi disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM AHA 2021
Adjuvant Olaparib Improves Outcomes in High-Risk, HER2-Negative Early Breast Cancer Patients With Germline BRCA1 and BRCA2 Mutations
Study Overview
Objective. To assess the efficacy and safety of olaparib as an adjuvant treatment in patients with BRCA1 or BRCA2 germline mutations who are at high risk for relapse.
Design. A randomized, double-blind, placebo-controlled, multicenter phase III study. The published results are from the prespecified interim analysis.
Intervention. Patients were randomized in a 1:1 ratio to receive 300 mg of olaparib orally twice daily or a matching placebo. Randomization was stratified by hormone receptor status (estrogen receptor and/or progesterone receptor positive/HER2-negative vs triple negative), prior neoadjuvant vs adjuvant chemotherapy, and prior platinum use for breast cancer. Treatment was continued for 52 weeks.
Setting and participants. A total of 1836 patients were randomized in a 1:1 fashion to receive olaparib or placebo. Eligible patients had a germline BRCA1 or BRCA2 pathogenic or likely pathogenic variant. Patients had high-risk, HER2-negative primary breast cancers, and all had received definitive local therapy and neoadjuvant or adjuvant chemotherapy. Patients were enrolled between 2 and 12 weeks after completion of all local therapy. Platinum chemotherapy was allowed. Patients received adjuvant endocrine therapy for hormone receptor–positive disease as well as adjuvant bisphosphonates per institutional guidelines. Patients with triple-negative disease who received adjuvant chemotherapy were required to be lymph node positive or to have at least 2 cm of invasive disease, and those who received neoadjuvant chemotherapy were required to have residual invasive disease. Hormone receptor–positive patients who received adjuvant chemotherapy were required to have at least 4 pathologically confirmed involved lymph nodes, and those who received neoadjuvant chemotherapy were required to have residual invasive disease.
Main outcome measures. The primary endpoint of the study was invasive disease-free survival, defined as the time from randomization to the date of recurrence or death from any cause. Secondary endpoints included overall survival (OS), distant disease-free survival, and the safety and tolerability of olaparib.
Main results. At the time of data cutoff, 284 events had occurred, with a median follow-up of 2.5 years in the intention-to-treat population. A total of 81% of patients had triple-negative breast cancer. Most patients (94% in the olaparib group and 92% in the placebo group) received both taxane- and anthracycline-based chemotherapy regimens. Platinum-based chemotherapy was used in 26% of patients in each group. The groups were otherwise well balanced. Germline mutations in BRCA1 were present in 72% of patients and BRCA2 in 27% of patients; these were balanced between groups.
At the time of this analysis, adjuvant olaparib reduced the risk of invasive disease or death by 42% compared with placebo (P < .001). At 3 years, invasive disease-free survival was 85.9% in the olaparib group and 77.1% in the placebo group (difference, 8.8 percentage points; 95% CI, 4.5-13.0; hazard ratio [HR], 0.58; 99.5% CI, 0.41-0.82; P < .001). The 3-year distant disease-free survival was 87.5% in the olaparib group and 80.4% in the placebo group (HR, 0.57; 99.5% CI, 0.39-0.83; P < .001). Olaparib was also associated with fewer deaths than placebo (59 vs 86; HR, 0.68; 99% CI, 0.44-1.05; P = .02); however, the difference in overall survival was not significant at the time of this interim analysis. Subgroup analysis showed a consistent benefit across all groups, with no difference noted by BRCA mutation, hormone receptor status, or use of neoadjuvant vs adjuvant chemotherapy.
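For readers translating these figures, the 42% relative reduction corresponds to the hazard ratio of 0.58, and the 8.8-percentage-point difference is the gap between the 3-year invasive disease-free survival rates; a quick arithmetic check:

# Relating the reported hazard ratio and survival rates to the headline figures.
hr = 0.58
relative_risk_reduction = 1 - hr              # 0.42, i.e., the reported 42%
idfs_olaparib, idfs_placebo = 0.859, 0.771    # 3-year invasive disease-free survival
abs_difference = idfs_olaparib - idfs_placebo # 0.088, i.e., 8.8 percentage points
print(f"{relative_risk_reduction:.0%} relative reduction, "
      f"{abs_difference * 100:.1f} percentage-point absolute gain")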
The side effects were consistent with the known safety profile of olaparib. Adverse events of grade 3 or higher that were more common with olaparib included anemia (8.7%), leukopenia (3%), and fatigue (1.8%). Early discontinuation of the trial regimen due to adverse events or disease recurrence occurred in 25.9% of the olaparib group and 20.7% of the placebo group. Blood transfusions were required in 5.8% of patients in the olaparib group. Myelodysplasia or acute myeloid leukemia was observed in 2 patients in the olaparib group and 3 patients in the placebo group. Adverse events leading to death occurred in 1 patient in the olaparib group and 2 patients in the placebo group.
Conclusion. Among patients with high-risk, HER2-negative early breast cancer and germline BRCA1 or BRCA2 pathogenic or likely pathogenic variants, adjuvant olaparib after completion of local treatment and neoadjuvant or adjuvant chemotherapy was associated with significantly longer invasive disease-free and distant disease-free survival compared with placebo.
Commentary
The results from the OlympiA trial provide the first evidence that adjuvant therapy with a poly(adenosine diphosphate–ribose) polymerase (PARP) inhibitor can improve outcomes in high-risk, HER2-negative breast cancer in patients with pathogenic BRCA1 and BRCA2 mutations. The OS data, while favoring olaparib, were not yet mature at the time of this analysis. Nevertheless, these results represent an important step forward in improving outcomes in this patient population. The efficacy and safety of PARP inhibitors in BRCA-mutated breast cancer have previously been shown in patients with advanced disease, leading to FDA approval of both olaparib and talazoparib in that setting.1,2 With the current results, PARP inhibitors will certainly play an important role in the adjuvant setting for patients with deleterious BRCA1 or BRCA2 mutations at high risk for relapse. Importantly, the side effect profile appears acceptable, with no unexpected events and a very low rate of secondary myeloid malignancies.
Subgroup analysis appears to indicate a benefit across all groups, including hormone receptor–positive disease and triple-negative breast cancer. Interestingly, approximately 25% of patients in both cohorts received platinum-based chemotherapy, and the efficacy of adjuvant olaparib did not appear to be affected by prior use of platinum-containing regimens. It is important to note that postneoadjuvant capecitabine, per the results of the CREATE-X trial, was not permitted in the current study for triple-negative patients, although it has been widely adopted in clinical practice.3 The CREATE-X trial did not specify the benefit of adjuvant capecitabine in the BRCA-mutated cohort, so it is not clear how this subgroup fares with that approach. Thus, one cannot extrapolate the relative efficacy of olaparib compared with capecitabine, as the authors point out, and whether to consider capecitabine and/or olaparib in triple-negative patients with residual invasive disease after neoadjuvant chemotherapy remains unclear at this time.
Nevertheless, the magnitude of benefit seen in this trial certainly provides clinically relevant and potentially practice-changing results. It will be imperative to follow these results as the survival data mature and to ensure that no further long-term toxicity, particularly secondary myeloid malignancies, develops. These results should be discussed with each patient, and informed decisions regarding the use of adjuvant olaparib should be considered for this patient population. Lastly, these results highlight the importance of germline testing for patients with breast cancer in accordance with national guideline recommendations. Moreover, they raise the question of whether it is time to consider expanding current germline testing guidelines to detect all patients who may benefit from this therapy.
Application for Clinical Practice
Adjuvant olaparib in high-risk patients with germline BRCA1 or BRCA2 mutations improves invasive and distant disease-free survival and should be considered in patients who meet the enrollment criteria of the current study. Furthermore, this highlights the importance of appropriate germline genetic testing in patients with breast cancer.
Financial disclosures: None.
1. Robson M, Im SA, Senkus E, et al. Olaparib for metastatic breast cancer in patients with a germline BRCA mutation. N Engl J Med. 2017;377(6):523-533. doi:10.1056/NEJMoa1706450
2. Litton JK, Rugo HS, Ettl J, et al. Talazoparib in Patients with Advanced Breast Cancer and a Germline BRCA Mutation. N Engl J Med. 2018;379(8):753-763. doi:10.1056/NEJMoa1802905
3. Masuda N, Lee SJ, Ohtani S, et al. Adjuvant Capecitabine for Breast Cancer after Preoperative Chemotherapy. N Engl J Med. 2017;376(22):2147-2159. doi:10.1056/NEJMoa1612645
Study Overview
Objective. To assess the efficacy and safety of olaparib as an adjuvant treatment in patients with BRCA1 or BRCA2 germline mutations who are at a high-risk for relapse.
Design. A randomized, double-blind, placebo-controlled, multicenter phase III study. The published results are from the prespecified interim analysis.
Intervention. Patients were randomized in 1:1 ratio to either receive 300 mg of olaparib orally twice daily or to receive a matching placebo. Randomization was stratified by hormone receptor status (estrogen receptor and/or progesterone receptor positive/HER2-negative vs triple negative), prior neoadjuvant vs adjuvant chemotherapy, and prior platinum use for breast cancer. Treatment was continued for 52 weeks.
Setting and participants. A total of 1836 patients were randomized in a 1:1 fashion to receive olaparib or a placebo. Eligible patients had a germline BRCA1 or BRCA1 pathogenic or likely pathogenic variant. Patients had high-risk, HER2-negative primary breast cancers and all had received definitive local therapy and neoadjuvant or adjuvant chemotherapy. Patients were enrolled between 2 to 12 weeks after completion of all local therapy. Platinum chemotherapy was allowed. Patients received adjuvant endocrine therapy for hormone receptor positive disease as well as adjuvant bisphosphonates per institutional guidelines. Patients with triple negative disease who received adjuvant chemotherapy were required to be lymph node positive or have at least 2 cm invasive disease. Patients who received neoadjuvant chemotherapy were required to have residual invasive disease to be eligible. For hormone receptor positive patients receiving adjuvant chemotherapy to be eligible they had to have at least 4 pathologically confirmed lymph nodes involved. Hormone receptor positive patients who had neoadjuvant chemotherapy were required to have had residual invasive disease.
Main outcome measures. The primary endpoint for the study was invasive disease-free survival which was defined as time from randomization to date of recurrence or death from any cause. The secondary endpoints included overall survival (OS), distant disease-free survival, safety, and tolerability of olaparib.
Main results. At the time of data cutoff, 284 events had occurred with a median follow-up of 2.5 years in the intention to treat population. A total of 81% of patients had triple negative breast cancer. Most patients (94% in the olaparib group and 92% in the placebo group) received both taxane and anthracycline based chemotherapy regimens. Platinum based chemotherapy was used in 26% of patients in each group. The groups were otherwise well balanced. Germline mutations in BRCA1 were present in 72% of patients and BRCA2 in 27% of patients. These were balanced between groups.
At the time of this analysis, adjuvant olaparib reduced the risk of invasive disease-free survival by 42% compared with placebo (P < .001). At 3 years, invasive disease-free survival was 85.9% in the olaparib group and 77.1% in the placebo group (difference, 8.8 percentage points; 95% CI, 4.5-13.0; hazard ratio [HR], 0.58; 99.5% CI, 0.41-0.82; P < .001). The 3-year distant disease-free survival was 87.5% in the olaparib group and 80.4% in the placebo group (HR 0.57; 99.5% CI, 0.39-0.83; P < .001). Results also showed that olaparib was associated with fewer deaths than placebo (59 and 86, respectively) (HR, 0.68; 99% CI, 0.44-1.05; P = .02); however, there was no significant difference between treatment arms at the time of this interim analysis. Subgroup analysis showed a consistent benefit across all groups with no difference noted regarding BRCA mutation, hormone receptor status or use of neoadjuvant vs adjuvant chemotherapy.
The side effects were consistent with the safety profile of olaparib. Adverse events of grade 3 or higher more common with olaparib included anemia (8.7%), leukopenia (3%), and fatigue (1.8%). Early discontinuation of trial regimen due to adverse events of disease recurrence occurred in 25.9% in the olaparib group and 20.7% in the placebo group. Blood transfusions were required in 5.8% of patients in the olaparib group. Myelodysplasia or acute myleoid leukemia was observed in 2 patients in the olaparib group and 3 patients in the placebo group. Adverse events leading to death occurred in 1 patient in the olaparib group and 2 patients in the placebo group.
Conclusion. Among patients with high-risk, HER2-negative early breast cancer and germline BRCA1 or BRCA2 pathogenic or likely pathogenic variants, adjuvant olaparib after completion of local treatment and neoadjuvant or adjuvant chemotherapy was associated with significantly longer invasive disease-free and distant disease-free survival compared with placebo.
Commentary
The results from the current OlympiA trial provide the first evidence that adjuvant therapy with poly adenosine diphosphate-ribose polymerase (PARP) inhibitors can improve outcomes in high-risk, HER2-negative breast cancer in patients with pathogenic BRCA1 and BRCA2 mutations. The OS, while favoring olaparib, is not yet mature at the time of this analysis. Nevertheless, these results represent an important step forward in improving outcomes in this patient population. The efficacy and safety of PARP inhibitors in BRCA-mutated breast cancer has previously been shown in patients with advanced disease leading to FDA approval of both olaparib and talazoparib in this setting.1,2 With the current results, PARP inhibitors will certainly play an important role in the adjuvant setting in patients with deleterious BRCA1 or BRCA2 mutations at high risk for relapse. Importantly, the side effect profile appears acceptable with no unexpected events and a very low rate of secondary myeloid malignancies.
Subgroup analysis appears to indicate a benefit across all groups including hormone receptor–positive disease and triple negative breast cancer. Interestingly, approximately 25% of patients in both cohorts received platinum-based chemotherapy. The efficacy of adjuvant olaparib did not appear to be impacted by prior use of platinum-containing chemotherapy regimens. It is important to consider that postneoadjuvant capecitabine, per the results of the CREATE-X trial, in triple-negative patients was not permitted in the current study. Although, this has been widely adopted in clinical practice.3 The CREATE-X trial did not specify the benefit of adjuvant capecitabine in the BRCA-mutated cohort, thus, it is not clear how this subgroup fares with this approach. Thus, one cannot extrapolate the relative efficacy of olaparib compared with capecitabine, as pointed out by the authors, and whether we consider the use of capecitabine and/or olaparib in triple-negative patients with residual invasive disease after neoadjuvant chemotherapy is not clear at this time.
Nevertheless, the magnitude of benefit seen in this trial certainly provide clinically relevant and potentially practice changing results. It will be imperative to follow these results as the survival data matures and ensure no further long-term toxicity, particularly secondary myeloid malignancies, develop. These results should be discussed with each patient and informed decisions regarding the use of adjuvant olaparib should be considered for this patient population. Lastly, these results highlight the importance of germline testing for patients with breast cancer in accordance with national guideline recommendations. Moreover, these results certainly call into question whether it is time to consider expansion of our current germline testing guidelines to detect all potential patients who may benefit from this therapy.
Application for Clinical Practice
Adjuvant olaparib in high-risk patients with germline BRCA1 or BRCA2 mutations improves invasive and distant disease-free survival and should be considered in patients who meet the enrollment criteria of the current study. Furthermore, this highlights the importance of appropriate germline genetic testing in patients with breast cancer.
Financial disclosures: None.
Study Overview
Objective. To assess the efficacy and safety of olaparib as an adjuvant treatment in patients with BRCA1 or BRCA2 germline mutations who are at a high-risk for relapse.
Design. A randomized, double-blind, placebo-controlled, multicenter phase III study. The published results are from the prespecified interim analysis.
Intervention. Patients were randomized in a 1:1 ratio to receive either 300 mg of olaparib orally twice daily or a matching placebo. Randomization was stratified by hormone receptor status (estrogen receptor and/or progesterone receptor positive/HER2-negative vs triple negative), prior neoadjuvant vs adjuvant chemotherapy, and prior platinum use for breast cancer. Treatment was continued for 52 weeks.
Setting and participants. A total of 1836 patients were randomized in a 1:1 fashion to receive olaparib or placebo. Eligible patients had a germline BRCA1 or BRCA2 pathogenic or likely pathogenic variant and high-risk, HER2-negative primary breast cancer, and all had received definitive local therapy and neoadjuvant or adjuvant chemotherapy. Patients were enrolled between 2 and 12 weeks after completion of all local therapy. Platinum chemotherapy was allowed. Patients received adjuvant endocrine therapy for hormone receptor–positive disease as well as adjuvant bisphosphonates per institutional guidelines. Patients with triple-negative disease who received adjuvant chemotherapy were required to be lymph node positive or to have invasive disease measuring at least 2 cm, while those who received neoadjuvant chemotherapy were required to have residual invasive disease. Hormone receptor–positive patients who received adjuvant chemotherapy were required to have at least 4 pathologically confirmed involved lymph nodes, and those who received neoadjuvant chemotherapy were required to have residual invasive disease.
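Because the eligibility rules branch on receptor status and chemotherapy setting, a compact restatement may help readers parse them. The following is a minimal, illustrative Python sketch of the high-risk criteria exactly as summarized above; the function and parameter names are hypothetical, and this is not the trial's full protocol definition.

```python
def meets_high_risk_criteria(hormone_receptor_positive: bool,
                             neoadjuvant_chemo: bool,
                             residual_invasive_disease: bool = False,
                             node_positive: bool = False,
                             positive_node_count: int = 0,
                             tumor_size_cm: float = 0.0) -> bool:
    """Sketch of the high-risk eligibility rules as described in the text.

    Assumes the patient is HER2-negative, carries a germline BRCA1 or BRCA2
    pathogenic/likely pathogenic variant, and has completed local therapy and
    neoadjuvant or adjuvant chemotherapy.
    """
    if neoadjuvant_chemo:
        # After neoadjuvant chemotherapy, both triple-negative and hormone
        # receptor-positive patients were required to have residual invasive disease.
        return residual_invasive_disease
    if hormone_receptor_positive:
        # Hormone receptor-positive patients treated with adjuvant chemotherapy
        # needed at least 4 pathologically confirmed involved lymph nodes.
        return positive_node_count >= 4
    # Triple-negative patients treated with adjuvant chemotherapy needed to be
    # node positive or to have invasive disease of at least 2 cm.
    return node_positive or tumor_size_cm >= 2.0
```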
Main outcome measures. The primary endpoint for the study was invasive disease-free survival, which was defined as the time from randomization to the date of recurrence or death from any cause. The secondary endpoints included overall survival (OS), distant disease-free survival, and the safety and tolerability of olaparib.
Main results. At the time of data cutoff, 284 events had occurred, with a median follow-up of 2.5 years in the intention-to-treat population. A total of 81% of patients had triple-negative breast cancer. Most patients (94% in the olaparib group and 92% in the placebo group) received both taxane- and anthracycline-based chemotherapy regimens. Platinum-based chemotherapy was used in 26% of patients in each group. The groups were otherwise well balanced. Germline mutations in BRCA1 were present in 72% of patients and in BRCA2 in 27% of patients; these were balanced between groups.
At the time of this analysis, adjuvant olaparib reduced the risk of invasive disease or death by 42% compared with placebo (P < .001). At 3 years, invasive disease-free survival was 85.9% in the olaparib group and 77.1% in the placebo group (difference, 8.8 percentage points; 95% CI, 4.5-13.0; hazard ratio [HR], 0.58; 99.5% CI, 0.41-0.82; P < .001). The 3-year distant disease-free survival was 87.5% in the olaparib group and 80.4% in the placebo group (HR, 0.57; 99.5% CI, 0.39-0.83; P < .001). Olaparib was also associated with fewer deaths than placebo (59 vs 86; HR, 0.68; 99% CI, 0.44-1.05; P = .02); however, this difference did not meet the prespecified threshold for statistical significance at this interim analysis. Subgroup analysis showed a consistent benefit across all groups, with no difference noted by BRCA mutation, hormone receptor status, or use of neoadjuvant vs adjuvant chemotherapy.
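To express the 3-year invasive disease-free survival difference in absolute terms, a back-of-the-envelope calculation (illustrative only, not an endpoint reported by the trial) converts the 8.8-percentage-point gap into an approximate number needed to treat:

```python
# Rough arithmetic from the 3-year invasive disease-free survival rates above.
olaparib_idfs = 0.859   # olaparib group
placebo_idfs = 0.771    # placebo group

absolute_risk_reduction = olaparib_idfs - placebo_idfs  # 0.088, i.e., 8.8 points
number_needed_to_treat = 1 / absolute_risk_reduction    # about 11.4

print(f"ARR = {absolute_risk_reduction:.3f}; NNT ~ {number_needed_to_treat:.1f}")
# Roughly 11-12 patients would need adjuvant olaparib for 1 additional patient
# to remain free of invasive disease at 3 years, under this simplified reading.
```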
The side effects were consistent with the known safety profile of olaparib. Adverse events of grade 3 or higher that were more common with olaparib included anemia (8.7%), leukopenia (3%), and fatigue (1.8%). Early discontinuation of the trial regimen due to adverse events or disease recurrence occurred in 25.9% of patients in the olaparib group and 20.7% in the placebo group. Blood transfusions were required in 5.8% of patients in the olaparib group. Myelodysplasia or acute myeloid leukemia was observed in 2 patients in the olaparib group and 3 patients in the placebo group. Adverse events leading to death occurred in 1 patient in the olaparib group and 2 patients in the placebo group.
Conclusion. Among patients with high-risk, HER2-negative early breast cancer and germline BRCA1 or BRCA2 pathogenic or likely pathogenic variants, adjuvant olaparib after completion of local treatment and neoadjuvant or adjuvant chemotherapy was associated with significantly longer invasive disease-free and distant disease-free survival compared with placebo.
Commentary
The results from the current OlympiA trial provide the first evidence that adjuvant therapy with poly(adenosine diphosphate–ribose) polymerase (PARP) inhibitors can improve outcomes in high-risk, HER2-negative breast cancer in patients with pathogenic BRCA1 and BRCA2 mutations. The OS data, while favoring olaparib, are not yet mature at the time of this analysis. Nevertheless, these results represent an important step forward in improving outcomes in this patient population. The efficacy and safety of PARP inhibitors in BRCA-mutated breast cancer have previously been shown in patients with advanced disease, leading to FDA approval of both olaparib and talazoparib in this setting.1,2 With the current results, PARP inhibitors will certainly play an important role in the adjuvant setting in patients with deleterious BRCA1 or BRCA2 mutations at high risk for relapse. Importantly, the side effect profile appears acceptable, with no unexpected events and a very low rate of secondary myeloid malignancies.
Subgroup analysis appears to indicate a benefit across all groups, including hormone receptor–positive disease and triple-negative breast cancer. Interestingly, approximately 25% of patients in both cohorts received platinum-based chemotherapy, and the efficacy of adjuvant olaparib did not appear to be affected by prior use of platinum-containing regimens. It is important to note that postneoadjuvant capecitabine in triple-negative patients, per the results of the CREATE-X trial, was not permitted in the current study, although this approach has been widely adopted in clinical practice.3 Because the CREATE-X trial did not report the benefit of adjuvant capecitabine in the BRCA-mutated cohort, it is not clear how this subgroup fares with that approach. As the authors point out, one therefore cannot extrapolate the relative efficacy of olaparib compared with capecitabine, and whether to use capecitabine and/or olaparib in triple-negative patients with residual invasive disease after neoadjuvant chemotherapy remains unclear at this time.
Nevertheless, the magnitude of benefit seen in this trial certainly provides clinically relevant and potentially practice-changing results. It will be imperative to follow these results as the survival data mature and to ensure that no further long-term toxicities, particularly secondary myeloid malignancies, develop. These results should be discussed with each patient, and informed decisions regarding the use of adjuvant olaparib should be considered for this patient population. Lastly, these results highlight the importance of germline testing for patients with breast cancer in accordance with national guideline recommendations. Moreover, they raise the question of whether it is time to expand our current germline testing guidelines to identify all patients who may benefit from this therapy.
Application for Clinical Practice
Adjuvant olaparib in high-risk patients with germline BRCA1 or BRCA2 mutations improves invasive and distant disease-free survival and should be considered in patients who meet the enrollment criteria of the current study. Furthermore, this highlights the importance of appropriate germline genetic testing in patients with breast cancer.
Financial disclosures: None.
1. Robson M, Im SA, Senkus E, et al. Olaparib for metastatic breast cancer in patients with a germline BRCA mutation. N Engl J Med. 2017;377(6):523-533. doi:10.1056/NEJMoa1706450
2. Litton JK, Rugo HS, Ettl J, et al. Talazoparib in patients with advanced breast cancer and a germline BRCA mutation. N Engl J Med. 2018;379(8):753-763. doi:10.1056/NEJMoa1802905
3. Masuda N, Lee SJ, Ohtani S, et al. Adjuvant capecitabine for breast cancer after preoperative chemotherapy. N Engl J Med. 2017;376(22):2147-2159. doi:10.1056/NEJMoa1612645
HCV screening in pregnancy: Reducing the risk for casualties in the quest for elimination
Because hepatitis C virus (HCV) infection is typically asymptomatic, its presence can easily be overlooked without appropriate screening efforts. For those screening efforts to be effective, they must keep pace with the changing demographic face of this increasingly prevalent but treatable disease.
Perhaps the most dramatic shift in HCV demographics in recent years has been the increase of infections among those born after 1965, a trend primarily driven by the opioid epidemic. In addition, data from the National Notifiable Diseases Surveillance System show that cases of diagnosed HCV doubled among women of childbearing age from 2006 to 2014, with new infections in younger women surpassing those in older age groups.
With such trends in mind, the Centers for Disease Control and Prevention broadened their recommendations regarding HCV in 2020 to include one-time testing in all adults aged 18 years and older and screening of all pregnant women during each pregnancy, except where the prevalence of infection is less than 0.1%, a threshold that no state has yet achieved.
The US Preventive Services Task Force (USPSTF) subsequently followed suit in their own recommendations.
The American Association for the Study of Liver Diseases/Infectious Diseases Society of America have long advocated for extensive expansion in their screening recommendations for HCV, including pregnancy.
Although the American College of Obstetricians and Gynecologists and the Society for Maternal-Fetal Medicine did not immediately adopt these recommendations, they have since endorsed them in May 2021 and June 2021, respectively.
The hepatologist perspective
As a practicing hepatologist, I see this as an uncontroversial recommendation. Obstetricians already screen for hepatitis B virus in each pregnancy, so it should be easy to add HCV testing to the same lab order.
Risk-based screening has repeatedly been demonstrated to be ineffective. It should be easier to test all women than to ask prying questions about high-risk behaviors.
Given the increase of injection drug use and resultant HCV infections in women of childbearing age, this seems like a perfect opportunity to identify chronically infected women and counsel them on transmission and cure. And pregnancy is also unique in that it is a time of near-universal health coverage.
Let’s address some of the operational issues.
The diagnostic cascade for HCV can be made very easy. HCV antibody testing is our standard screening test and, when positive, can automatically reflex to HCV polymerase chain reaction (PCR), the diagnostic test. Thus, with one blood sample, you can both screen for and diagnose infection.
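As a rough illustration of that one-sample reflex cascade, the screen-then-confirm logic can be sketched as follows; the function name and result labels are hypothetical, and this is a conceptual sketch rather than a clinical decision tool.

```python
def hcv_reflex_screen(antibody_positive, rna_detected_by_pcr=None):
    """Sketch of HCV reflex testing on a single blood sample:
    screen with the antibody test, and only if it is positive,
    automatically run the diagnostic PCR on the same sample."""
    if not antibody_positive:
        return "antibody negative: no evidence of HCV infection"
    # Antibody positive: the lab reflexes to HCV RNA (PCR) on the same sample.
    if rna_detected_by_pcr:
        return "antibody positive, RNA detected: current (viremic) infection"
    return "antibody positive, RNA not detected: past or resolved infection"

# Example: an antibody-positive sample whose reflex PCR detects HCV RNA.
print(hcv_reflex_screen(antibody_positive=True, rna_detected_by_pcr=True))
```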
Current guidelines do not recommend treating HCV during pregnancy, although therapy can be considered on an individual basis. Linkage to a knowledgeable provider who can discuss transmission and treatment, as well as assess the stage of liver injury, should decrease the burden on the ob.gyn.
The impact on pregnancy is marginal. HCV should not change either the mode of delivery or the decision to breastfeed. The AASLD/IDSA guidance outlines only four recommendations for monitoring during pregnancy:
- Obtain HCV RNA to see whether the infection is active and assess liver function at initiation of prenatal care.
- Prenatal care should be tailored to the pregnancy. There is no modification recommended to decrease mother-to-child transmission (MTCT).
- Be aware that intrahepatic cholestasis of pregnancy is more common with HCV.
- Women with cirrhosis have a higher rate of adverse outcomes and should be linked to a high-risk obstetrics specialist.
But of course, what seems easy to one specialist may not be true of another. With that in mind, let’s hear the ob.gyn. perspective on these updated screening recommendations.
The ob.gyn. perspective
Recent guidelines from the CDC, ACOG, and SMFM recommend universal screening for HCV in all pregnant women. The increased availability of highly effective antiviral regimens makes universal screening a logical strategy, especially to identify candidates for this curative treatment. What is questionable, however, is the recommended timing of this screening.
HCV screening during pregnancy, as currently recommended, provides no immediate benefit for the pregnant woman or the fetus/neonate, given that antiviral treatments have not been approved during gestation, and there are no known measures that decrease MTCT or change routine perinatal care.
We also must not forget that a significant proportion of women in the United States, particularly those with limited resources, do not receive prenatal care at all. Most of them, however, will present to a hospital for delivery. Consequently, compliance with screening might be higher if performed at the time of delivery rather than antepartum.
Deferring screening until the intrapartum or immediate postpartum period, at least until antiviral treatment during pregnancy becomes a reality, was discussed. The rationale was that this approach might obviate the need to deal with the unintended consequences and burden of testing for HCV during pregnancy. Ultimately, ACOG and SMFM fell in line with the CDC recommendations.
Despite the lack of robust evidence regarding the risk for MTCT associated with commonly performed obstetric procedures (for example, genetic amniocentesis, artificial rupture of the membranes during labor, placement of an intrauterine pressure catheter), clinicians may be reluctant to perform them in HCV-infected women, resulting in potential deviations from the obstetric standard of care.
Similarly, it is likely that patients may choose to have a cesarean delivery for the sole purpose of decreasing MTCT, despite the lack of evidence for this. Such ill-advised patient-driven decisions are increasingly likely in the current environment, where social media can rapidly disseminate misinformation.
Implications for pediatric patients
One cannot isolate HCV screening in pregnancy from the consequences that may potentially occur as part of the infant’s transition to the care of a pediatrician.
Even though MTCT is estimated to occur in just 5%-15% of cases, all children born to HCV viremic mothers should be screened for HCV.
Traditionally, screening for HCV antibodies occurred after 18 months of age. In those who test positive, HCV PCR testing is recommended at 3 years. However, this algorithm is being called into question because only approximately one-third of infants are successfully screened.
HCV RNA testing in the first year after birth has been suggested. However, even proponents of this approach concur that all management decisions should be deferred until after the age of 3 years, when medications are approved for pediatric use.
In addition, HCV testing would be required again before considering therapy because children have higher rates of spontaneous clearance.
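The traditional pediatric timeline described above, together with the suggested earlier RNA testing, can be summarized in a short illustrative sketch; ages and steps follow the text, and the function is hypothetical, not clinical guidance.

```python
def next_pediatric_hcv_step(age_years, antibody_positive=None):
    """Illustrative walk-through of the traditional screening pathway described
    in the text for a child born to an HCV-viremic mother."""
    if age_years < 1.5:
        # The traditional approach defers antibody testing until after 18 months;
        # HCV RNA testing in the first year has been suggested as an alternative.
        return "defer antibody testing until after 18 months (early HCV RNA testing has been suggested)"
    if antibody_positive is None:
        return "screen for HCV antibodies"
    if not antibody_positive:
        return "antibody negative: no further testing under this pathway"
    if age_years < 3:
        return "antibody positive: HCV PCR recommended at age 3"
    # Spontaneous clearance is common in children, so repeat testing is needed
    # before any treatment decision (pediatric therapy is approved only after age 3).
    return "antibody positive: HCV PCR now; retest before considering treatment"

# Example: an antibody-positive 3-year-old.
print(next_pediatric_hcv_step(3, antibody_positive=True))
```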
Seeking consensus beyond the controversy
Controversy remains surrounding the most recent update to the HCV screening guidelines. The current recommendation to screen during pregnancy cannot modify the risk for MTCT, has no impact on decisions regarding mode of delivery or breastfeeding, and could potentially cause harm by making obstetricians defer necessary invasive procedures even though there are no data linking them to an increase in MTCT.
Yet after extensive debate, the CDC, USPSTF, AASLD/IDSA, ACOG, and SMFM all developed their current recommendations to initiate HCV screening during pregnancy. To make this successful, screening algorithms need to be simple and consistent across all society recommendations.
HCV antibody testing should always reflex to the diagnostic test (HCV PCR) to allow confirmation in those who test positive without requiring an additional blood test. Viremic mothers (those who are HCV positive on PCR) should be linked to a provider who can discuss prognosis, transmission, and treatment. The importance of screening the infant also must be communicated to the parents and pediatrician alike.
Dr. Reau has served as a director, officer, partner, employee, adviser, consultant, or trustee for AbbVie, Gilead, Arbutus, Intercept, and Salix; received research grants from AbbVie and Gilead; and received income from AASLD. Dr. Pacheco disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Rosacea is in the eye of the beholder, expert says
In the clinical experience of Emmy Graber, MD, MBA, rosacea is in the eye of the beholder.
“It’s not really up to us as the providers as to what’s important to the patient or how bad their rosacea is,” she said during MedscapeLive’s annual Las Vegas Dermatology Seminar. “It really is up to the patient,” added Dr. Graber, president of The Dermatology Institute of Boston, who recommends asking patients about how severe they consider their rosacea to be, and what about rosacea bothers them most. Their responses may be surprising.
A study published in 2017 showed that complete resolution of even mild rosacea prolongs remission and, most importantly, improves quality of life for patients. “So, don’t discount what you consider to be mild rosacea in patients,” she said.
Skin care recommendations
“And don’t forget about basic skin care,” she advised. A recently published Chinese study of 999 rosacea patients and 1,010 controls with healthy skin found that a high frequency of cleansing and expansive use of cleansers were positively correlated with rosacea occurrence, suggesting that overcleansing can be a risk factor for rosacea. “Ask your patient, ‘how often are you cleaning your face?’ ” Dr. Graber suggested. “You might find that they’re overdoing it by washing three or four times a day. Several studies have shown that basic skin care alone improves rosacea.”
Skin care recommendations for patients with rosacea include avoiding chemical or physical exfoliants and alcohol-based topical products, and moisturizing and washing their faces with mild, synthetic detergent-based products rather than traditional soaps, which may further alkalinize and irritate the skin. “Patients should also be counseled to use physical-based sunscreens rather than chemical-based sunscreens,” she said.
Treating erythema
For treating erythema with topicals, a systematic review published in 2019 found the most evidence for brimonidine 0.33% gel, an alpha2-adrenergic agonist, and oxymetazoline 1% cream, an alpha1-adrenergic agonist. “Both of these products functionally constrict facial blood vessels,” and are Food and Drug Administration approved for treating persistent erythema, Dr. Graber said. “These products improve erythema within 3 hours of and up to 12 hours after application and overall, they are well tolerated.”
Based on clinical trial results, about 15% of patients on brimonidine report adverse reactions such as dermatitis, burning, pruritus, and erythema, compared with 8% of patients on oxymetazoline. At the same time, up to 20% of individuals on brimonidine report rebound erythema, compared with fewer than 1% of those using oxymetazoline. Laser and light therapies such as pulsed-dye lasers, potassium titanyl phosphate lasers, and intense pulsed light devices are also effective in treating persistent erythema but are less effective for transient flushing.
Treatment of papules and pustules
For treating papules and pustules, the 2019 systematic review also found high-certainty evidence for using azelaic acid and topical ivermectin, and moderate-certainty evidence for using topical metronidazole and topical minocycline. “Topical ivermectin was demonstrated to be the most effective topical treatment for papulopustular rosacea and to provide the greatest psychological benefit to these patients,” Dr. Graber said.
In a double-blind, multicenter 15-week trial comparing azelaic acid 15% gel with metronidazole 0.75% gel in patients with papulopustular rosacea, both agents were found to be effective. But those treated with azelaic acid 15% gel had a greater reduction in lesion counts and erythema, and improvement in global assessments, compared with metronidazole 0.75% gel. However, the azelaic acid 15% gel was associated with more stinging compared with metronidazole 0.75% gel, although it was usually transient.
Another study, a double-blind, single-center, 15-week trial, compared the efficacy of azelaic acid 20% cream with metronidazole 0.75% cream. Both agents were found to be effective and had similar levels of reductions in papules and pustules. However, patients in the azelaic acid 20% cream arm had significantly higher physician ratings of global improvement, as well as overall higher patient satisfaction.
More recently, a phase 3 study of 962 patients found that ivermectin 1% cream once daily improved quality of life slightly more than metronidazole 0.75% cream twice daily. No difference in adverse events was noted between the two agents.
Other options for treating papules and pustules include topical minocycline 1.5% foam, which is FDA approved for rosacea, as well as second-line agents topical sodium sulfacetamide with sulfur cleanser (cream or lotion), and permethrin, Dr. Graber said.
As for treating papules and pustules with oral agents, the strongest evidence favors oral tetracyclines and isotretinoin, she noted.
Doxycycline, minocycline, tetracycline, and sarecycline can be used as monotherapy or coadministered with topical agents. “The addition of topical agents may also help to shorten the duration of antibiotic use, which is very important,” Dr. Graber said.
She noted that oral beta-blockers might be useful to treat persistent erythema and flushing because they antagonize the effects of sympathetic nerve stimulation and circulating catecholamines at β-adrenoceptors. Carvedilol and propranolol have been the most studied. The most common potential side effects are hypotension and bradycardia.
Dr. Graber disclosed that she is a consultant/adviser for Digital Diagnostics, Almirall, Hovione, Keratin Biosciences, La Roche Posay, Ortho Dermatologics, Sebacia, Sol-Gel, Verrica, and WebMD. She is also a research investigator for Hovione, Ortho Dermatologics, Sebacia, and she receives royalties from Wolters Kluwer Health.
MedscapeLive and this news organization are owned by the same parent company.
FROM MEDSCAPELIVE LAS VEGAS DERMATOLOGY SEMINAR

