Development, implementation, and impact of an automated early warning and response system for sepsis

There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]

Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.

Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.

METHODS

Setting and Data Sources

The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.

Development of the Intervention

The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.

To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome criteria (temperature <36°C or >38°C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4,000 or >12,000 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]

To establish a threshold for triggering the system, we used a derivation cohort of patients admitted between October 1, 2011, and October 31, 2011, to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criterion, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could meet at any single time were 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.
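The scoring logic above can be sketched as follows. The data layout, field names, and helper function are hypothetical; the criteria thresholds and the look‐back windows (24 hours for VSs, 48 hours for labs) are those described in the text.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the EWRS risk score: the count of the six criteria
# met simultaneously, using the most recent value of each parameter within
# its look-back window. Field names are assumptions, not the study's schema.

VITAL_LOOKBACK = timedelta(hours=24)
LAB_LOOKBACK = timedelta(hours=48)

def latest_value(observations, now, lookback):
    """Most recent value among (time, value) pairs in the look-back window, else None."""
    in_window = [(t, v) for t, v in observations if now - lookback <= t <= now]
    return max(in_window)[1] if in_window else None

def ewrs_score(patient, now):
    """Number of EWRS criteria met at time `now` (0 to 6)."""
    temp = latest_value(patient["temp_c"], now, VITAL_LOOKBACK)
    hr = latest_value(patient["heart_rate"], now, VITAL_LOOKBACK)
    rr = latest_value(patient["resp_rate"], now, VITAL_LOOKBACK)
    paco2 = latest_value(patient["paco2"], now, LAB_LOOKBACK)
    wbc = latest_value(patient["wbc"], now, LAB_LOOKBACK)
    bands = latest_value(patient["bands_pct"], now, LAB_LOOKBACK)
    sbp = latest_value(patient["sbp"], now, VITAL_LOOKBACK)
    lactate = latest_value(patient["lactate"], now, LAB_LOOKBACK)

    criteria = [
        # Four SIRS criteria
        temp is not None and (temp < 36 or temp > 38),
        hr is not None and hr > 90,
        (rr is not None and rr > 20) or (paco2 is not None and paco2 < 32),
        (wbc is not None and (wbc < 4000 or wbc > 12000))
            or (bands is not None and bands > 10),
        # Two organ dysfunction criteria
        sbp is not None and sbp < 100,
        lactate is not None and lactate > 2.2,
    ]
    return sum(criteria)
```

A stale vital sign (outside its 24‐hour window) simply drops out of the score rather than counting against the patient.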

The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.

Implementation of the EWRS

All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).

The Preimplementation (Silent) Period and EWRS Validation

The EWRS was initially activated for a preimplementation silent period (June 6, 2012, to September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.

The Postimplementation (Live) Period and Impact Analysis

The EWRS went live September 12, 2012, upon which new admissions triggering the alert resulted in a notification and response. Unadjusted analyses using the χ2 test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012, to September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013, to September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and post‐sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.
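For the dichotomous comparisons, the Pearson χ2 statistic for a 2×2 table has a closed form. A minimal sketch, using the antibiotic‐ordering counts from Table 2 (the Wilcoxon rank sum test for continuous measures is not shown):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g., rows = period, cols = yes/no."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# IV/PO antibiotic orders <3 h after alert (Table 2):
# 75 of 595 alerts in the silent period, 123 of 545 in the live period.
stat = chi2_2x2(75, 595 - 75, 123, 545 - 123)
# The statistic exceeds 6.63, the df = 1 critical value at alpha = 0.01,
# consistent with the reported P < 0.01.
```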

Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.

The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).

RESULTS

In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest is described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive rate, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.

In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services were similar (Table 1).

Descriptive Statistics of the Study Population Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

| | Preimplementation | Postimplementation | P Value |
| --- | --- | --- | --- |
| No. of encounters | 15,567 | 15,526 | |
| No. of alerts | 595 (4%) | 545 (4%) | 0.14 |
| Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04 |
| Female | 298 (50%) | 274 (50%) | 0.95 |
| Race | | | |
| White | 343 (58%) | 312 (57%) | 0.14 |
| Black | 207 (35%) | 171 (31%) | |
| Other | 23 (4%) | 31 (6%) | |
| Unknown | 22 (4%) | 31 (6%) | |
| Admission type | | | |
| Elective | 201 (34%) | 167 (31%) | 0.40 |
| ED | 300 (50%) | 278 (51%) | |
| Transfer | 94 (16%) | 99 (18%) | |
| BMI, kg/m², median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24 |
| Previous ICU admission | 137 (23%) | 127 (23%) | 0.91 |
| RRT before alert | 27 (5%) | 20 (4%) | 0.46 |
| Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04 |
| Admitting service | | | |
| Medicine | 398 (67%) | 364 (67%) | 0.18 |
| Surgery | 173 (29%) | 169 (31%) | |
| Other | 24 (4%) | 12 (2%) | |
| Service where alert fired | | | |
| Medicine | 391 (66%) | 365 (67%) | 0.18 |
| Surgery | 175 (29%) | 164 (30%) | |
| Other | 29 (5%) | 15 (3%) | |

NOTE: Abbreviations: BMI, body mass index; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; RRT, rapid response team; y, years.

In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)

In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).

Clinical Process Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

| | Preimplementation | Postimplementation | P Value |
| --- | --- | --- | --- |
| No. of alerts | 595 | 545 | |
| 500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01 |
| IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01 |
| IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01 |
| Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01 |
| Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01 |
| Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28 |
| CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65 |
| Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35 |
| Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92 |
| RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01 |
| Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30 |
| AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70 |
| Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58 |
| CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02 |
| CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31 |
| Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02 |

NOTE: Abbreviations: ABD, abdomen; AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; ECG, electrocardiogram; h, hours; IV, intravenous; PO, oral; RBC, red blood cell.
Adjusted Analysis for Clinical Process Measures for All Patients and Those Discharged With a Sepsis Diagnosis

| | All Alerted Patients: Unadjusted OR | All Alerted Patients: Adjusted OR | Sepsis Code*: Unadjusted OR | Sepsis Code*: Adjusted OR |
| --- | --- | --- | --- | --- |
| 500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47) |
| IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09) |
| IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58) |
| Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34) |
| Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20) |
| Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92) |
| BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50) |
| Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25) |
| Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10) |
| Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10) |
| AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20) |
| Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00) |
| CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43) |
| CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79) |
| Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41) |

NOTE: Odds ratios (OR) compare the odds of the outcome after versus before implementation of the early warning system. Abbreviations: AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; h, hours; IV, intravenous; PO, oral. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for log‐transformed age, gender, log‐transformed Charlson index at admission, admitting service, hospital, and admission month.

Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).

Clinical Outcome Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

| | Preimplementation | Postimplementation | P Value |
| --- | --- | --- | --- |
| No. of alerts | 595 | 545 | |
| Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92 |
| ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72 |
| ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06 |
| ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20 |
| ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93 |
| Time to first ICU after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22 |
| RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51 |
| Mortality of all patients | 52 (9%) | 41 (8%) | 0.45 |
| Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19 |
| Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47 |
| Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22 |
| Discharge to home | 347 (58%) | 351 (64%) | 0.04 |
| Disposition location | | | |
| Home | 347 (58%) | 351 (64%) | 0.25 |
| SNF | 89 (15%) | 65 (12%) | |
| Rehab | 24 (4%) | 20 (4%) | |
| LTC | 8 (1%) | 9 (2%) | |
| Other hospital | 16 (3%) | 6 (1%) | |
| Expired | 52 (9%) | 41 (8%) | |
| Hospice IP | 42 (7%) | 31 (6%) | |
| Hospice other | 11 (2%) | 14 (3%) | |
| Other location | 6 (1%) | 8 (1%) | |
| Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02 |
| Sepsis O/E | 1.37 | 1.06 | 0.18 |

NOTE: Abbreviations: h, hours; ICU, intensive care unit; IP, inpatient; IQR, interquartile range; LOS, length of stay; LTC, long‐term care; O/E, observed to expected; Rehab, rehabilitation; RRT, rapid response team; SNF, skilled nursing facility.
Adjusted Analysis for Clinical Outcome Measures for All Patients and Those Discharged With a Sepsis Diagnosis

| | All Alerted Patients: Unadjusted Estimate | All Alerted Patients: Adjusted Estimate | Sepsis Code*: Unadjusted Estimate | Sepsis Code*: Adjusted Estimate |
| --- | --- | --- | --- | --- |
| Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16) |
| ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25) |
| Time to first ICU transfer after alert, h | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90) |
| ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21) |
| RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43) |
| Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) |
| Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26) |
| Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29) |
| Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84) |
| Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA |

NOTE: Estimates compare the mean, odds, or hazard of the outcome after versus before implementation of the early warning system; estimates are reported as coefficients (lengths of stay), odds ratios (dichotomous outcomes), or hazard ratios (time to ICU transfer), per the regression models described in the Methods. Abbreviations: h, hours; ICU, intensive care unit; LOS, length of stay; NA, not applicable; RRT, rapid response team. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for gender, age, present on admission Charlson comorbidity score, admitting service, hospital, and admission month (June+July or August+September).
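The sepsis mortality index reported above is an observed‐to‐expected (O/E) ratio, conventionally computed as observed deaths divided by the sum of per‐patient expected mortality probabilities. A minimal sketch with invented numbers (not the study's data):

```python
def oe_mortality_index(observed_deaths, expected_probs):
    """Observed-to-expected mortality ratio. Values above 1.0 indicate more
    deaths than the risk model predicted for the cohort."""
    return observed_deaths / sum(expected_probs)

# Hypothetical cohort: 100 patients, each with a 10% expected mortality
# (expected deaths = 10); 11 observed deaths gives O/E = 1.1.
index = oe_mortality_index(11, [0.10] * 100)
```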

In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.

The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).

DISCUSSION

This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to trigger electronically a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation, and an indication of a decreased sepsis mortality index and mortality, and increased discharge to home, although none of these latter 3 findings reached statistical significance.

Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).

Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]

Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.

CONCLUSION

By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.

Acknowledgements

The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer; Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing, and operationalizing the EWRS examined in this study; their assistance in data acquisition; and their advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 16–20, 2013, Washington, DC.

Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.

References
  1. Gaieski DF, Edwards JM, Kallan MJ, Carr BG. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med. 2013;41(5):1167–1174.
  2. Dellinger RP, Levy MM, Rhodes A, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
  3. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline‐based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367–374.
  4. Otero RM, Nguyen HB, Huang DT, et al. Early goal‐directed therapy in severe sepsis and septic shock revisited: concepts, controversies, and contemporary findings. Chest. 2006;130(5):1579–1595.
  5. Rivers E, Nguyen B, Havstad S, et al. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
  6. Whittaker SA, Mikkelsen ME, Gaieski DF, Koshy S, Kean C, Fuchs BD. Severe sepsis cohorts derived from claims‐based strategies appear to be biased toward a more severely ill patient population. Crit Care Med. 2013;41(4):945–953.
  7. Bailey TC, Chen Y, Mao Y, et al. A trial of a real‐time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236–242.
  8. Jones S, Mullally M, Ingleby S, Buist M, Bailey M, Eddleston JM. Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with an Early Warning Score protocol. Crit Care Resusc. 2011;13(2):83–88.
  9. Nelson JL, Smith BL, Jared JD, Younger JG. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
  10. Sawyer AM, Deal EN, Labelle AJ, et al. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
  11. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644–1655.
  12. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Crit Care Med. 2003;31(4):1250–1256.
  13. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588–1597.
  14. Bing‐Hua YU. Delayed admission to intensive care unit for critically surgical patients is associated with increased mortality. Am J Surg. 2014;208:268–274.
  15. Cardoso LT, Grion CM, Matsuo T, et al. Impact of delayed admission to intensive care units on mortality of critically ill patients: a cohort study. Crit Care. 2011;15(1):R28.
Journal of Hospital Medicine. 10(1):26–31.

There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]

Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.

Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.

METHODS

Setting and Data Sources

The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.

Development of the Intervention

The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.

To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome criteria (temperature <36°C or >38°C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4000 or >12,000 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]

To establish a threshold for triggering the system, a derivation cohort was used, defined as patients admitted from October 1, 2011 to October 31, 2011 to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criterion, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could meet at any single time were 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.
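The scoring logic above lends itself to a short sketch. The following is an illustrative reconstruction only, not the production EWRS code; the observation field names (`temp_c`, `heart_rate`, `paco2`, `wbc`, `bands_pct`, `sbp`, `lactate`) are hypothetical, while the thresholds are those stated in the text:

```python
def ewrs_score(obs):
    """Count how many of the 6 severe sepsis criteria are currently met.

    `obs` maps a measurement name to its most recent value (vital signs
    within the 24-hour look-back window, labs within 48 hours); a missing
    measurement simply contributes no point.
    """
    def has(key):
        return obs.get(key) is not None

    criteria = [
        # Systemic inflammatory response syndrome (SIRS) criteria
        has("temp_c") and (obs["temp_c"] < 36 or obs["temp_c"] > 38),
        has("heart_rate") and obs["heart_rate"] > 90,
        (has("resp_rate") and obs["resp_rate"] > 20)
            or (has("paco2") and obs["paco2"] < 32),
        (has("wbc") and (obs["wbc"] < 4000 or obs["wbc"] > 12000))
            or (has("bands_pct") and obs["bands_pct"] > 10),
        # Organ dysfunction and hypoperfusion criteria
        has("sbp") and obs["sbp"] < 100,
        has("lactate") and obs["lactate"] > 2.2,
    ]
    return sum(criteria)  # 0 through 6 criteria met
```

A patient whose score reached the chosen threshold at any single point in time would count as a trigger; as noted above, later triggers during the same hospital stay were censored.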

The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.

Implementation of the EWRS

All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).

The Preimplementation (Silent) Period and EWRS Validation

The EWRS was initially activated for a preimplementation silent period (June 6, 2012–September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.

The Postimplementation (Live) Period and Impact Analysis

The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the χ2 test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012–September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013–September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and post‐sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.

Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.

The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).

RESULTS

In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest is described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive rate, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.

In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services was similar (Table 1).

Descriptive Statistics of the Study Population Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

Measure | Preimplementation | Postimplementation | P Value
No. of encounters | 15,567 | 15,526 | –
No. of alerts | 595 (4%) | 545 (4%) | 0.14
Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04
Female | 298 (50%) | 274 (50%) | 0.95
Race: White | 343 (58%) | 312 (57%) | 0.14
Race: Black | 207 (35%) | 171 (31%) | –
Race: Other | 23 (4%) | 31 (6%) | –
Race: Unknown | 22 (4%) | 31 (6%) | –
Admission type: Elective | 201 (34%) | 167 (31%) | 0.40
Admission type: ED | 300 (50%) | 278 (51%) | –
Admission type: Transfer | 94 (16%) | 99 (18%) | –
BMI, kg/m2, median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24
Previous ICU admission | 137 (23%) | 127 (23%) | 0.91
RRT before alert | 27 (5%) | 20 (4%) | 0.46
Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04
Admitting service: Medicine | 398 (67%) | 364 (67%) | 0.18
Admitting service: Surgery | 173 (29%) | 169 (31%) | –
Admitting service: Other | 24 (4%) | 12 (2%) | –
Service where alert fired: Medicine | 391 (66%) | 365 (67%) | 0.18
Service where alert fired: Surgery | 175 (29%) | 164 (30%) | –
Service where alert fired: Other | 29 (5%) | 15 (3%) | –

NOTE: Abbreviations: BMI, body mass index; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; RRT, rapid response team; Y, years.

In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)

In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).

Clinical Process Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

Measure | Preimplementation | Postimplementation | P Value
No. of alerts | 595 | 545 | –
500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01
IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01
IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01
Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01
Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01
Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28
CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65
Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35
Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92
RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01
Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30
AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70
Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58
CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02
CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31
Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02

NOTE: Abbreviations: ABD, abdomen; AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; ECG, electrocardiogram; H, hours; IV, intravenous; PO, oral; RBC, red blood cell.

Adjusted Analysis for Clinical Process Measures for All Patients and Those Discharged With a Sepsis Diagnosis

Measure | All Alerted Patients: Unadjusted Odds Ratio | All Alerted Patients: Adjusted Odds Ratio | Discharged With Sepsis Code*: Unadjusted Odds Ratio | Discharged With Sepsis Code*: Adjusted Odds Ratio
500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47)
IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09)
IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58)
Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34)
Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20)
Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92)
BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50)
Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25)
Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10)
Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10)
AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20)
Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00)
CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43)
CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79)
Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41)

NOTE: Odds ratios compare the odds of the outcome after versus before implementation of the early warning system. Abbreviations: AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; H, hours; IV, intravenous; PO, oral. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for log‐transformed age, gender, log‐transformed Charlson index at admission, admitting service, hospital, and admission month.

Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).

Clinical Outcome Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

Measure | Preimplementation | Postimplementation | P Value
No. of alerts | 595 | 545 | –
Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92
ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72
ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06
ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20
ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93
Time to first ICU transfer after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22
RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51
Mortality of all patients | 52 (9%) | 41 (8%) | 0.45
Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19
Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47
Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22
Discharge to home | 347 (58%) | 351 (64%) | 0.04
Disposition location: Home | 347 (58%) | 351 (64%) | 0.25
Disposition location: SNF | 89 (15%) | 65 (12%) | –
Disposition location: Rehab | 24 (4%) | 20 (4%) | –
Disposition location: LTC | 8 (1%) | 9 (2%) | –
Disposition location: Other hospital | 16 (3%) | 6 (1%) | –
Disposition location: Expired | 52 (9%) | 41 (8%) | –
Disposition location: Hospice IP | 42 (7%) | 31 (6%) | –
Disposition location: Hospice other | 11 (2%) | 14 (3%) | –
Disposition location: Other location | 6 (1%) | 8 (1%) | –
Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02
Sepsis O/E | 1.37 | 1.06 | 0.18

NOTE: Abbreviations: H, hours; ICU, intensive care unit; IP, inpatient; IQR, interquartile range; LOS, length of stay; LTC, long‐term care; O/E, observed to expected; Rehab, rehabilitation; RRT, rapid response team; SNF, skilled nursing facility.

Adjusted Analysis for Clinical Outcome Measures for All Patients and Those Discharged With a Sepsis Diagnosis

Measure | All Alerted Patients: Unadjusted Estimate | All Alerted Patients: Adjusted Estimate | Discharged With Sepsis Code*: Unadjusted Estimate | Discharged With Sepsis Code*: Adjusted Estimate
Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16)
ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25)
Time to first ICU transfer after alert, h | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90)
ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21)
RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43)
Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53)
Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26)
Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29)
Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84)
Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA

NOTE: Estimates compare the mean, odds, or hazard of the outcome after versus before implementation of the early warning system. Abbreviations: H, hours; ICU, intensive care unit; LOS, length of stay; NA, not applicable; RRT, rapid response team. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for gender, age, present on admission Charlson comorbidity score, admit service, hospital, and admission month (June+July or August+Sep). Coefficient. Odds ratio. Hazard ratio.

In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.

The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).

DISCUSSION

This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to trigger electronically a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation, and suggested a decreased sepsis mortality index, decreased mortality, and increased discharge to home, although none of these latter 3 findings reached statistical significance.

Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).

Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]

Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.

CONCLUSION

By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.

Acknowledgements

The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer; Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 16–20, 2013, Washington, DC.

Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.

There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]

Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.

Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.

METHODS

Setting and Data Sources

The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.

Development of the Intervention

The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.

To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome criteria (temperature <36C or >38C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4000 or >12,000 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]

To establish a threshold for triggering the system, a derivation cohort was used and defined as patients admitted between October 1, 2011 to October 31, 2011 1 to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criteria, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could achieve at any single time was 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.

The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.

Implementation of the EWRS

All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).

The Preimplementation (Silent) Period and EWRS Validation

The EWRS was initially activated for a preimplementation silent period (June 6, 2012September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.

The Postimplementation (Live) Period and Impact Analysis

The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the [2] test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and post‐sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.

Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.

The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).

RESULTS

In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest are described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive, sensitivity, specificity, and positive and negative predictive values and likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.
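
As a hedged reconstruction of how such screening statistics are derived (our illustration, not the authors' code), the sketch below uses the reported derivation-cohort counts where available (180 alerts, 46 with the composite outcome, 4575 patients); the false-negative count is an assumed value chosen only to match the reported, rounded 16% sensitivity:

```python
# Hedged sketch: test characteristics of a binary screen from a 2x2
# confusion matrix. Only tp, the alert total, and the cohort size are
# reported exactly in the text; fn is an assumption.
def screening_stats(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),            # negative predictive value
        "lr_pos": sens / (1 - spec),      # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,      # negative likelihood ratio
    }

tp, fp = 46, 180 - 46      # alerted patients with/without the outcome
fn = 241                   # ASSUMED: outcomes the alert missed
tn = 4575 - tp - fp - fn   # remainder of the derivation cohort
stats = screening_stats(tp, fp, fn, tn)
# PPV is exactly 46/180 (~26%); the remaining values land near the
# reported 16% sensitivity, 97% specificity, 94% NPV, LR+ ~5, LR- ~0.9.
```

The exact published figures cannot be reproduced from rounded percentages alone, so the sketch should be read as a demonstration of the definitions rather than a recomputation.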

In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services was similar (Table 1).

Descriptive Statistics of the Study Population Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

Measure | Preimplementation | Postimplementation | P Value
No. of encounters | 15,567 | 15,526 |
No. of alerts | 595 (4%) | 545 (4%) | 0.14
Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04
Female | 298 (50%) | 274 (50%) | 0.95
Race: White | 343 (58%) | 312 (57%) | 0.14
Race: Black | 207 (35%) | 171 (31%) |
Race: Other | 23 (4%) | 31 (6%) |
Race: Unknown | 22 (4%) | 31 (6%) |
Admission type: Elective | 201 (34%) | 167 (31%) | 0.40
Admission type: ED | 300 (50%) | 278 (51%) |
Admission type: Transfer | 94 (16%) | 99 (18%) |
BMI, kg/m2, median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24
Previous ICU admission | 137 (23%) | 127 (23%) | 0.91
RRT before alert | 27 (5%) | 20 (4%) | 0.46
Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04
Admitting service: Medicine | 398 (67%) | 364 (67%) | 0.18
Admitting service: Surgery | 173 (29%) | 169 (31%) |
Admitting service: Other | 24 (4%) | 12 (2%) |
Service where alert fired: Medicine | 391 (66%) | 365 (67%) | 0.18
Service where alert fired: Surgery | 175 (29%) | 164 (30%) |
Service where alert fired: Other | 29 (5%) | 15 (3%) |

NOTE: Abbreviations: BMI, body mass index; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; RRT, rapid response team; Y, years.

In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)

In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).

Clinical Process Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

Measure | Preimplementation | Postimplementation | P Value
No. of alerts | 595 | 545 |
500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01
IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01
IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01
Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01
Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01
Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28
CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65
Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35
Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92
RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01
Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30
AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70
Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58
CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02
CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31
Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02

NOTE: Abbreviations: ABD, abdomen; AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; ECG, electrocardiogram; H, hours; IV, intravenous; PO, oral; RBC, red blood cell.
Adjusted Analysis for Clinical Process Measures for All Patients and Those Discharged With a Sepsis Diagnosis

Measure | Unadjusted OR (all alerted) | Adjusted OR (all alerted) | Unadjusted OR (sepsis code*) | Adjusted OR (sepsis code*)
500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47)
IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09)
IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58)
Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34)
Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20)
Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92)
BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50)
Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25)
Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10)
Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10)
AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20)
Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00)
CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43)
CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79)
Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41)

NOTE: Odds ratios compare the odds of the outcome after versus before implementation of the early warning system. Abbreviations: AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; H, hours; IV, intravenous; PO, oral. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for log‐transformed age, gender, log‐transformed Charlson index at admission, admitting service, hospital, and admission month.
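
The unadjusted odds ratios above follow directly from the raw counts in the process-measures table. As an illustrative sketch (our reconstruction, not the authors' code; the function name is hypothetical), the lactic-acid row can be recovered with a standard Woolf log-scale 95% confidence interval:

```python
import math

# Hedged reconstruction: unadjusted odds ratio and 95% Woolf (log-scale)
# confidence interval for the lactic-acid order row, using the raw
# counts from the process-measures table (pre: 57/595; post: 128/545).
def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR for post vs pre, where the 2x2 table is
    post with/without event = a/b and pre with/without event = c/d."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(128, 545 - 128, 57, 595 - 57)
# Rounds to 2.90 (2.07-4.06), matching the unadjusted lactic-acid
# estimate reported above.
assert round(or_, 2) == 2.90
```

The adjusted estimates cannot be reproduced this way, as they come from multivariable logistic models fit on patient-level data.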

Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).

Clinical Outcome Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

Measure | Preimplementation | Postimplementation | P Value
No. of alerts | 595 | 545 |
Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92
ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72
ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06
ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20
ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93
Time to first ICU after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22
RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51
Mortality of all patients | 52 (9%) | 41 (8%) | 0.45
Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19
Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47
Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22
Discharge to home | 347 (58%) | 351 (64%) | 0.04
Disposition location: Home | 347 (58%) | 351 (64%) | 0.25
Disposition location: SNF | 89 (15%) | 65 (12%) |
Disposition location: Rehab | 24 (4%) | 20 (4%) |
Disposition location: LTC | 8 (1%) | 9 (2%) |
Disposition location: Other hospital | 16 (3%) | 6 (1%) |
Disposition location: Expired | 52 (9%) | 41 (8%) |
Disposition location: Hospice IP | 42 (7%) | 31 (6%) |
Disposition location: Hospice other | 11 (2%) | 14 (3%) |
Disposition location: Other location | 6 (1%) | 8 (1%) |
Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02
Sepsis O/E | 1.37 | 1.06 | 0.18

NOTE: Abbreviations: H, hours; ICU, intensive care unit; IP, inpatient; IQR, interquartile range; LOS, length of stay; LTC, long‐term care; O/E, observed to expected; Rehab, rehabilitation; RRT, rapid response team; SNF, skilled nursing facility.
Adjusted Analysis for Clinical Outcome Measures for All Patients and Those Discharged With a Sepsis Diagnosis

Measure | Unadjusted Estimate (all alerted) | Adjusted Estimate (all alerted) | Unadjusted Estimate (sepsis code*) | Adjusted Estimate (sepsis code*)
Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16)
ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25)
Time to first ICU transfer after alert, h | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90)
ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21)
RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43)
Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53)
Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26)
Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29)
Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84)
Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA

NOTE: Estimates compare the mean, odds, or hazard of the outcome after versus before implementation of the early warning system: coefficients from linear models for the log‐transformed length‐of‐stay outcomes, odds ratios from logistic models for dichotomous outcomes, and a hazard ratio from the Cox model for time to first ICU transfer. Abbreviations: H, hours; ICU, intensive care unit; LOS, length of stay; NA, not applicable; RRT, rapid response team. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for gender, age, present on admission Charlson comorbidity score, admitting service, hospital, and admission month (June+July or August+September).

In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.

The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).

DISCUSSION

This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to trigger electronically a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation, as well as suggestions of a decreased sepsis mortality index, decreased mortality, and increased discharge to home, although none of these latter 3 findings reached statistical significance.

Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).

Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]

Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.

CONCLUSION

By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.

Acknowledgements

The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer, Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 16–20, 2013, Washington, DC.

Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.

References
  1. Gaieski DF, Edwards JM, Kallan MJ, Carr BG. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med. 2013;41(5):1167–1174.
  2. Dellinger RP, Levy MM, Rhodes A, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
  3. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline‐based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367–374.
  4. Otero RM, Nguyen HB, Huang DT, et al. Early goal‐directed therapy in severe sepsis and septic shock revisited: concepts, controversies, and contemporary findings. Chest. 2006;130(5):1579–1595.
  5. Rivers E, Nguyen B, Havstad S, et al. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
  6. Whittaker SA, Mikkelsen ME, Gaieski DF, Koshy S, Kean C, Fuchs BD. Severe sepsis cohorts derived from claims‐based strategies appear to be biased toward a more severely ill patient population. Crit Care Med. 2013;41(4):945–953.
  7. Bailey TC, Chen Y, Mao Y, et al. A trial of a real‐time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236–242.
  8. Jones S, Mullally M, Ingleby S, Buist M, Bailey M, Eddleston JM. Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with an Early Warning Score protocol. Crit Care Resusc. 2011;13(2):83–88.
  9. Nelson JL, Smith BL, Jared JD, Younger JG. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
  10. Sawyer AM, Deal EN, Labelle AJ, et al. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
  11. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644–1655.
  12. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Crit Care Med. 2003;31(4):1250–1256.
  13. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588–1597.
  14. Bing‐Hua YU. Delayed admission to intensive care unit for critically surgical patients is associated with increased mortality. Am J Surg. 2014;208:268–274.
  15. Cardoso LT, Grion CM, Matsuo T, et al. Impact of delayed admission to intensive care units on mortality of critically ill patients: a cohort study. Crit Care. 2011;15(1):R28.
Issue
Journal of Hospital Medicine - 10(1)
Page Number
26-31
Article Source

© 2014 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Craig A Umscheid, MD, Assistant Professor of Medicine and Epidemiology, Director, Center for Evidence‐based Practice, Medical Director, Clinical Decision Support, Penn Medicine, 3535 Market Street, Mezzanine, Suite 50, Philadelphia, PA 19104; Telephone: 215‐349‐8098; Fax: 215‐349‐5829; E‐mail: [email protected]
Later transplant for renal failure in lupus nephritis may raise graft failure risk


Delaying kidney transplantation to allow for quiescence of systemic lupus erythematosus–related immune activity in patients with lupus nephritis and end-stage renal disease does not appear to improve graft outcomes, according to an analysis of national surveillance data.

Of 4,743 transplant recipients with lupus nephritis and end-stage renal disease (LN-ESRD), 1,239 experienced graft failure. Overall, wait times of 3-12 months and 12-24 months were associated with 25% and 37% increased risk of graft failure, respectively, compared with wait times of less than 3 months, after adjustment for age, race, insurance, hemoglobin, and donor type.

A similar pattern was seen in white patients, except that wait times of more than 36 months in white patients were associated with a near doubling of graft failure risk (hazard ratio, 1.98), Laura C. Plantinga of Emory University, Atlanta, and her colleagues reported (Arthritis Care Res. 2014 Sept. 23 [doi:10.1002/acr.22482]).

Among black patients, longer wait times were not associated with graft failure in the adjusted analysis, and, in fact, there was a nonstatistically significant suggestion of a protective effect for wait time of 2 years or more. This finding may reflect unexplained differences in disease pathology between white and black LN-ESRD patients, the investigators said, adding that there was no increased risk of graft failure in black patients who were transplanted early.

“Our results suggest U.S. recommendations for transplantation in LN-ESRD may not align with evidence from the target population,” they said, noting that the results should be considered hypotheses-generating because of the limitations of the study and that additional study is needed to examine the potential confounding effects of clinically recognized SLE activity on the associations observed in this study.

Some of the investigators were supported through grants from the National Institutes of Health.


Vitals

Key clinical point: Delaying transplantation in LN-ESRD patients may do more harm than good, although future studies should determine if longer wait times remain associated with increased risk of graft failure, independent of clinically recognized SLE activity.

Major finding: Overall risk of graft failure was increased by 25% and 37% with wait times of 3-12 months and 12-24 months, respectively (vs. less than 3 months).

Data source: National ESRD surveillance data (U.S. Renal Data System) for 4,743 LN-ESRD transplant recipients.

Disclosures: Some of the investigators were supported through grants from the National Institutes of Health.

Shrink Rap News: Suicide hotline calls increase after Robin Williams’ death


National Suicide Prevention Day fell on Sept. 10 this year, surrounded by National Suicide Prevention Week Sept. 8-14. The conversation, as I’m sure everyone noticed, was focused on the suicide of actor Robin Williams. As we move out a few weeks, my patients – especially those who have contemplated ending their own lives – continue to talk about this tragic loss.

The fear is that the suicide of a celebrity will lead to an increase in the suicide rate in the general public – copycat suicides, if you will. In the month after Marilyn Monroe died of an overdose in 1962, the suicide rate rose by more than 10%. On the other hand, the death of a celebrity may lead to a decrease in the suicide rate, as happened after Kurt Cobain’s death from a self-inflicted gunshot wound in 1994. In the period after Cobain’s death, an effort was made to publicize resources for those who need help. The suicide rate dropped, while calls to hotlines rose.

Dr. Dinah Miller

After Robin Williams’ death, my own social media feeds were full of ads for the National Suicide Prevention Lifeline (NSPL), a hotline with the number 1-800-273-TALK. There are other hotlines, but this was the one I saw most. I wanted to learn about suicide hotlines, so I did a few things: I asked readers of our Shrink Rap blog to tell me about their experiences, and I called the hotline myself to see if I could learn about the structure of the organization, what resources they had to offer a distraught caller, and whether there had been a change in the number of calls they’d received in the time following Mr. Williams’ death.

I called from my cell phone, which is registered in Maryland, while sitting in my home in Baltimore City. The call was routed to Grassroots Crisis Intervention Center in Columbia, Md. Google Maps tells me the center is 25 miles from my house, and it would take me 32 minutes to drive there. In addition to being part of a network of 160 hotline centers across the country, Grassroots has a walk-in crisis center and a mobile treatment center, and is adjacent to a homeless shelter.

“Most of the people who call the National Suicide Prevention Lifeline are suicidal,” said Nicole DeChirico, director of crisis intervention services for Grassroots. “There is a gradation in suicidal thinking, but about 90% of our callers are considering it.”

“We first form rapport, and then we try to quickly assess if an attempt has already been made, and if they are in any danger. We use the assessment of suicidality that is put out by the NSPL. It’s a structured template that is used as a guideline.”

Ms. DeChirico noted that the people who man the hotlines have bachelor’s or master’s degrees – often in psychology, social work, counseling, or education. If feasible, a Safety Planning Intervention is implemented, based on the work of Barbara Stanley, Ph.D., at Columbia University in New York.

“We talk to people about what they need to do to feel safe. If they allow it, we set up a follow-up call. Of the total number of people who have attempted suicide once in the past and lived, 90%-96% never go on to attempt suicide again,” Ms. DeChirico noted. “Suicide is a time-limited acute crisis.”

The Grassroots team can see patients on site while they wait for appointments with an outpatient clinician, and can send a mobile crisis team to those who need it if they are in the county served by the organization. I wondered if all 160 agencies that received calls from the NSPL could also provide crisis services.

Marcia Epstein, LMSW, was director of the Headquarters Counseling Center in Lawrence, Kan., from 1979 to 2013. The center became part of the first national suicide prevention hotline network, the National Hopeline Network, 1-800-SUICIDE, in 2001, and then became part of the National Suicide Prevention Lifeline, 1-800-273-TALK (8255), when that network began in January 2005.

“The types of programs and agencies which are part of NSPL vary greatly. The accreditation that allows them to be part of the NSPL network also varies. Some centers are staffed totally by licensed mental health therapists, while others might include trained volunteers and paid counselors who have no professional degree or licensure. Service may be delivered by phone, as well as in person, by text, and by live chat. In person might be on site or through mobile crisis outreach. Some centers are part of other organizations, while others are free-standing, and some serve entire states, while others serve geographically smaller regions,” Ms. Epstein explained in a series of e-mails. She noted that some centers assess and refer, while others, like Grassroots, are able to provide more counseling.

“So if it sounds like I’m saying there is little consistency between centers, yes, that is my experience. But the centers all bring strong commitment to preventing suicide.”

Ms. Epstein continued to discuss the power of the work done with hotline callers.

“The really helpful counseling comes from the heart, from connecting to people with caring and respect and patience, and using our skills in helping them stay safer through the crisis and then, when needed, to stay safer in the long run. It takes a lot of bravery from the people letting us help. And it takes a lot of creativity and flexibility in coming up together with realistic plans to support safety.”

I was curious about the patient response, and I found that it was mixed. It was also notable that different patients found different forms of communication helpful.

A woman who identified herself only as “Virginia Woolf” wrote, “I have contacted the Samaritans on the [email protected] line because I could write to them via e-mail. I don’t like phones and I also know too many of the counselors on the local crisis line. Each time I was definitely close to suicide. I was in despair and I had the means at hand. I think what stopped me was knowing they would reply. They always did, within a few hours, but waiting for their reply kept me safe.”

Not every response was as positive.

One writer noted, “It was not a productive, supportive, or empathetic person. I felt like she was arrogant, judgmental, and didn’t really care about why I was calling.” The same writer, however, was able to find solace elsewhere. “I have texted CrisisChat and it was an excellent chat and I did feel better.”

Finally, Ms. DeChirico sent me information about the call volume from our local NSPL center in Columbia. From July 1, 2013, to July 31, 2014, the Lifeline received an average of 134 calls per month. December had the highest number of calls, with 163, while August had the lowest with 118. September, February, and April all had 120 calls or fewer.

Robin Williams died on Aug. 11, 2014, and the center received 200 calls in August – a 49% increase over the average volume. Hopefully, we’ll end up seeing a decline in suicide in the months following Mr. Williams’ tragic death.

Dr. Miller is a coauthor of “Shrink Rap: Three Psychiatrists Explain Their Work” (Baltimore: Johns Hopkins University Press, 2011).

Study reveals mutation that causes aplastic anemia

Three generations of women in a family

By studying 3 generations of a family plagued by blood disorders, researchers discovered a genetic mutation that causes aplastic anemia.

The team performed whole-exome sequencing on DNA from the family and identified an inherited mutation on the ACD gene, which codes for the telomere-binding protein TPP1.

The mutation disrupts the interactions between telomeres and telomerase, which causes blood cells to die and results in aplastic anemia.

“Identifying this causal defect may help suggest future molecular-based treatments that bypass the gene defect and restore blood cell production,” said Hakon Hakonarson, MD, PhD, of The Children’s Hospital of Philadelphia in Pennsylvania.

Dr Hakonarson and his colleagues described this research in Blood.

The team studied an Australian family with aplastic anemia and other hematopoietic disorders, including leukemia. Whole-exome sequencing of the family’s DNA revealed an inherited mutation on the ACD gene.

The mutation is an amino acid deletion in the TEL patch of TPP1 (ΔK170). All of the family members with this mutation had short telomeres, while those with wild-type TPP1 did not.

The researchers introduced TPP1 with the ΔK170 mutation into 293T cells and found the protein could localize to telomeres but failed to recruit telomerase. The team said this indicates a causal relationship between the mutation and bone marrow disorders.

Without access to telomerase to help maintain telomeres, blood cells lose their structural integrity and die, resulting in bone marrow failure and aplastic anemia.

Nine other genes were previously found to play a role in bone marrow failure disorders. The current study adds ACD to the list and is the first time the gene has been shown to have a disease-causing role.

“This improved understanding of the underlying molecular mechanisms may suggest new approaches to treating disorders such as aplastic anemia,” Dr Hakonarson said. “For instance, investigators may identify other avenues for recruiting telomerase to telomeres to restore its protective function.”

New Guidelines on Concussion and Sleep Disturbance

According to the DoD, 300,707 U.S. service members were diagnosed with a traumatic brain injury (TBI) between 2000 and the first quarter of 2014. Of those, 82% had mild TBI (mTBI), also known as a concussion. Usually, a patient recovers from concussion relatively quickly—in days to weeks. But some patients, especially those with preexisting and concomitant conditions, have persistent symptoms that interfere with daily life. The most common of these symptoms are sleep disturbances, usually insomnia, which is a critical issue, given that sleep is so important to the brain’s—and the rest of the body’s—ability to heal. Poor sleep also exacerbates other symptoms, such as pain and irritability, has a negative impact on cognition, and may partially mediate the development of posttraumatic stress disorder or depression.

The Defense and Veterans Brain Injury Center (DVBIC) has released a new clinical recommendation and support tools to help clinicians identify and treat post-TBI sleep disturbances. The suite includes Management of Sleep Disturbances Following Concussion/Mild Traumatic Brain Injury: Guidance for Primary Care Management in Deployed and Non-Deployed Settings, a companion clinical support tool, and a fact sheet for patients. The clinical recommendation (CR) and companion tool are based on a review of current literature and expert contributions from the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury, in collaboration with clinical subject matter experts.

The CR strongly advises that all patients with concussion be screened for a sleep disorder. The key question to ask during the patient interview is “Are you experiencing frequent difficulty in falling or staying asleep, excessive daytime sleepiness, or unusual events during sleep?”

DVBIC Clinical Affairs Officer PHS Capt. Cynthia Spells says, “The initial step in the diagnosis of a sleep disorder includes a focused sleep assessment.” The clinical interview should cover the “3 Ps”: predisposing, precipitating, and perpetuating factors. Predisposing factors include excessive weight, older age, and medications; precipitating factors include concussion, deployment, and acute stress; perpetuating factors include excessive use of caffeine or other stimulants, time zone changes, and familial stress. Comorbid conditions are common with sleep disorders, and the CR notes that an anxiety disorder postinjury is a more significant predictor of sleep disruption than is pain, other comorbid conditions, or the adverse effects of medication.

A guide for primary care providers (PCPs), in addition to giving an overview of the suite and how to use its components, provides insight into the research and science behind managing TBI-related sleep disturbances. The clinical support tool is an algorithm for PCPs to use in assessing sleep disturbances, a step-by-step process to determine the level of care required. The tool is offered as a pocket-sized reference card and can be downloaded. (Health care providers can also take a self-guided course in identifying and treating mTBI at http://www.brainlinemilitary.org.)

According to the CR, nonpharmacologic measures are the first-line treatment for post-TBI sleep problems. These include teaching patients good sleep hygiene and stimulus control; that is, doing as much as possible to physically and environmentally promote sleep. (See App Corner.) The patient fact sheet gives tips on getting a healthy night’s sleep, such as avoiding naps, avoiding alcohol close to bedtime, and getting exposure to natural light as much as possible.

The CR and other components are available at https://dvbic.dcoe.mil/resources/management-sleep-disturbances.

References

Author and Disclosure Information

Issue
Federal Practitioner - 31(9)
Publications
Topics
Page Number
39
Legacy Keywords
traumatic brain injury, TBI, sleep disturbances, sleep hygeine, stimulus control, Defense and Veterans Brain Injury Center, DVBIC, Management of Sleep Disturbances Following Concussion/Mild Traumatic Brain Injury: Guidance fo rPrimary Care Management in Deployed and Non-Deplaoyed Settings, Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury, Capt Cynthia Spells, mild TBI, mTBI, avoid naps
Sections
Author and Disclosure Information

Author and Disclosure Information

Related Articles

According to the DoD, 300,707 U.S. service members were diagnosed with a traumatic brain injury (TBI) between 2000 and the first quarter of 2014. Of those, 82% had mild TBI (mTBI), also known as a concussion. Usually, a patient recovers from concussion relatively quickly—in days to weeks. But some patients, especially those with preexisting and concomitant conditions, have persistent symptoms that interfere with daily life. The most common of these symptoms are sleep disturbances, usually insomnia, which is a critical issue, given that sleep is so important to the brain’s—and the rest of the body’s—ability to heal. Poor sleep also exacerbates other symptoms, such as pain and irritability, has a negative impact on cognition, and may partially mediate the development of posttraumatic stress disorder or depression.

The Defense and Veterans Brain Injury Center (DVBIC) has released a new clinical recommendation and support tools to help clinicians identify and treat post-TBI sleep disturbances. The suite includes Management of Sleep Disturbances Following Concussion/Mild Traumatic Brain Injury: Guidance for Primary Care Management in Deployed and Non-Deployed Settings, a companion clinical support tool, and a fact sheet for patients. The clinical recommendation (CR) and companion tool are based on a review of current literature and expert contributions from the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury, in collaboration with clinical subject matter experts.

The CR strongly advises that all patients with concussion be screened for a sleep disorder. The key question to ask during the patient interview is “Are you experiencing frequent difficulty in falling or staying asleep, excessive daytime sleepiness, or unusual events during sleep?”

DVBIC Clinical Affairs Officer PHS Capt. Cynthia Spells says "the initial step in the diagnosis of a sleep disorder includes a focused sleep assessment." The clinical interview should include the "3 Ps": predisposing, precipitating, and perpetuating factors. Predisposing factors include excessive weight, older age, and medications. Precipitating factors include concussion, deployment, and acute stress. Perpetuating factors include excessive use of caffeine or other stimulants, time zone changes, and familial stress. Comorbid conditions are common with sleep disorders, and the CR notes that an anxiety disorder postinjury is a more significant predictor of sleep disruption than is pain, other comorbid conditions, or the adverse effects of medication.

A guide for primary care providers (PCPs), in addition to giving an overview of the suite and how to use its components, provides insight into the research and science behind managing TBI-related sleep disturbances. The clinical support tool is an algorithm for PCPs to use in assessing sleep disturbances: a step-by-step process to determine the level of care required. The tool is offered as a pocket-sized reference card and can be downloaded. (Health care providers can also take a self-guided course in identifying and treating mTBI at http://www.brainlinemilitary.org.)

According to the CR, nonpharmacologic measures are the first-line treatment for post-TBI sleep problems. These include teaching patients good sleep hygiene and stimulus control; that is, doing as much as possible to physically and environmentally promote sleep (see App Corner). The patient fact sheet gives tips on getting a healthy night's sleep, such as avoiding naps, avoiding alcohol close to bedtime, and getting as much exposure to natural light as possible.

The CR and other components are available at https://dvbic.dcoe.mil/resources/management-sleep-disturbances.


Display Headline
New Guidelines on Concussion and Sleep Disturbance
Issue
Federal Practitioner - 31(9)
Page Number
39
Method can detect drivers of AML

Article Type
Changed
Display Headline
Method can detect drivers of AML

AML in the bone marrow

PHILADELPHIA—Super-enhancer profiling can unearth biomarkers and therapeutic targets for acute myeloid leukemia (AML), according to research presented at the AACR conference Hematologic Malignancies: Translating Discoveries to Novel Therapies.

Researchers used high-throughput ChIP sequencing to identify super-enhancer domains in a cohort of AML patients, revealing both known and previously unknown genes that are important for AML disease biology.

Eric Olson, PhD, and his colleagues from Syros Pharmaceuticals in Watertown, Massachusetts, presented this research during one of the meeting’s poster sessions.

The investigators explained that super-enhancers are a class of densely clustered cis-regulatory elements that are key to initiating and maintaining cell-type-specific gene expression in cancer and other settings. Tumor cells acquire super-enhancers at key oncogenes and at genes that participate in the acquisition of hallmark capabilities in cancer.

The researchers therefore set out to identify and characterize super-enhancer domains in a cohort of AML patients.

The team collected primary AML samples and performed chromatin fragmentation, chromatin immunoprecipitation, and DNA purification and sequencing.

They then mapped enhancer regions and characterized enhancer profiles. This revealed AML-specific super-enhancers and associated genes.

For example, in one patient, the investigators identified 392 AML-specific super-enhancers, which were associated with 11 genes important for AML disease biology: HOXA7, LMO2, HLX, MYADM, ETV6, AFF1, RUNX1, GFI1, SPI1, MEIS1, and MYB.

In another patient, the team identified 279 AML-specific super-enhancers that were associated with 9 genes: MLLT10, AKT3, FLT3, ETV6, KLF13, RELA, FOSB, BMI1, and RUNX1.

The researchers said these findings suggest that super-enhancer profiling provides a new option for identifying biomarkers and therapeutic targets in AML and other malignancies.

“Syros’s gene control platform can systematically and efficiently identify known and previously unrecognized tumor biomarkers and cancer dependencies directly from patient tissue,” Dr Olson said. “Our data demonstrate unique gene control elements in AML patient subsets that hold promise in the classification and treatment of AML.”


Self-monitoring coagulometers get thumbs-up from NICE

Article Type
Changed
Display Headline
Self-monitoring coagulometers get thumbs-up from NICE

Pricked finger

The UK’s National Institute for Health and Care Excellence (NICE) has published guidance recommending 2 technologies that enable patients on long-term anticoagulant therapy to self-monitor their clotting time.

The guidance supports use of the CoaguChek XS system (Roche Diagnostics) and the INRatio2 PT/INR monitor (Alere) as options for some adults with atrial fibrillation or heart valve disease who are on long-term anticoagulant therapy.

“The evidence shows that greater use of self-monitoring offers clinical and patient benefit and, over time, is likely to result in reductions in heart attacks and strokes caused by blood clots,” said Carole Longson, NICE Health Technology Evaluation Centre Director.

“Because self-monitoring provides almost instant results, self-monitoring can reduce anxiety, provide a sense of control for the patient, and remove the need to frequently attend clinics or hospitals.”

About the CoaguChek XS system

The CoaguChek XS system (Roche Diagnostics) consists of a meter and specifically designed test strips that can analyze a blood sample (fresh capillary blood or fresh untreated whole venous blood) and calculate the prothrombin time (PT) and the international normalized ratio (INR).

A code chip, which contains calibration data and the expiration date of the test strips, is inserted into the meter before it is switched on. Once the device is switched on, a test strip is inserted, and the blood sample is applied.

The test result is displayed approximately 1 minute after application of the sample, and the device automatically stores the result in its memory. The user is guided through the process by on-screen graphical instructions.

About the INRatio2 PT/INR monitor

The INRatio2 PT/INR monitor (Alere) performs a modified version of the 1-stage PT test using a recombinant human thromboplastin reagent. The clot formed in the reaction is detected by the change in the electrical impedance of the sample during the coagulation process.

The system consists of a monitor and disposable test strips. The monitor provides a user interface, heats the test strip to the appropriate reaction temperature, measures the impedance of blood samples, and calculates and reports PT and INR results.

Instructions and test results are displayed on an LCD. The monitor can store test results so that past results can be reviewed.


Murine studies support use of TKIs in ALL subtype

Article Type
Changed
Display Headline
Murine studies support use of TKIs in ALL subtype

Lab mouse

PHILADELPHIA—Experiments in mice reinforce the idea that tyrosine kinase inhibitors (TKIs) can treat patients with Ph-like acute lymphoblastic leukemia (ALL).

Investigators recently identified genomic alterations in Ph-like ALL that suggest these patients might respond to TKIs, and tests in a small number of patients supported this theory.

Now, preclinical results show that kinase fusions in Ph-like ALL activate signaling pathways differently, and this affects sensitivity to TKIs.

Kathryn Roberts, PhD, of St Jude Children’s Research Hospital in Memphis, Tennessee, and her colleagues presented these results at the AACR conference Hematologic Malignancies: Translating Discoveries to Novel Therapies.

“We recently described a subtype of B-cell acute lymphoblastic leukemia with very poor outcome that is characterized by genetic alterations involving tyrosine kinases, termed Ph-like ALL,” Dr Roberts said. “We wanted to examine whether these alterations contribute to the development of Ph-like ALL and determine if they could be targeted with tyrosine kinase inhibitors.”

“We showed, for the first time, that the kinase alterations we tested contribute to the development of Ph-like ALL, and that Ph-like ALL can be treated effectively with tyrosine kinase inhibitors in animal models. These findings provide a strong rationale for treating Ph-like ALL patients with targeted therapies to improve their survival.”

Dr Roberts and her colleagues first introduced kinase alterations (RCSD1-ABL2, SSBP2-CSF1R, or PAX5-JAK2) into IL-7-dependent, Arf-/- mouse pre-B cells expressing IK6.

They found that each fusion conferred cytokine-independent growth in vitro, and that mice receiving transplants of pre-B cells expressing RCSD1-ABL2 or SSBP2-CSF1R developed ALL with a pre-B immunophenotype.

The investigators then assessed the activation of kinase signaling pathways and TKI sensitivity in Arf-/- pre-B cells and human leukemic cells harvested from xenografted mice expressing ETV6-ABL1, RANBP2-ABL1, PAG1-ABL2, RCSD1-ABL2, SSBP2-CSF1R, IGH-EPOR, ATF7IP-JAK2, and PAX5-JAK2.

In both cell types, signaling pathway activation and TKI sensitivity differed according to the kinase fusion.

Cells expressing ABL1-class kinase fusions (ABL1, ABL2, CSF1R, and PDGFRB) exhibited pSTAT5 activation that was inhibited by imatinib or dasatinib. But in cells expressing ATF7IP-JAK2, PAX5-JAK2, or IGH-EPOR, pSTAT5 activation was only inhibited by ruxolitinib.

Finally, the investigators tested dasatinib in xenograft models of ETV6-ABL1, RCSD1-ABL2, PAG1-ABL2, or SSBP2-CSF1R ALL.

They found that treated mice had significantly lower leukemic burdens and splenic weights than control mice, and that STAT5 phosphorylation was attenuated in cells from the treated animals.

“Our studies show that different FDA-approved TKIs such as imatinib, dasatinib, ruxolitinib, or crizotinib could potentially be used to treat Ph-like ALL patients, depending on the type of kinase alterations their tumors bear,” Dr Roberts said.

“We were able to gain a better understanding of the genetics underlying Ph-like ALL, and our studies could help identify patients who will not respond optimally to current therapy. By knowing the exact genetic alteration upfront, we may be able to implement different therapeutic strategies to improve the survival rate of future patients with ALL.”


“Our studies show that different FDA-approved TKIs such as imatinib, dasatinib, ruxolitinib, or crizotinib could potentially be used to treat Ph-like ALL patients, depending on the type of kinase alterations their tumors bear,” Dr Roberts said.

“We were able to gain a better understanding of the genetics underlying Ph-like ALL, and our studies could help identify patients who will not respond optimally to current therapy. By knowing the exact genetic alteration upfront, we may be able to implement different therapeutic strategies to improve the survival rate of future patients with ALL.”

Display Headline
Murine studies support use of TKIs in ALL subtype

New and Noteworthy Information—October 2014


Fish oil may reduce seizure frequency in patients with epilepsy, according to a study published online ahead of print September 8 in the Journal of Neurology, Neurosurgery, and Psychiatry. Twenty-four patients with drug-resistant epilepsy received three separate 10-week treatments, each separated by a six-week washout period: three capsules of fish oil plus three capsules of corn oil daily (low dose); six capsules of fish oil daily (high dose); and three capsules of corn oil twice daily (placebo). The average number of seizures among those taking low-dose fish oil was around 12 per month, compared with slightly more than 17 for the high dose and slightly more than 18 for the placebo. Two people taking the low dose were seizure free during the 10-week trial. No one taking the high-dose fish oil or the placebo was seizure free.

Blood type AB and higher factor VIII (FVIII) are associated with increased incidence of cognitive impairment, according to a study published online ahead of print September 10 in Neurology. Findings are based on a cohort from the REGARDS Study, in which more than 30,000 people were followed for an average of 3.4 years. After adjusting for age, race, region, and sex, the researchers found that people with blood group AB (odds ratio [OR], 1.82) and those with higher FVIII (OR, 1.24) had an increased risk of cognitive impairment. The mean FVIII was higher in people with blood type AB (142 IU/dL), compared with O (104 IU/dL), and FVIII mediated 18% of the association between AB group and incident cognitive impairment, according to the researchers.

Magnesium sulfate administered IV to pregnant women at risk of giving birth before 30 weeks gestation was not associated with differences in neurologic, cognitive, behavioral, growth, or functional outcomes in their children at school age, investigators reported in the September 17 issue of JAMA. Researchers randomly assigned magnesium sulfate or placebo to pregnant women (n = 535 magnesium; n = 527 placebo) for whom birth was planned or expected before 30 weeks gestation; 1,255 fetuses were known to be alive at randomization. Of the 867 survivors available for follow-up, outcomes at school age (ages 6 to 11) were determined for 669 children (77%). The investigators found that receiving antenatal magnesium sulfate was not associated with any long-term benefits or harms, compared with placebo. The study authors also observed a nonsignificant reduction in the risk of death in the magnesium sulfate group.

Older patients with Parkinson’s disease who underwent deep brain stimulation (DBS) had a similar 90-day complication risk, compared with that in younger patients, according to a study published online ahead of print August 25 in JAMA Neurology. Researchers analyzed data from more than 1,750 patients who had DBS from 2000 to 2009. Of those, 7.5% of subjects experienced at least one complication within 90 days of having the device implanted. The investigators determined that increasing age did not significantly affect the overall complication rates. The findings suggest that age alone should not be a primary exclusion factor for determining candidacy for DBS. “Instead, a clear focus on patients with medication-refractory and difficult to control on-off fluctuations with preserved cognition, regardless of age, may allow for an expansion of the traditional therapeutic window,” the researchers concluded.

Confusional arousals are highly prevalent in the general population, according to a study published in the August 26 issue of Neurology. A total of 19,136 people age 18 and older were interviewed about their sleep habits and whether they had experienced symptoms of the disorder. Participants also were asked about any medications they took and about mental illness diagnoses. Results showed that 15% had experienced an episode in the last year, with more than half reporting more than one episode per week. In the large majority of cases (84%), people with confusional arousals (also known as sleep drunkenness) also had a sleep disorder or a mental health disorder, or were taking psychotropic drugs. Fewer than 1% of the people with confusional arousals had no known cause or related condition. “These episodes of waking up confused have received considerably less attention than sleepwalking even though the consequences can be just as serious,” stated researchers.

High potassium intake is associated with a lower risk of all stroke, ischemic stroke, and all-cause mortality in older women, investigators reported online ahead of print September 4 in Stroke. Researchers studied 90,137 postmenopausal women ages 50 to 79 for an average of 11 years. Women who consumed the most potassium were 10% less likely to die than were those who had consumed the least amount. The women also were 12% less likely to have a stroke and 16% less likely to have an ischemic stroke than were women who consumed the least amount. Those without hypertension who had consumed the most potassium had a 27% lower ischemic stroke risk and 21% reduced risk for all stroke types, compared with women who had the least potassium in their diets. Among women with hypertension, those who consumed the most potassium had a lower risk of mortality.


Regular blood transfusion therapy significantly reduced the recurrence of cerebral infarct in children with sickle cell anemia, according to a study published in the August 21 issue of the New England Journal of Medicine. During the three-year study, 196 children ages 5 through 15 with sickle cell anemia who had previously had a silent stroke were followed. Children who underwent regular transfusions were 58% less likely to have another silent stroke or an overt stroke, while those who had no transfusions were more than twice as likely to experience repeat strokes. In addition, children who had monthly transfusions were less likely to have a range of other sickle cell anemia–related problems, such as episodes of extreme pain. Overall, 295 pain episodes occurred among children who did not receive transfusions, compared with 126 episodes among those receiving treatment.

Stroke incidence and mortality rates decreased from 1987 to 2011, according to a study published in the July 16 issue of JAMA. The findings were based on data from the Atherosclerosis Risk in Communities cohort of 15,792 US residents between the ages of 45 and 64 who were monitored during the 1980s. The new study followed the progress of 14,357 participants who were free of stroke in 1987 and monitored hospitalizations from stroke and deaths from 1987 to 2011. Stroke incidence decreased over time in Caucasians and African Americans, with an age-adjusted incidence rate ratio of 0.76. The absolute decrease was 0.93 per 1,000 person-years overall. The overall mortality rate after stroke decreased over time (hazard ratio, 0.80), with an absolute decrease of 8.09 per 100 strokes after 10 years.

The FDA has approved Vimpat (lacosamide) C-V as monotherapy in the treatment of partial-onset seizures in patients with epilepsy ages 17 and older. The monotherapy approval for Vimpat is based on a phase III historical-control conversion to lacosamide monotherapy study in adult patients with epilepsy with partial-onset seizures. This study met its primary end point, demonstrating that the exit percentage for patients converting to lacosamide (400 mg/day) was lower than the historical control exit percentage used as a comparator. Lacosamide (300 mg/day) also met the prespecified criteria for efficacy. Based on individual patients’ needs, physicians can choose between Vimpat formulations—tablets, oral solution, or injection. Vimpat (UCB; Brussels) is already approved in the US as adjunctive treatment for partial-onset seizures in patients in this age group.

Disruption of intestinal homeostasis is an early and immune-mediated event in experimental autoimmune encephalomyelitis, according to a study published September 3 in PLoS ONE. Investigators observed structural changes in the mucous membrane of the small intestine and an increase in inflammatory T cells, as well as a reduction in immunosuppressive cells. “Our findings provide support for the idea that a damaged intestinal barrier can prevent the body ending an autoimmune reaction in the normal manner, leading to a chronic disease such as multiple sclerosis,” stated the study authors. “In particular, an increased understanding of the regulation of tight junctions at the blood–brain barrier and in the intestinal wall may be crucial for design of future innovative therapies.”

Children and adolescents with autism have a surplus of synapses in the brain due to reduced developmental spine pruning, investigators reported in the September 3 issue of Neuron. Researchers examined brains from children with autism who had died from other causes. Thirteen brains were from children ages 2 to 9, 13 brains were from children ages 13 to 20, and 22 brains were from children without autism. The investigators measured synapse density in a small section of tissue in each brain by counting the number of tiny spines that branch from the cortical neurons. During late childhood, spine density had decreased by about half in the control brains, compared with 16% in the brains from patients with autism. “Hundreds of genes have been linked to autism, but almost all of our human subjects had overactive mTOR and decreased autophagy, and all appear to have a lack of normal synaptic pruning,” stated study authors.

Macromolecular proton fraction (MPF) mapping enables quantitative assessment of demyelination in normal-appearing brain tissues and shows primary clinical relevance of gray matter damage in multiple sclerosis (MS), according to a study published online ahead of print September 10 in Radiology. Researchers examined 30 patients with MS, 18 with relapsing-remitting MS (RRMS) and 12 with secondary progressive MS. Fourteen healthy controls also were included. Each participant underwent MRI on a 3-T imager, and the investigators reconstructed 3-D whole-brain MPF maps to examine normal-appearing white matter, gray matter, and MS lesions. MPF was significantly lower in both white and gray matter in patients with RRMS, compared with healthy controls, and it was significantly reduced in normal-appearing brain tissues and lesions of patients with secondary progressive MS, compared with patients with RRMS, with the largest relative decrease observed in gray matter.


Type 2 diabetes mellitus is associated with mild cognitive impairment (MCI) and MCI subtypes in middle-aged, but not in elderly participants, according to a study published online ahead of print July 7 in the Journal of Alzheimer’s Disease. A total of 560 participants diagnosed with MCI were compared with 1,376 cognitively normal participants from the Heinz Nixdorf Recall study. Of participants with MCI, 289 had amnestic MCI and 271 had nonamnestic MCI. Type 2 diabetes mellitus was strongly associated with MCI and MCI subtypes in those ages 50 to 65. Examination of differences by gender revealed a stronger association of diabetes with amnestic MCI in middle-aged women and an even stronger association with nonamnestic MCI in middle-aged men.

Kimberly D. Williams

Issue
Neurology Reviews - 22(10)

Fish oil may reduce seizure frequency in patients with epilepsy, according to a study published online ahead of print September 8 in the Journal of Neurology, Neurosurgery, and Psychiatry. Twenty-four patients with drug-resistant epilepsy were given three separate treatments for 10 weeks and separated by a six-week period. Participants were given three capsules of fish oil daily, plus three capsules of corn oil (placebo); six capsules of fish oil daily; and three capsules of corn oil twice daily. The average number of seizures among those taking low-dose fish oil was around 12 per month, compared with slightly more than 17 for the high dose, and slightly more than 18 for the placebo. Two people who had the low dose were seizure free during the 10-week trial. No one taking the high-dose fish oil or the placebo was seizure free.

Blood type AB and higher factor VIII (FVIII) are associated with increased incidence of cognitive impairment, according to a study published online ahead of print September 10 in Neurology. Findings are based on a cohort from the REGARDS Study, in which more than 30,000 people were followed for an average of 3.4 years. After adjusting for age, race, region, and sex, the researchers found that people with blood group AB (odds ratio [OR], 1.82) and those with higher FVIII (OR, 1.24) had an increased risk of cognitive impairment. The mean FVIII was higher in people with blood type AB (142 IU/dL), compared with O (104 IU/dL), and FVIII mediated 18% of the association between AB group and incident cognitive impairment, according to the researchers.

Magnesium sulfate administered IV to pregnant women at risk of giving birth before 30 weeks gestation was not associated with neurologic, cognitive, behavioral, growth, or functional outcomes in their children at school age, investigators reported in the September 17 issue of JAMA. Researchers randomly assigned magnesium sulfate or placebo to pregnant women (n = 535 magnesium; n = 527 placebo) for whom birth was planned or expected before 30 weeks gestation; 1,255 fetuses were known to be alive at randomization. Of the 867 survivors available for follow-up, outcomes at school age (6 to 11) were determined for 669 children (77%). The investigators found that receiving antenatal magnesium sulfate was not associated with any long-term benefits or harms, compared with placebo. The study authors also observed a nonsignificant reduction in the risk of death in the magnesium sulfate group.

Older patients with Parkinson’s disease who underwent deep brain stimulation (DBS) had a similar 90-day complication risk, compared with that in younger patients, according to a study published online ahead of print August 25 in JAMA Neurology. Researchers analyzed data from more than 1,750 patients who had DBS from 2000 to 2009. Of those, 7.5% of subjects experienced at least one complication within 90 days of having the device implanted. The investigators determined that increasing age did not significantly affect the overall complication rates. The findings suggest that age alone should not be a primary exclusion factor for determining candidacy for DBS. “Instead, a clear focus on patients with medication-refractory and difficult to control on-off fluctuations with preserved cognition, regardless of age, may allow for an expansion of the traditional therapeutic window,” the researchers concluded.

Confusional arousals are highly prevalent in the general population, according to a study published in the August 26 issue of Neurology. A total of 19,136 people age 18 and older were interviewed about their sleep habits and whether they had experienced symptoms of the disorder. Participants also were asked about any medications they took and about mental illness diagnoses. Results showed that 15% had experienced an episode in the last year, with more than half reporting more than one episode per week. In the majority of cases, 84% of those with confusional arousals (also known as sleep drunkenness) also had a sleep disorder, mental health disorder, or were taking psychotropic drugs. Fewer than 1% of the people with confusional arousals had no known cause or related condition. “These episodes of waking up confused have received considerably less attention than sleepwalking even though the consequences can be just as serious,” stated researchers.

High potassium intake is associated with a lower risk of all stroke and ischemic stroke and all-cause mortality in older women, investigators reported online ahead of print September 4 in Stroke. Researchers studied 90,137 postmenopausal women ages 50 to 79 for an average of 11 years. Women who consumed the most potassium were 10% less likely to die than were those who had consumed the least amount. The women also were 12% less likely to have a stroke and 16% less likely to have an ischemic stroke than were women who consumed the least amount. Those without hypertension who had consumed the most potassium had a 27% lower ischemic stroke risk and 21% reduced risk for all stroke types, compared with women who had the least potassium in their diets. Among women with hypertension, those who consumed the most potassium had a lower risk of mortality.

 

 

Regular blood transfusion therapy significantly reduced the recurrence of cerebral infarct in children with sickle cell anemia, according to a study published in the August 21 issue of the New England Journal of Medicine. During the three-year study, 196 children ages 5 through 15 with sickle cell anemia who had previously had a silent stroke were followed. Children who underwent regular transfusions were 58% less likely to have another silent stroke or an overt stroke, while those who had no transfusions were more than twice as likely to experience repeat strokes. In addition, children who had monthly transfusions were less likely to have a range of other sickle cell anemia–related problems, such as episodes of extreme pain. Overall, 295 pain episodes occurred among children who did not receive transfusions, compared with 126 episodes among those receiving treatment.

Stroke incidence and mortality rates decreased from 1987 to 2011, according to a study published in the July 16 issue of JAMA. The findings were based on data from the Atherosclerosis Risk in Communities cohort of 15,792 US residents between the ages of 45 and 64 who were monitored during the 1980s. The new study followed the progress of 14,357 participants who were free of stroke in 1987 and monitored hospitalizations from stroke and deaths from 1987 to 2011. Stroke incidence decreased over time in Caucasians and African Americans, with an age-adjusted incidence rate ratio of 0.76. The absolute decrease was 0.93 per 1,000 person-years overall. The overall mortality rate after stroke decreased over time (hazard ratio, 0.80), with an absolute decrease of 8.09 per 100 strokes after 10 years.

The FDA has approved Vimpat (lacosamide) C-V as monotherapy in the treatment of partial-onset seizures in patients with epilepsy ages 17 and older. The monotherapy approval for Vimpat is based on a phase III historical-control conversion to lacosamide monotherapy study in adult patients with epilepsy with partial-onset seizures. This study met its primary end point, demonstrating that the exit percentage for patients converting to lacosamide (400 mg/day) was lower than the historical control exit percentage used as a comparator. Lacosamide (300 mg/day) also met the prespecified criteria for efficacy. Based on individual patients’ needs, physicians can choose between Vimpat formulations—tablets, oral solution, or injection. Vimpat (UCB; Brussels) is already approved in the US as adjunctive treatment for partial-onset seizures in patients in this age group.

Disruption of intestinal homeostasis is an early and immune-mediated event in experimental autoimmune encephalomyelitis, according to a study published September 3 in PLoS ONE. Investigators observed structural changes in the mucous membrane of the small intestine and an increase in inflammatory T cells, as well as a reduction in immunosuppressive cells. “Our findings provide support for the idea that a damaged intestinal barrier can prevent the body ending an autoimmune reaction in the normal manner, leading to a chronic disease such as multiple sclerosis,” stated the study authors. “In particular, an increased understanding of the regulation of tight junctions at the blood–brain barrier and in the intestinal wall may be crucial for design of future innovative therapies.”

Children and adolescents with autism have a surplus of synapses in the brain due to reduced developmental spine pruning, investigators reported in the September 3 issue of Neuron. Researchers examined brains from children with autism who had died from other causes. Thirteen brains were from children ages 2 to 9, 13 brains were from children ages 13 to 20, and 22 brains were from children without autism. The investigators measured synapse density in a small section of tissue in each brain by counting the number of tiny spines that branch from the cortical neurons. During late childhood, spine density had decreased by about half in the control brains, compared with 16% in the brains from patients with autism. “Hundreds of genes have been linked to autism, but almost all of our human subjects had overactive mTOR and decreased autophagy, and all appear to have a lack of normal synaptic pruning,” stated study authors.

Macromolecular proton fraction (MPF) mapping enables quantitative assessment of demyelination in normal-appearing brain tissues and shows primary clinical relevance of gray matter damage in multiple sclerosis (MS), according to a study published online ahead of print September 10 in Radiology. Researchers examined 30 patients with MS, 18 with relapsing-remitting MS (RRMS) and 12 with secondary progressive MS. Fourteen healthy controls also were included. Each participant underwent MRI on a 3-T imager, and the investigators reconstructed 3-D whole-brain MPF maps to examine normal-appearing white matter, gray matter, and MS lesions. MPF was significantly lower in both white and gray matter in patients with RRMS, compared with healthy controls, and it was significantly reduced in normal-appearing brain tissues and lesions of patients with secondary progressive MS, compared with patients with RRMS with the largest relative decrease in gray matter.

 

 

Type 2 diabetes mellitus is associated with mild cognitive impairment (MCI) and MCI subtypes in middle-aged, but not in elderly participants, according to a study published online ahead of print July 7 in the Journal of Alzheimer’s Disease. A total of 560 participants diagnosed with MCI were compared with 1,376 cognitively normal participants from the Heinz Nixdorf Recall study. Of participants with MCI, 289 had amnestic MCI and 271 had nonamnestic MCI. Type 2 diabetes mellitus was strongly associated with MCI and MCI subtypes in those ages 50 to 65. Examination of differences by gender revealed a stronger association of diabetes with amnestic MCI in middle-aged women and an even stronger association with nonamnestic MCI in middle-aged men.

Kimberly D. Williams

Fish oil may reduce seizure frequency in patients with epilepsy, according to a study published online ahead of print September 8 in the Journal of Neurology, Neurosurgery, and Psychiatry. Twenty-four patients with drug-resistant epilepsy were given three separate treatments for 10 weeks and separated by a six-week period. Participants were given three capsules of fish oil daily, plus three capsules of corn oil (placebo); six capsules of fish oil daily; and three capsules of corn oil twice daily. The average number of seizures among those taking low-dose fish oil was around 12 per month, compared with slightly more than 17 for the high dose, and slightly more than 18 for the placebo. Two people who had the low dose were seizure free during the 10-week trial. No one taking the high-dose fish oil or the placebo was seizure free.

Blood type AB and higher factor VIII (FVIII) are associated with increased incidence of cognitive impairment, according to a study published online ahead of print September 10 in Neurology. Findings are based on a cohort from the REGARDS Study, in which more than 30,000 people were followed for an average of 3.4 years. After adjusting for age, race, region, and sex, the researchers found that people with blood group AB (odds ratio [OR], 1.82) and those with higher FVIII (OR, 1.24) had an increased risk of cognitive impairment. The mean FVIII was higher in people with blood type AB (142 IU/dL), compared with O (104 IU/dL), and FVIII mediated 18% of the association between AB group and incident cognitive impairment, according to the researchers.

Magnesium sulfate administered IV to pregnant women at risk of giving birth before 30 weeks gestation was not associated with neurologic, cognitive, behavioral, growth, or functional outcomes in their children at school age, investigators reported in the September 17 issue of JAMA. Researchers randomly assigned magnesium sulfate or placebo to pregnant women (n = 535 magnesium; n = 527 placebo) for whom birth was planned or expected before 30 weeks gestation; 1,255 fetuses were known to be alive at randomization. Of the 867 survivors available for follow-up, outcomes at school age (6 to 11) were determined for 669 children (77%). The investigators found that receiving antenatal magnesium sulfate was not associated with any long-term benefits or harms, compared with placebo. The study authors also observed a nonsignificant reduction in the risk of death in the magnesium sulfate group.

Older patients with Parkinson’s disease who underwent deep brain stimulation (DBS) had a similar 90-day complication risk, compared with that in younger patients, according to a study published online ahead of print August 25 in JAMA Neurology. Researchers analyzed data from more than 1,750 patients who had DBS from 2000 to 2009. Of those, 7.5% of subjects experienced at least one complication within 90 days of having the device implanted. The investigators determined that increasing age did not significantly affect the overall complication rates. The findings suggest that age alone should not be a primary exclusion factor for determining candidacy for DBS. “Instead, a clear focus on patients with medication-refractory and difficult to control on-off fluctuations with preserved cognition, regardless of age, may allow for an expansion of the traditional therapeutic window,” the researchers concluded.

Confusional arousals are highly prevalent in the general population, according to a study published in the August 26 issue of Neurology. A total of 19,136 people age 18 and older were interviewed about their sleep habits and whether they had experienced symptoms of the disorder. Participants also were asked about any medications they took and about mental illness diagnoses. Results showed that 15% had experienced an episode in the last year, with more than half reporting more than one episode per week. In the majority of cases, 84% of those with confusional arousals (also known as sleep drunkenness) also had a sleep disorder, mental health disorder, or were taking psychotropic drugs. Fewer than 1% of the people with confusional arousals had no known cause or related condition. “These episodes of waking up confused have received considerably less attention than sleepwalking even though the consequences can be just as serious,” stated researchers.

High potassium intake is associated with a lower risk of all stroke and ischemic stroke and all-cause mortality in older women, investigators reported online ahead of print September 4 in Stroke. Researchers studied 90,137 postmenopausal women ages 50 to 79 for an average of 11 years. Women who consumed the most potassium were 10% less likely to die than were those who had consumed the least amount. The women also were 12% less likely to have a stroke and 16% less likely to have an ischemic stroke than were women who consumed the least amount. Those without hypertension who had consumed the most potassium had a 27% lower ischemic stroke risk and 21% reduced risk for all stroke types, compared with women who had the least potassium in their diets. Among women with hypertension, those who consumed the most potassium had a lower risk of mortality.

 

 

Regular blood transfusion therapy significantly reduced the recurrence of cerebral infarct in children with sickle cell anemia, according to a study published in the August 21 issue of the New England Journal of Medicine. During the three-year study, 196 children ages 5 through 15 with sickle cell anemia who had previously had a silent stroke were followed. Children who underwent regular transfusions were 58% less likely to have another silent stroke or an overt stroke, while those who had no transfusions were more than twice as likely to experience repeat strokes. In addition, children who had monthly transfusions were less likely to have a range of other sickle cell anemia–related problems, such as episodes of extreme pain. Overall, 295 pain episodes occurred among children who did not receive transfusions, compared with 126 episodes among those receiving treatment.

Stroke incidence and mortality rates decreased from 1987 to 2011, according to a study published in the July 16 issue of JAMA. The findings were based on data from the Atherosclerosis Risk in Communities cohort of 15,792 US residents between the ages of 45 and 64 who were monitored during the 1980s. The new study followed the progress of 14,357 participants who were free of stroke in 1987 and monitored hospitalizations from stroke and deaths from 1987 to 2011. Stroke incidence decreased over time in Caucasians and African Americans, with an age-adjusted incidence rate ratio of 0.76. The absolute decrease was 0.93 per 1,000 person-years overall. The overall mortality rate after stroke decreased over time (hazard ratio, 0.80), with an absolute decrease of 8.09 per 100 strokes after 10 years.

The FDA has approved Vimpat (lacosamide) C-V as monotherapy in the treatment of partial-onset seizures in patients with epilepsy ages 17 and older. The monotherapy approval for Vimpat is based on a phase III historical-control conversion to lacosamide monotherapy study in adult patients with epilepsy with partial-onset seizures. This study met its primary end point, demonstrating that the exit percentage for patients converting to lacosamide (400 mg/day) was lower than the historical control exit percentage used as a comparator. Lacosamide (300 mg/day) also met the prespecified criteria for efficacy. Based on individual patients’ needs, physicians can choose between Vimpat formulations—tablets, oral solution, or injection. Vimpat (UCB; Brussels) is already approved in the US as adjunctive treatment for partial-onset seizures in patients in this age group.

Disruption of intestinal homeostasis is an early and immune-mediated event in experimental autoimmune encephalomyelitis, according to a study published September 3 in PLoS ONE. Investigators observed structural changes in the mucous membrane of the small intestine and an increase in inflammatory T cells, as well as a reduction in immunosuppressive cells. “Our findings provide support for the idea that a damaged intestinal barrier can prevent the body ending an autoimmune reaction in the normal manner, leading to a chronic disease such as multiple sclerosis,” stated the study authors. “In particular, an increased understanding of the regulation of tight junctions at the blood–brain barrier and in the intestinal wall may be crucial for design of future innovative therapies.”

Children and adolescents with autism have a surplus of synapses in the brain due to reduced developmental spine pruning, investigators reported in the September 3 issue of Neuron. Researchers examined postmortem brains from children who had died from other causes: 13 brains from children with autism ages 2 to 9, 13 from patients with autism ages 13 to 20, and 22 from children without autism. The investigators measured synapse density in a small section of tissue in each brain by counting the number of tiny spines that branch from the cortical neurons. During late childhood, spine density decreased by about half in the control brains, compared with 16% in the brains from patients with autism. “Hundreds of genes have been linked to autism, but almost all of our human subjects had overactive mTOR and decreased autophagy, and all appear to have a lack of normal synaptic pruning,” the study authors stated.

Macromolecular proton fraction (MPF) mapping enables quantitative assessment of demyelination in normal-appearing brain tissues and shows the primary clinical relevance of gray matter damage in multiple sclerosis (MS), according to a study published online ahead of print September 10 in Radiology. Researchers examined 30 patients with MS, 18 with relapsing-remitting MS (RRMS) and 12 with secondary progressive MS. Fourteen healthy controls also were included. Each participant underwent MRI on a 3-T imager, and the investigators reconstructed 3-D whole-brain MPF maps to examine normal-appearing white matter, gray matter, and MS lesions. MPF was significantly lower in both white and gray matter in patients with RRMS, compared with healthy controls, and it was significantly reduced in normal-appearing brain tissues and lesions of patients with secondary progressive MS, compared with patients with RRMS, with the largest relative decrease in gray matter.

Type 2 diabetes mellitus is associated with mild cognitive impairment (MCI) and MCI subtypes in middle-aged, but not elderly, participants, according to a study published online ahead of print July 7 in the Journal of Alzheimer’s Disease. A total of 560 participants diagnosed with MCI were compared with 1,376 cognitively normal participants from the Heinz Nixdorf Recall study. Of participants with MCI, 289 had amnestic MCI and 271 had nonamnestic MCI. Type 2 diabetes mellitus was strongly associated with MCI and MCI subtypes in those ages 50 to 65. Examination of differences by gender revealed a stronger association of diabetes with amnestic MCI in middle-aged women and an even stronger association with nonamnestic MCI in middle-aged men.

Kimberly D. Williams

Issue
Neurology Reviews - 22(10)
Page Number
3-4
Display Headline
New and Noteworthy Information—October 2014

Esophagus/Upper GI section

One highlight of the AGA Postgraduate Course was the esophageal disease session. The presentation by Dr. Michael B. Wallace summarized recent studies using advanced imaging modalities in patients with Barrett’s esophagus (BE). Studies using chromoscopy and virtual chromoscopy techniques such as narrow-band imaging have increased the detection of dysplasia in BE patients. These so-called red flag techniques image large areas of mucosa to detect mucosal abnormalities suspicious for the presence of dysplasia or neoplasia.

Dr. Herbert C. Wolfsen

Endomicroscopy describes the use of real-time, targeted endoscopic imaging modalities capable of producing histology-like images of mucosa at depths up to 200 microns. Confocal laser endomicroscopy (CLE) uses a blue-light laser (405 nm) with collimated light detection and analysis to produce 1,000-fold magnified images. When used with a fluorescent contrast agent such as fluorescein or acriflavine dye, these systems produce cellular-level images comparable to those seen with optical microscopy. A recent study by Canto et al found that CLE detected BE dysplasia at rates similar to targeted plus random biopsy protocols. Further, a multicenter study will soon begin using a tethered-capsule (nonendoscopic) form of volumetric laser endomicroscopy as a method to screen for BE.

Dr. Amitabh Chak expanded on these findings and reviewed the issues surrounding screening and surveillance of BE patients for the early detection and treatment of esophageal adenocarcinoma. This presentation suggested that necessary future improvements include cost-effective advanced imaging techniques optimized for use in clinical practice, molecular biomarker panels to predict which patients may progress to dysplasia and neoplasia, and high-quality intensive endoscopic surveillance for high-risk BE patients.

Dr. Joe Murray’s comprehensive presentation on celiac disease described the protean clinical presentations of the disease as well as the optimal use of serologic and endoscopic testing. Celiac disease is increasingly identified in middle-aged patients (median age, 45 years) without diarrhea. Classic malabsorption symptoms of diarrhea, weight loss, steatorrhea, and nutritional deficiencies are found in 25% of patients. Half of celiac patients have only one symptom, such as anemia, diarrhea, lactose intolerance, or weight loss. Nongastrointestinal symptoms, such as infertility, bone disease, chronic fatigue, or abnormal liver enzyme test results, are present in another 25% of patients.

The optimal use of serologic and endoscopic testing was reviewed, including the differential diagnosis of lymphocytic duodenosis, which includes use of nonsteroidal anti-inflammatory drugs (NSAIDs), Helicobacter pylori infection, Crohn’s disease, and Sjögren’s syndrome. Proper duodenal biopsy technique was emphasized, with two forceps biopsy samples obtained from the duodenal bulb and four samples obtained from the second portion of the duodenum. Also discussed was the utility of HLA typing for DQ2/8 in patients currently on a gluten-free diet, patients with negative serology results but abnormal duodenal biopsy findings, and those with negative serology results who are at increased genetic risk.

Dr. James Scheiman discussed management of the complex interactions and risks associated with the use of NSAIDs, aspirin, clopidogrel, and proton pump inhibitors (PPIs) in the setting of previous ulcer disease, gastrointestinal bleeding, and Helicobacter pylori infection. Results from randomized controlled studies and observational studies were the basis for the Consensus Group’s recommendation of PPI therapy as the GI bleeding protective strategy of choice. PPI therapy was also recommended as cost-effective treatment for patients taking aspirin, although the risks and benefits of long-term PPI treatment require patient education and individualization.

Finally, Dr. Rhonda Souza discussed eosinophilic esophagitis (EoE), a chronic immune/antigen-mediated esophageal disease characterized clinically by symptoms of esophageal dysfunction, such as dysphagia, food impaction, chest pain, heartburn, abdominal pain, and refractory reflux dyspepsia, associated with eosinophil-predominant inflammation. Endoscopic features include the ringed esophagus, white specks, linear furrows, and strictures. Histologic features of EoE are eosinophilia (more than 15 intraepithelial eosinophils per high-power field), basal zone hyperplasia, and dilated intercellular spaces. These eosinophils are activated by the T-helper 2 immune system via interleukins-4, -5, and -13. This inflammation is mediated by dramatic upregulation of the eotaxin-3 gene, which produces a potent chemoattractant for eosinophils. Treatment of EoE usually requires proton pump inhibitors, based on their acid-suppressive, antioxidant, and anti-inflammatory effects. Topical corticosteroids and endoscopic dilation of symptomatic strictures may also be necessary. Nondrug treatment approaches such as the six-food elimination diet (SFED), which removes the most common food allergens (milk, soy, eggs, wheat, nuts, and seafood), have also been successful.

Dr. Wolfsen is in the division of gastroenterology and hepatology, Mayo Clinic, Jacksonville, Fla. He moderated this session during the 2014 Digestive Diseases Week.
