Systematic Review of Physiologic Monitor Alarm Characteristics and Pragmatic Interventions to Reduce Alarm Frequency

Miriam Zander, BA
Division of General Pediatrics, The Children's Hospital of Philadelphia

Clinical alarm safety has become a recent target for improvement in many hospitals. In 2013, The Joint Commission released a National Patient Safety Goal prompting accredited hospitals to establish alarm safety as a hospital priority, identify the most important alarm signals to manage, and, by 2016, develop policies and procedures that address alarm management.[1] In addition, the Emergency Care Research Institute has named alarm hazards the top health technology hazard each year since 2012.[2]

The primary arguments supporting the elevation of alarm management to a national hospital priority in the United States include the following: (1) clinicians rely on alarms to notify them of important physiologic changes, (2) alarms occur frequently and usually do not warrant clinical intervention, and (3) alarm overload renders clinicians unable to respond to all alarms, resulting in alarm fatigue: responding more slowly or ignoring alarms that may represent actual clinical deterioration.[3, 4] These arguments are built largely on anecdotal data, reported safety event databases, and small studies that have not previously been systematically analyzed.

Despite the national focus on alarms, we still know very little about fundamental questions key to improving alarm safety. In this systematic review, we aimed to answer 3 key questions about physiologic monitor alarms: (1) What proportion of alarms warrant attention or clinical intervention (ie, actionable alarms), and how does this proportion vary between adult and pediatric populations and between intensive care unit (ICU) and ward settings? (2) What is the relationship between alarm exposure and clinician response time? (3) What interventions are effective in reducing the frequency of alarms?

We limited our scope to monitor alarms because few studies have evaluated the characteristics of alarms from other medical devices, and because missing relevant monitor alarms could adversely impact patient safety.

METHODS

We performed a systematic review of the literature in accordance with the Meta‐Analysis of Observational Studies in Epidemiology guidelines[5] and developed this manuscript using the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement.[6]

Eligibility Criteria

With help from an experienced biomedical librarian (C.D.S.), we searched PubMed, the Cumulative Index to Nursing and Allied Health Literature, Scopus, Cochrane Library, ClinicalTrials.gov, and Google Scholar from January 1980 through April 2015 (see Supporting Information in the online version of this article for the search terms and queries). We hand searched the reference lists of included articles and reviewed our personal libraries to identify additional relevant studies.

We included peer‐reviewed, original research studies published in English, Spanish, or French that addressed the questions outlined above. Eligible patient populations were children and adults admitted to hospital inpatient units and emergency departments (EDs). We excluded alarms in procedural suites or operating rooms (typically responded to by anesthesiologists already with the patient) because of the differences in environment of care, staff‐to‐patient ratio, and equipment. We included observational studies reporting the actionability of physiologic monitor alarms (ie, alarms warranting special attention or clinical intervention), as well as nurse responses to these alarms. We excluded studies focused on the effects of alarms unrelated to patient safety, such as families' and patients' stress, noise, or sleep disturbance. We included only intervention studies evaluating pragmatic interventions ready for clinical implementation (ie, not experimental devices or software algorithms).

Selection Process and Data Extraction

First, 2 authors screened the titles and abstracts of articles for eligibility. To maximize sensitivity, if at least 1 author considered the article relevant, the article proceeded to full‐text review. Second, the full texts of articles screened were independently reviewed by 2 authors in an unblinded fashion to determine their eligibility. Any disagreements concerning eligibility were resolved by team consensus. To ensure consistency in eligibility determinations across the team, a core group of the authors (C.W.P., C.P.B., E.E., and V.V.G.) held a series of meetings to review and discuss each potentially eligible article and reach consensus on the final list of included articles. Two authors independently extracted the following characteristics from included studies: alarm review methods, analytic design, fidelity measurement, consideration of unintended adverse safety consequences, and key results. Reviewers were not blinded to journal, authors, or affiliations.

Synthesis of Results and Risk Assessment

Given the high degree of heterogeneity in methodology, we were unable to generate summary proportions of the observational studies or perform a meta‐analysis of the intervention studies. Thus, we organized the studies into clinically relevant categories and presented key aspects in tables. Due to the heterogeneity of the studies and the controversy surrounding quality scores,[5] we did not generate summary scores of study quality. Instead, we evaluated and reported key design elements that had the potential to bias the results. To recognize the more comprehensive studies in the field, we developed by consensus a set of characteristics that distinguished studies with lower risk of bias. These characteristics are shown and defined in Table 1.
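
For concreteness, the consensus criteria summarized in Table 1 operate as all-or-nothing checklists: a study earns the lower-risk-of-bias designation only if every indicator in its category is reported. The sketch below is illustrative only; the class and field names are ours and do not come from the review protocol.

```python
from dataclasses import dataclass

@dataclass
class ObservationalStudy:
    """Risk-of-bias indicators for observational alarm studies (per Table 1)."""
    two_independent_reviewers: bool
    clinical_expert_reviewer: bool      # physician or nurse
    reviewer_not_in_patient_care: bool  # not simultaneously caring for the patient
    clear_actionability_definition: bool

    def lower_risk_of_bias(self) -> bool:
        # All four indicators must be reported for the lower-risk designation.
        return all([
            self.two_independent_reviewers,
            self.clinical_expert_reviewer,
            self.reviewer_not_in_patient_care,
            self.clear_actionability_definition,
        ])

@dataclass
class InterventionStudy:
    """Risk-of-bias indicators for intervention studies (per Table 1)."""
    census_accounted_for: bool
    statistical_or_spc_testing: bool
    fidelity_measured: bool
    safety_outcomes_reported: bool

    def lower_risk_of_bias(self) -> bool:
        return all([
            self.census_accounted_for,
            self.statistical_or_spc_testing,
            self.fidelity_measured,
            self.safety_outcomes_reported,
        ])

# Example: a study with a single alarm reviewer never qualifies as lower risk.
print(ObservationalStudy(False, True, True, True).lower_risk_of_bias())  # False
```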

Table 1. General Characteristics of Included Studies
Columns: First Author and Publication Year | Alarm Review Method (Monitor System; Direct Observation; Medical Record Review; Rhythm Annotation; Video Observation; Remote Monitoring Staff) | Medical Device Industry Involved | Indicators of Potential Bias for Observational Studies (Two Independent Reviewers; At Least 1 Reviewer Is a Clinical Expert; Reviewer Not Simultaneously in Patient Care; Clear Definition of Alarm Actionability) | Indicators of Potential Bias for Intervention Studies (Census Included; Statistical Testing or QI SPC Methods; Fidelity Assessed; Safety Assessed) | Lower Risk of Bias
  • NOTE: Lower risk of bias for observational studies required all of the following characteristics be reported: (1) two independent reviewers of alarms, (2) at least 1 clinical expert reviewer (ie, physician or nurse), (3) the reviewer was not simultaneously involved in clinical care, and (4) there was a clear definition of alarm actionability provided in the article. These indicators assess detection bias, observer bias, analytical bias, and reporting bias and were derived from the Meta‐analysis of Observational Studies in Epidemiology checklist.[5]
  • Lower risk of bias for intervention studies required all of the following characteristics be reported: (1) patient census accounted for in analysis, (2) formal statistical testing of effect or statistical process control methods used to evaluate effect, (3) intervention fidelity measured and reported (defined broadly as some measurement of whether the intervention was delivered as intended), and (4) relevant patient safety outcomes measured and reported (eg, the rate of code blue events before and after the intervention was implemented). These indicators assess reporting bias and internal validity bias and were derived from the Downs and Black checklist.[42]
  • Alarm review method definitions: Monitor system: alarm data were electronically collected directly from the physiologic monitors and saved on a computer device through software such as BedMasterEx. Direct observation: an in‐person observer, such as a research assistant or a nurse, takes note of the alarm data and/or responses to alarms. Medical record review: data on alarms and/or responses to alarms were extracted from the patient medical records. Rhythm annotation: data on waveforms from cardiac monitors were collected and saved on a computer device through software such as BedMasterEx. Video observation: video cameras were set up in the patient's room and recorded data on alarms and/or responses to alarms. Remote monitor staff: clinicians situated at a remote location observe the patient via video camera and may be able to communicate with the patient or the patient's assigned nurse.
  • Abbreviations: QI, quality improvement; RN, registered nurse; SPC, statistical process control.
  • Footnotes: *Monitor system + RN interrogation. Assigned nurse making observations. Monitor from central station. Alarm outcome reported using run chart, and fidelity outcomes presented using statistical process control charts.

Adult Observational
Atzema 2006[7] ✓*
Billinghurst 2003[8]
Biot 2000[9]
Chambrin 1999[10]
Drew 2014[11]
Gazarian 2014[12]
Görges 2009[13]
Gross 2011[15]
Inokuchi 2013[14]
Koski 1990[16]
Morales Sánchez 2014[17]
Pergher 2014[18]
Siebig 2010[19]
Voepel‐Lewis 2013[20]
Way 2014[21]
Pediatric Observational
Bonafide 2015[22]
Lawless 1994[23]
Rosman 2013[24]
Talley 2011[25]
Tsien 1997[26]
van Pul 2015[27]
Varpio 2012[28]
Mixed Adult and Pediatric Observational
O'Carroll 1986[29]
Wiklund 1994[30]
Adult Intervention
Albert 2015[32]
Cvach 2013[33]
Cvach 2014[34]
Graham 2010[35]
Rheineck‐Leyssius 1997[36]
Taenzer 2010[31]
Whalen 2014[37]
Pediatric Intervention
Dandoy 2014[38]

For the purposes of this review, we defined nonactionable alarms as including both invalid (false) alarms, which do not accurately represent the physiologic status of the patient, and valid alarms that do not warrant special attention or clinical intervention (nuisance alarms). We did not separate out invalid alarms due to the tremendous variation between studies in how validity was measured.
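
A minimal sketch of this working taxonomy, assuming each alarm has already been judged on two axes (validity and whether it warranted attention or intervention); the function and labels below are ours, for illustration only.

```python
def classify_alarm(valid: bool, warranted_intervention: bool) -> str:
    """Bucket an alarm using the review's working definitions.

    valid: the alarm accurately reflected the patient's physiologic status.
    warranted_intervention: the alarm warranted special attention or
    clinical intervention (ie, it was actionable).
    """
    if valid and warranted_intervention:
        return "actionable"
    if valid and not warranted_intervention:
        return "nuisance (valid but nonactionable)"
    return "invalid/false (nonactionable)"

# Both nuisance and invalid alarms count as nonactionable in this review.
print(classify_alarm(valid=True, warranted_intervention=False))
```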

RESULTS

Study Selection

Search results produced 4629 articles (see the flow diagram in the Supporting Information in the online version of this article), of which 32 articles were eligible: 24 observational studies describing alarm characteristics and 8 studies describing interventions to reduce alarm frequency.

Observational Study Characteristics

Characteristics of included studies are shown in Table 1. Of the 24 observational studies,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] 15 included adult patients,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] 7 included pediatric patients,[22, 23, 24, 25, 26, 27, 28] and 2 included both adult and pediatric patients.[29, 30] All were single‐hospital studies, except for 1 study by Chambrin and colleagues[10] that included 5 sites. The number of patient‐hours examined in each study ranged from 60 to 113,880.[7, 8, 9, 10, 11, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30] Hospital settings included ICUs (n = 16),[9, 10, 11, 13, 14, 16, 17, 18, 19, 22, 23, 24, 25, 26, 27, 29] general wards (n = 5),[12, 15, 20, 22, 28] EDs (n = 2),[7, 21] postanesthesia care unit (PACU) (n = 1),[30] and cardiac care unit (CCU) (n = 1).[8] Studies varied in the type of physiologic signals recorded and data collection methods, ranging from direct observation by a nurse who was simultaneously caring for patients[29] to video recording with expert review.[14, 19, 22] Four observational studies met the criteria for lower risk of bias.[11, 14, 15, 22]

Intervention Study Characteristics

Of the 8 intervention studies, 7 included adult patients,[31, 32, 33, 34, 35, 36, 37] and 1 included pediatric patients.[38] All were single‐hospital studies; 6 were quasi‐experimental[31, 33, 34, 35, 37, 38] and 2 were experimental.[32, 36] Settings included progressive care units (n = 3),[33, 34, 35] CCUs (n = 3),[32, 33, 37] wards (n = 2),[31, 38] PACU (n = 1),[36] and a step‐down unit (n = 1).[32] All except 1 study[32] used the monitoring system to record alarm data. Several studies evaluated multicomponent interventions that included combinations of the following: widening alarm parameters,[31, 35, 36, 37, 38] instituting alarm delays,[31, 34, 36, 38] reconfiguring alarm acuity,[35, 37] use of secondary notifications,[34] daily change of electrocardiographic electrodes or use of disposable electrocardiographic wires,[32, 33, 38] universal monitoring in high‐risk populations,[31] and timely discontinuation of monitoring in low‐risk populations.[38] Four intervention studies met our prespecified lower risk of bias criteria.[31, 32, 36, 38]

Proportion of Alarms Considered Actionable

Results of the observational studies are provided in Table 2. The proportion of alarms that were actionable was <1% to 26% in adult ICU settings,[9, 10, 11, 13, 14, 16, 17, 19] 20% to 36% in adult ward settings,[12, 15, 20] 17% in a mixed adult and pediatric PACU setting,[30] 3% to 13% in pediatric ICU settings,[22, 23, 24, 25, 26] and 1% in a pediatric ward setting.[22]

Table 2. Results of Included Observational Studies
Columns: First Author and Publication Year | Setting | Monitored Patient-Hours | Signals Included (SpO2; ECG Arrhythmia; ECG Parameters*; Blood Pressure) | Total Alarms | Actionable Alarms | Alarm Response | Lower Risk of Bias
  • NOTE: Lower risk of bias for observational studies required all of the following characteristics be reported: (1) two independent reviewers of alarms, (2) at least 1 clinical expert reviewer (ie, physician or nurse), (3) the reviewer was not simultaneously involved in clinical care, and (4) there was a clear definition of alarm actionability provided in the article.
  • Abbreviations: CCU, cardiac/telemetry care unit; ECG, electrocardiogram; ED, emergency department; ICU, intensive care unit; PACU, postanesthesia care unit; SpO2, oxygen saturation; VT, ventricular tachycardia.
  • Footnotes: *Includes respiratory rate measured via ECG leads. Actionable is defined as alarms warranting special attention or clinical intervention. Valid is defined as the alarm accurately representing the physiologic status of the patient. Directly addresses relationship between alarm exposure and response time. ∥Not provided directly; estimated from description of data collection methods.

Adult
Atzema 2006[7] ED 371 1,762 0.20%
Billinghurst 2003[8] CCU 420 751 Not reported; 17% were valid Nurses with higher acuity patients and smaller % of valid alarms had slower response rates
Biot 2000[9] ICU 250 3,665 3%
Chambrin 1999[10] ICU 1,971 3,188 26%
Drew 2014[11] ICU 48,173 2,558,760 0.3% of 3,861 VT alarms
Gazarian 2014[12] Ward 54 nurse‐hours 205 22% Response to 47% of alarms
Görges 2009[13] ICU 200 1,214 5%
Gross 2011[15] Ward 530 4,393 20%
Inokuchi 2013[14] ICU 2,697 11,591 6%
Koski 1990[16] ICU 400 2,322 12%
Morales Sánchez 2014[17] ICU 434 sessions 215 25% Response to 93% of alarms, of which 50% were within 10 seconds
Pergher 2014[18] ICU 60 76 Not reported 72% of alarms stopped before nurse response or had >10 minutes response time
Siebig 2010[19] ICU 982 5,934 15%
Voepel‐Lewis 2013[20] Ward 1,616 710 36% Response time was longer for patients in highest quartile of total alarms
Way 2014[21] ED 93 572 Not reported; 75% were valid Nurses responded to more alarms in resuscitation room vs acute care area, but response time was longer
Pediatric
Bonafide 2015[22] Ward + ICU 210 5,070 13% PICU, 1% ward Incremental increases in response time as number of nonactionable alarms in preceding 120 minutes increased
Lawless 1994[23] ICU 928 2,176 6%
Rosman 2013[24] ICU 8,232 54,656 4% of rhythm alarms "true critical"
Talley 2011[25] ICU 1,470∥ 2,245 3%
Tsien 1997[26] ICU 298 2,942 8%
van Pul 2015[27] ICU 113,880∥ 222,751 Not reported Assigned nurse did not respond to 6% of alarms within 45 seconds
Varpio 2012[28] Ward 49 unit‐hours 446 Not reported 70% of all alarms and 41% of crisis alarms were not responded to within 1 minute
Both
O'Carroll 1986[29] ICU 2,258∥ 284 2%
Wiklund 1994[30] PACU 207 1,891 17%

Relationship Between Alarm Exposure and Response Time

Whereas 9 studies addressed response time,[8, 12, 17, 18, 20, 21, 22, 27, 28] only 2 evaluated the relationship between alarm burden and nurse response time.[20, 22] Voepel‐Lewis and colleagues found that nurse responses were slower to patients with the highest quartile of alarms (57.6 seconds) compared to those with the lowest (45.4 seconds) or medium (42.3 seconds) quartiles of alarms on an adult ward (P = 0.046). They did not find an association between false alarm exposure and response time.[20] Bonafide and colleagues found incremental increases in response time as the number of nonactionable alarms in the preceding 120 minutes increased (P < 0.001 in the pediatric ICU, P = 0.009 on the pediatric ward).[22]
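
As a simplified illustration of this type of exposure analysis (not the authors' code; the hypothetical alarm log, timestamps, and window handling below are assumptions for the example), one can count each patient's nonactionable alarms in the preceding 120 minutes and pair that exposure with the response time to the next alarm.

```python
from datetime import datetime, timedelta

# Hypothetical alarm log: (timestamp, actionable?, response time in seconds).
alarms = [
    (datetime(2015, 1, 1, 8, 0),  False, 40),
    (datetime(2015, 1, 1, 8, 30), False, 45),
    (datetime(2015, 1, 1, 9, 45), True,  70),
    (datetime(2015, 1, 1, 11, 0), True,  35),
]

WINDOW = timedelta(minutes=120)

def nonactionable_exposure(alarms, index):
    """Count nonactionable alarms in the 120 minutes before alarm `index`."""
    t = alarms[index][0]
    return sum(
        1
        for when, actionable, _ in alarms[:index]
        if not actionable and t - WINDOW <= when < t
    )

# Pair each actionable alarm's response time with the preceding exposure.
for i, (when, actionable, response_s) in enumerate(alarms):
    if actionable:
        print(f"{when:%H:%M} exposure={nonactionable_exposure(alarms, i)} "
              f"response={response_s}s")
```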

Interventions Effective in Reducing Alarms

Results of the 8 intervention studies are provided in Table 3. Three studies evaluated single interventions;[32, 33, 36] the remainder of the studies tested interventions with multiple components such that it was impossible to separate the effect of each component. Below, we have summarized study results, arranged by component. Because only 1 study focused on pediatric patients,[38] results from pediatric and adult settings are combined.

Table 3. Results of Included Intervention Studies
Columns: First Author and Publication Year | Design | Setting | Main Intervention Components (Widen Default Settings; Alarm Delays; Reconfigure Alarm Acuity; Secondary Notification; ECG Changes) | Other/Comments | Key Results | Results Statistically Significant? | Lower Risk of Bias
  • NOTE: Lower risk of bias for intervention studies required all of the following characteristics be reported: (1) patient census accounted for in analysis, (2) formal statistical testing of effect or statistical process control methods used to evaluate effect, (3) intervention fidelity measured and reported (defined broadly as some measurement of whether the intervention was delivered as intended), and (4) relevant patient safety outcomes measured and reported (eg, the rate of code blue events before and after the intervention was implemented).
  • Abbreviations: CCU, cardiac/telemetry care unit; ECG, electrocardiogram; ICU, intensive care unit; ITS, interrupted time series; PACU, postanesthesia care unit; PCU, progressive care unit; SpO2, oxygen saturation.
  • Footnotes: *Delays were part of secondary notification system only. Delays explored retrospectively only; not part of prospective evaluation. Preimplementation count not reported.

Adult
Albert 2015[32] Experimental (cluster‐randomized) CCU Disposable vs reusable wires Disposable leads had 29% fewer no‐telemetry, leads‐fail, and leads‐off alarms and similar artifact alarms
Cvach 2013[33] Quasi‐experimental (before and after) CCU and PCU Daily change of electrodes 46% fewer alarms/bed/day
Cvach 2014[34] Quasi‐experimental (ITS) PCU ✓* Slope of regression line suggests decrease of 0.75 alarms/bed/day
Graham 2010[35] Quasi‐experimental (before and after) PCU 43% fewer crisis, warning, and system warning alarms on unit
Rheineck‐Leyssius 1997[36] Experimental (RCT) PACU Alarm limit of 85% had fewer alarms/patient but higher incidence of true hypoxemia for >1 minute (6% vs 2%)
Taenzer 2010[31] Quasi‐experimental (before and after with concurrent controls) Ward Universal SpO2 monitoring Rescue events decreased from 3.4 to 1.2 per 1,000 discharges; transfers to ICU decreased from 5.6 to 2.9 per 1,000 patient‐days, only 4 alarms/patient‐day
Whalen 2014[37] Quasi‐experimental (before and after) CCU 89% fewer audible alarms on unit
Pediatric
Dandoy 2014[38] Quasi‐experimental (ITS) Ward Timely monitor discontinuation; daily change of ECG electrodes Decrease in alarms/patient‐days from 180 to 40

Widening alarm parameter default settings was evaluated in 5 studies:[31, 35, 36, 37, 38] 1 single intervention randomized controlled trial (RCT),[36] and 4 multiple‐intervention, quasi‐experimental studies.[31, 35, 37, 38] In the RCT, using a lower SpO2 limit of 85% instead of the standard 90% resulted in 61% fewer alarms. In the 4 multiple intervention studies, 1 study reported significant reductions in alarm rates (P < 0.001),[37] 1 study did not report preintervention alarm rates but reported a postintervention alarm rate of 4 alarms per patient‐day,[31] and 2 studies reported reductions in alarm rates but did not report any statistical testing.[35, 38] Of the 3 studies examining patient safety, 1 study with universal monitoring reported fewer rescue events and transfers to the ICU postimplementation,[31] 1 study reported no missed acute decompensations,[38] and 1 study (the RCT) reported significantly more true hypoxemia events (P = 0.001).[36]
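
To make the mechanism concrete, the sketch below counts lower-limit threshold crossings of a synthetic SpO2 trace at a 90% versus an 85% limit. The simulated data and the crossing definition are assumptions for illustration and do not reproduce the trial.

```python
import random

random.seed(0)
# Simulated once-per-second SpO2 readings hovering around 93%.
spo2 = [min(100, max(80, round(random.gauss(93, 3)))) for _ in range(3600)]

def alarm_count(trace, lower_limit):
    """Count threshold crossings: samples that drop below the limit
    when the previous sample was at or above it."""
    return sum(
        1
        for prev, cur in zip(trace, trace[1:])
        if prev >= lower_limit and cur < lower_limit
    )

for limit in (90, 85):
    print(f"lower limit {limit}%: {alarm_count(spo2, limit)} alarm triggers/hour")
```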

Alarm delays were evaluated in 4 studies:[31, 34, 36, 38] 3 multiple‐intervention, quasi‐experimental studies[31, 34, 38] and 1 retrospective analysis of data from an RCT.[36] One study combined alarm delays with widening defaults in a universal monitoring strategy and reported a postintervention alarm rate of 4 alarms per patient.[31] Another study evaluated delays as part of a secondary notification pager system and found a negatively sloping regression line that suggested a decreasing alarm rate, but did not report statistical testing.[34] The third study reported a reduction in alarm rates but did not report statistical testing.[38] The RCT compared the impact of a hypothetical 15‐second alarm delay to that of a lower SpO2 limit reduction and reported a similar reduction in alarms.[36] Of the 4 studies examining patient safety, 1 study with universal monitoring reported improvements,[31] 2 studies reported no adverse outcomes,[35, 38] and the retrospective analysis of data from the RCT reported the theoretical adverse outcome of delayed detection of sudden, severe desaturations.[36]
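
An alarm delay, by contrast, requires a limit violation to persist before the alarm annunciates. The sketch below (synthetic trace, assumed 1-second sampling, and a 15-second delay echoing the RCT's hypothetical analysis) shows how a sustained-violation rule suppresses a brief desaturation while still catching a prolonged one.

```python
def delayed_alarms(trace, lower_limit, delay_samples):
    """Annunciate only when the value stays below the limit for
    `delay_samples` consecutive samples (1 sample = 1 second here)."""
    alarms, run = 0, 0
    for value in trace:
        if value < lower_limit:
            run += 1
            if run == delay_samples:  # fire once per sustained violation
                alarms += 1
        else:
            run = 0
    return alarms

# Brief 5-second dip vs a sustained 20-second desaturation below 90%.
trace = [95] * 30 + [88] * 5 + [95] * 30 + [88] * 20 + [95] * 30
print(delayed_alarms(trace, lower_limit=90, delay_samples=1))   # 2: no delay
print(delayed_alarms(trace, lower_limit=90, delay_samples=15))  # 1: 15-second delay
```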

Reconfiguring alarm acuity was evaluated in 2 studies, both of which were multiple‐intervention quasi‐experimental studies.[35, 37] Both showed reductions in alarm rates: 1 was significant without increasing adverse events (P < 0.001),[37] and the other did not report statistical testing or safety outcomes.[35]

Secondary notification of nurses using pagers was the main intervention component of 1 study incorporating delays between the alarms and the alarm pages.[34] As mentioned above, a negatively sloping regression line was displayed, but no statistical testing or safety outcomes were reported.

Disposable electrocardiographic lead wires or daily electrode changes were evaluated in 3 studies:[32, 33, 38] 1 single intervention cluster‐randomized trial[32] and 2 quasi‐experimental studies.[33, 38] In the cluster‐randomized trial, disposable lead wires were compared to reusable lead wires, with disposable lead wires having significantly fewer technical alarms for lead signal failures (P = 0.03) but a similar number of monitoring artifact alarms (P = 0.44).[32] In a single‐intervention, quasi‐experimental study, daily electrode change showed a reduction in alarms, but no statistical testing was reported.[33] One multiple‐intervention, quasi‐experimental study incorporating daily electrode change showed fewer alarms without statistical testing.[38] Of the 2 studies examining patient safety, both reported no adverse outcomes.[32, 38]

DISCUSSION

This systematic review of physiologic monitor alarms in the hospital yielded the following main findings: (1) between 74% and 99% of physiologic monitor alarms were not actionable, (2) a significant relationship between alarm exposure and nurse response time was demonstrated in 2 small observational studies, and (3) although interventions were most often studied in combination, results from the studies with lower risk of bias suggest that widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and/or changing electrodes daily are the most promising interventions for reducing alarms. Only 5 of 8 intervention studies measured intervention safety; they found that widening alarm parameters and implementing alarm delays had mixed safety outcomes, whereas disposable electrocardiographic lead wires and daily electrode changes had no adverse safety outcomes.[29, 30, 34, 35, 36] Measuring safety is essential: an alarm‐reduction intervention cannot be judged successful if it also suppresses or delays the actionable alarms that signal true deterioration. The variation in results across studies likely reflects the wide range of care settings as well as differences in design and quality.

This field is still in its infancy, with 18 of the 32 articles published in the past 5 years. We anticipate improvements in quality and rigor as the field matures, as well as clinically tested interventions that incorporate smart alarms. Smart alarms integrate data from multiple physiologic signals and the patient's history to better detect physiologic changes in the patient and improve the positive predictive value of alarms. Academic-industry partnerships will be required to implement and rigorously test smart alarms and other emerging technologies in the hospital.

To our knowledge, this is the first systematic review focused on monitor alarms with specific review questions relevant to alarm fatigue. Cvach recently published an integrative review of alarm fatigue using research published through 2011.[39] Our review builds upon her work by contributing a more extensive and systematic search strategy with databases spanning nursing, medicine, and engineering, including additional languages, and including newer studies published through April 2015. In addition, we included multiple cross‐team checks in our eligibility review to ensure high sensitivity and specificity of the resulting set of studies.

Although we focused on interventions aiming to reduce alarms, there has also been important recent work focused on reducing telemetry utilization in adult hospital populations as well as work focused on reducing pulse oximetry utilization in children admitted with respiratory conditions. Dressler and colleagues reported an immediate and sustained reduction in telemetry utilization in hospitalized adults upon redesign of cardiac telemetry order sets to include the clinical indication, which defaulted to the American Heart Association guideline‐recommended telemetry duration.[40] Instructions for bedside nurses were also included in the order set to facilitate appropriate telemetry discontinuation. Schondelmeyer and colleagues reported reductions in continuous pulse oximetry utilization in hospitalized children with asthma and bronchiolitis upon introduction of a multifaceted quality improvement program that included provider education, a nurse handoff checklist, and discontinuation criteria incorporated into order sets.[41]

Limitations of This Review and the Underlying Body of Work

There are limitations to this systematic review and its underlying body of work. With respect to our approach to this systematic review, we focused only on monitor alarms. Numerous other medical devices generate alarms in the patient‐care environment that also can contribute to alarm fatigue and deserve equally rigorous evaluation. With respect to the underlying body of work, the quality of individual studies was generally low. For example, determinations of alarm actionability were often made by a single rater without evaluation of the reliability or validity of these determinations, and statistical testing was often missing. There were also limitations specific to intervention studies, including evaluation of nongeneralizable patient populations, failure to measure the fidelity of the interventions, inadequate measures of intervention safety, and failure to statistically evaluate alarm reductions. Finally, though not necessarily a limitation, several studies were conducted by authors involved in or funded by the medical device industry.[11, 15, 19, 31, 32] This has the potential to introduce bias, although we have no indication that the quality of the science was adversely impacted.

Moving forward, the research agenda for physiologic monitor alarms should include the following: (1) more intensive focus on evaluating the relationship between alarm exposure and response time with analysis of important mediating factors that may promote or prevent alarm fatigue, (2) emphasis on studying interventions aimed at improving alarm management using rigorous designs such as cluster‐randomized trials and trials randomized by individual participant, (3) monitoring and reporting clinically meaningful balancing measures that represent unintended consequences of disabling or delaying potentially important alarms and possibly reducing the clinicians' ability to detect true patient deterioration and intervene in a timely manner, and (4) support for transparent academic-industry partnerships to evaluate new alarm technology in real‐world settings. As evidence‐based interventions emerge, there will be new opportunities to study different implementation strategies of these interventions to optimize effectiveness.

CONCLUSIONS

The body of literature relevant to physiologic monitor alarm characteristics and alarm fatigue is limited but growing rapidly. Although we know that most alarms are not actionable and that there appears to be a relationship between alarm exposure and response time that could be caused by alarm fatigue, we cannot yet say with certainty that we know which interventions are most effective in safely reducing unnecessary alarms. Interventions that appear most promising and should be prioritized for intensive evaluation include widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and changing electrodes daily. Careful evaluation of these interventions must include systematically examining adverse patient safety consequences.

Acknowledgements

The authors thank Amogh Karnik and Micheal Sellars for their technical assistance during the review and extraction process.

Disclosures: Ms. Zander is supported by the Society of Hospital Medicine Student Hospitalist Scholar Grant. Dr. Bonafide and Ms. Stemler are supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.

References
  1. National Patient Safety Goals Effective January 1, 2015. The Joint Commission Web site. http://www.jointcommission.org/assets/1/6/2015_NPSG_HAP.pdf. Accessed July 17, 2015.
  2. ECRI Institute. 2015 Top 10 Health Technology Hazards. Available at: https://www.ecri.org/Pages/2015‐Hazards.aspx. Accessed June 23, 2015.
  3. Sendelbach S, Funk M. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378-386.
  4. Chopra V, McMahon LF. Redesigning hospital alarms for patient safety: alarmed and potentially dangerous. JAMA. 2014;311(12):1199-1200.
  5. Stroup DF, Berlin JA, Morton SC, et al. Meta‐analysis of observational studies in epidemiology: a proposal for reporting. Meta‐analysis Of Observational Studies in Epidemiology (MOOSE) Group. JAMA. 2000;283(15):2008-2012.
  6. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta‐analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264-269, W64.
  7. Atzema C, Schull MJ, Borgundvaag B, Slaughter GRD, Lee CK. ALARMED: adverse events in low‐risk patients with chest pain receiving continuous electrocardiographic monitoring in the emergency department. A pilot study. Am J Emerg Med. 2006;24:62-67.
  8. Billinghurst F, Morgan B, Arthur HM. Patient and nurse‐related implications of remote cardiac telemetry. Clin Nurs Res. 2003;12(4):356-370.
  9. Biot L, Carry PY, Perdrix JP, Eberhard A, Baconnier P. Clinical evaluation of alarm efficiency in intensive care [in French]. Ann Fr Anesth Reanim. 2000;19:459-466.
  10. Chambrin MC, Ravaux P, Calvelo‐Aros D, Jaborska A, Chopin C, Boniface B. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25:1360-1366.
  11. Drew BJ, Harris P, Zègre‐Hemsey JK, et al. Insights into the problem of alarm fatigue with physiologic monitor devices: a comprehensive observational study of consecutive intensive care unit patients. PLoS One. 2014;9(10):e110274.
  12. Gazarian PK. Nurses' response to frequency and types of electrocardiography alarms in a non‐critical care setting: a descriptive study. Int J Nurs Stud. 2014;51(2):190-197.
  13. Görges M, Markewitz BA, Westenskow DR. Improving alarm performance in the medical intensive care unit using delays and clinical context. Anesth Analg. 2009;108:1546-1552.
  14. Inokuchi R, Sato H, Nanjo Y, et al. The proportion of clinically relevant alarms decreases as patient clinical severity decreases in intensive care units: a pilot study. BMJ Open. 2013;3(9):e003354.
  15. Gross B, Dahl D, Nielsen L. Physiologic monitoring alarm load on medical/surgical floors of a community hospital. Biomed Instrum Technol. 2011;45:29-36.
  16. Koski EM, Mäkivirta A, Sukuvaara T, Kari A. Frequency and reliability of alarms in the monitoring of cardiac postoperative patients. Int J Clin Monit Comput. 1990;7(2):129-133.
  17. Morales Sánchez C, Murillo Pérez MA, Torrente Vela S, et al. Audit of the bedside monitor alarms in a critical care unit [in Spanish]. Enferm Intensiva. 2014;25(3):83-90.
  18. Pergher AK, Silva RCL. Stimulus‐response time to invasive blood pressure alarms: implications for the safety of critical‐care patients. Rev Gaúcha Enferm. 2014;35(2):135-141.
  19. Siebig S, Kuhls S, Imhoff M, Gather U, Schölmerich J, Wrede CE. Intensive care unit alarms—how many do we need? Crit Care Med. 2010;38:451-456.
  20. Voepel‐Lewis T, Parker ML, Burke CN, et al. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351-1358.
  21. Way RB, Beer SA, Wilson SJ. What's that noise? Bedside monitoring in the Emergency Department. Int Emerg Nurs. 2014;22(4):197-201.
  22. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital. J Hosp Med. 2015;10(6):345-351.
  23. Lawless ST. Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med. 1994;22(6):981-985.
  24. Rosman EC, Blaufox AD, Menco A, Trope R, Seiden HS. What are we missing? Arrhythmia detection in the pediatric intensive care unit. J Pediatr. 2013;163(2):511-514.
  25. Talley LB, Hooper J, Jacobs B, et al. Cardiopulmonary monitors and clinically significant events in critically ill children. Biomed Instrum Technol. 2011;45(s1):38-45.
  26. Tsien CL, Fackler JC. Poor prognosis for existing monitors in the intensive care unit. Crit Care Med. 1997;25:614-619.
  27. van Pul C, Mortel H, Bogaart J, Mohns T, Andriessen P. Safe patient monitoring is challenging but still feasible in a neonatal intensive care unit with single family rooms. Acta Paediatr. 2015;104(6):e247-e254.
  28. Varpio L, Kuziemsky C, Macdonald C, King WJ. The helpful or hindering effects of in‐hospital patient monitor alarms on nurses: a qualitative analysis. CIN Comput Inform Nurs. 2012;30(4):210-217.
  29. O'Carroll T. Survey of alarms in an intensive therapy unit. Anaesthesia. 1986;41(7):742-744.
  30. Wiklund L, Hök B, Ståhl K, Jordeby‐Jönsson A. Postanesthesia monitoring revisited: frequency of true and false alarms from different monitoring devices. J Clin Anesth. 1994;6(3):182-188.
  31. Taenzer AH, Pyke JB, McGrath SP, Blike GT. Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before‐and‐after concurrence study. Anesthesiology. 2010;112(2):282-287.
  32. Albert NM, Murray T, Bena JF, et al. Differences in alarm events between disposable and reusable electrocardiography lead wires. Am J Crit Care. 2015;24(1):67-74.
  33. Cvach MM, Biggs M, Rothwell KJ, Charles‐Hudson C. Daily electrode change and effect on cardiac monitor alarms: an evidence‐based practice approach. J Nurs Care Qual. 2013;28:265-271.
  34. Cvach MM, Frank RJ, Doyle P, Stevens ZK. Use of pagers with an alarm escalation system to reduce cardiac monitor alarm signals. J Nurs Care Qual. 2014;29(1):9-18.
  35. Graham KC, Cvach M. Monitor alarm fatigue: standardizing use of physiological monitors and decreasing nuisance alarms. Am J Crit Care. 2010;19:28-34.
  36. Rheineck‐Leyssius AT, Kalkman CJ. Influence of pulse oximeter lower alarm limit on the incidence of hypoxaemia in the recovery room. Br J Anaesth. 1997;79(4):460-464.
  37. Whalen DA, Covelle PM, Piepenbrink JC, Villanova KL, Cuneo CL, Awtry EH. Novel approach to cardiac alarm management on telemetry units. J Cardiovasc Nurs. 2014;29(5):E13-E22.
  38. Dandoy CE, Davies SM, Flesch L, et al. A team‐based approach to reducing cardiac monitor alarms. Pediatrics. 2014;134(6):e1686-e1694.
  39. Cvach M. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268-277.
  40. Dressler R, Dryer MM, Coletti C, Mahoney D, Doorey AJ. Altering overuse of cardiac telemetry in non‐intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174(11):1852-1854.
  41. Schondelmeyer AC, Simmons JM, Statile AM, et al. Using quality improvement to reduce continuous pulse oximetry use in children with wheezing. Pediatrics. 2015;135(4):e1044-e1051.
  42. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non‐randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377-384.
Journal of Hospital Medicine - 11(2):136-144
Sections
Files
Files
Article PDF
Article PDF

Clinical alarm safety has become a recent target for improvement in many hospitals. In 2013, The Joint Commission released a National Patient Safety Goal prompting accredited hospitals to establish alarm safety as a hospital priority, identify the most important alarm signals to manage, and, by 2016, develop policies and procedures that address alarm management.[1] In addition, the Emergency Care Research Institute has named alarm hazards the top health technology hazard each year since 2012.[2]

The primary arguments supporting the elevation of alarm management to a national hospital priority in the United States include the following: (1) clinicians rely on alarms to notify them of important physiologic changes, (2) alarms occur frequently and usually do not warrant clinical intervention, and (3) alarm overload renders clinicians unable to respond to all alarms, resulting in alarm fatigue: responding more slowly or ignoring alarms that may represent actual clinical deterioration.[3, 4] These arguments are built largely on anecdotal data, reported safety event databases, and small studies that have not previously been systematically analyzed.

Despite the national focus on alarms, we still know very little about fundamental questions key to improving alarm safety. In this systematic review, we aimed to answer 3 key questions about physiologic monitor alarms: (1) What proportion of alarms warrant attention or clinical intervention (ie, actionable alarms), and how does this proportion vary between adult and pediatric populations and between intensive care unit (ICU) and ward settings? (2) What is the relationship between alarm exposure and clinician response time? (3) What interventions are effective in reducing the frequency of alarms?

We limited our scope to monitor alarms because few studies have evaluated the characteristics of alarms from other medical devices, and because missing relevant monitor alarms could adversely impact patient safety.

METHODS

We performed a systematic review of the literature in accordance with the Meta‐Analysis of Observational Studies in Epidemiology guidelines[5] and developed this manuscript using the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement.[6]

Eligibility Criteria

With help from an experienced biomedical librarian (C.D.S.), we searched PubMed, the Cumulative Index to Nursing and Allied Health Literature, Scopus, Cochrane Library, ClinicalTrials.gov, and Google Scholar from January 1980 through April 2015 (see Supporting Information in the online version of this article for the search terms and queries). We hand searched the reference lists of included articles and reviewed our personal libraries to identify additional relevant studies.

We included peer‐reviewed, original research studies published in English, Spanish, or French that addressed the questions outlined above. Eligible patient populations were children and adults admitted to hospital inpatient units and emergency departments (EDs). We excluded alarms in procedural suites or operating rooms (typically responded to by anesthesiologists already with the patient) because of the differences in environment of care, staff‐to‐patient ratio, and equipment. We included observational studies reporting the actionability of physiologic monitor alarms (ie, alarms warranting special attention or clinical intervention), as well as nurse responses to these alarms. We excluded studies focused on the effects of alarms unrelated to patient safety, such as families' and patients' stress, noise, or sleep disturbance. We included only intervention studies evaluating pragmatic interventions ready for clinical implementation (ie, not experimental devices or software algorithms).

Selection Process and Data Extraction

First, 2 authors screened the titles and abstracts of articles for eligibility. To maximize sensitivity, if at least 1 author considered the article relevant, the article proceeded to full‐text review. Second, the full texts of articles screened were independently reviewed by 2 authors in an unblinded fashion to determine their eligibility. Any disagreements concerning eligibility were resolved by team consensus. To assure consistency in eligibility determinations across the team, a core group of the authors (C.W.P, C.P.B., E.E., and V.V.G.) held a series of meetings to review and discuss each potentially eligible article and reach consensus on the final list of included articles. Two authors independently extracted the following characteristics from included studies: alarm review methods, analytic design, fidelity measurement, consideration of unintended adverse safety consequences, and key results. Reviewers were not blinded to journal, authors, or affiliations.

Synthesis of Results and Risk Assessment

Given the high degree of heterogeneity in methodology, we were unable to generate summary proportions of the observational studies or perform a meta‐analysis of the intervention studies. Thus, we organized the studies into clinically relevant categories and presented key aspects in tables. Due to the heterogeneity of the studies and the controversy surrounding quality scores,[5] we did not generate summary scores of study quality. Instead, we evaluated and reported key design elements that had the potential to bias the results. To recognize the more comprehensive studies in the field, we developed by consensus a set of characteristics that distinguished studies with lower risk of bias. These characteristics are shown and defined in Table 1.

General Characteristics of Included Studies
First Author and Publication Year Alarm Review Method Indicators of Potential Bias for Observational Studies Indicators of Potential Bias for Intervention Studies
Monitor System Direct Observation Medical Record Review Rhythm Annotation Video Observation Remote Monitoring Staff Medical Device Industry Involved Two Independent Reviewers At Least 1 Reviewer Is a Clinical Expert Reviewer Not Simultaneously in Patient Care Clear Definition of Alarm Actionability Census Included Statistical Testing or QI SPC Methods Fidelity Assessed Safety Assessed Lower Risk of Bias
  • NOTE: Lower risk of bias for observational studies required all of the following characteristics be reported: (1) two independent reviewers of alarms, (2) at least 1 clinical expert reviewer (ie, physician or nurse), (3) the reviewer was not simultaneously involved in clinical care, and (4) there was a clear definition of alarm actionability provided in the article. These indicators assess detection bias, observer bias, analytical bias, and reporting bias and were derived from the Meta‐analysis of Observational Studies in Epidemiology checklist.[5] Lower risk of bias for intervention studies required all of the following characteristics be reported: (1) patient census accounted for in analysis, (2) formal statistical testing of effect or statistical process control methods used to evaluate effect, (3) intervention fidelity measured and reported (defined broadly as some measurement of whether the intervention was delivered as intended), and (4) relevant patient safety outcomes measured and reported (eg, the rate of code blue events before and after the intervention was implemented). These indicators assess reporting bias and internal validity bias and were derived from the Downs and Black checklist.[42] Monitor system: alarm data were electronically collected directly from the physiologic monitors and saved on a computer device through software such as BedMasterEx. Direct observation: an in‐person observer, such as a research assistant or a nurse, takes note of the alarm data and/or responses to alarms. Medical record review: data on alarms and/or responses to alarms were extracted from the patient medical records. Rhythm annotation: data on waveforms from cardiac monitors were collected and saved on a computer device through software such as BedMasterEx. Video observation: video cameras were set up in the patient's room and recorded data on alarms and/or responses to alarms. Remote monitor staff: clinicians situated at a remote location observe the patient via video camera and may be able to communicate with the patient or the patient's assigned nurse. Abbreviations: QI, quality improvement; RN, registered nurse; SPC, statistical process control. *Monitor system + RN interrogation. Assigned nurse making observations. Monitor from central station. Alarm outcome reported using run chart, and fidelity outcomes presented using statistical process control charts.

Adult Observational
Atzema 2006[7] ✓*
Billinghurst 2003[8]
Biot 2000[9]
Chambrin 1999[10]
Drew 2014[11]
Gazarian 2014[12]
Grges 2009[13]
Gross 2011[15]
Inokuchi 2013[14]
Koski 1990[16]
Morales Snchez 2014[17]
Pergher 2014[18]
Siebig 2010[19]
Voepel‐Lewis 2013[20]
Way 2014[21]
Pediatric Observational
Bonafide 2015[22]
Lawless 1994[23]
Rosman 2013[24]
Talley 2011[25]
Tsien 1997[26]
van Pul 2015[27]
Varpio 2012[28]
Mixed Adult and Pediatric Observational
O'Carroll 1986[29]
Wiklund 1994[30]
Adult Intervention
Albert 2015[32]
Cvach 2013[33]
Cvach 2014[34]
Graham 2010[35]
Rheineck‐Leyssius 1997[36]
Taenzer 2010[31]
Whalen 2014[37]
Pediatric Intervention
Dandoy 2014[38]

For the purposes of this review, we defined nonactionable alarms as including both invalid (false) alarms that do not that accurately represent the physiologic status of the patient and alarms that are valid but do not warrant special attention or clinical intervention (nuisance alarms). We did not separate out invalid alarms due to the tremendous variation between studies in how validity was measured.

RESULTS

Study Selection

Search results produced 4629 articles (see the flow diagram in the Supporting Information in the online version of this article), of which 32 articles were eligible: 24 observational studies describing alarm characteristics and 8 studies describing interventions to reduce alarm frequency.

Observational Study Characteristics

Characteristics of included studies are shown in Table 1. Of the 24 observational studies,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] 15 included adult patients,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] 7 included pediatric patients,[22, 23, 24, 25, 26, 27, 28] and 2 included both adult and pediatric patients.[29, 30] All were single‐hospital studies, except for 1 study by Chambrin and colleagues[10] that included 5 sites. The number of patient‐hours examined in each study ranged from 60 to 113,880.[7, 8, 9, 10, 11, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30] Hospital settings included ICUs (n = 16),[9, 10, 11, 13, 14, 16, 17, 18, 19, 22, 23, 24, 25, 26, 27, 29] general wards (n = 5),[12, 15, 20, 22, 28] EDs (n = 2),[7, 21] postanesthesia care unit (PACU) (n = 1),[30] and cardiac care unit (CCU) (n = 1).[8] Studies varied in the type of physiologic signals recorded and data collection methods, ranging from direct observation by a nurse who was simultaneously caring for patients[29] to video recording with expert review.[14, 19, 22] Four observational studies met the criteria for lower risk of bias.[11, 14, 15, 22]

Intervention Study Characteristics

Of the 8 intervention studies, 7 included adult patients,[31, 32, 33, 34, 35, 36, 37] and 1 included pediatric patients.[38] All were single‐hospital studies; 6 were quasi‐experimental[31, 33, 34, 35, 37, 38] and 2 were experimental.[32, 36] Settings included progressive care units (n = 3),[33, 34, 35] CCUs (n = 3),[32, 33, 37] wards (n = 2),[31, 38] PACU (n = 1),[36] and a step‐down unit (n = 1).[32] All except 1 study[32] used the monitoring system to record alarm data. Several studies evaluated multicomponent interventions that included combinations of the following: widening alarm parameters,[31, 35, 36, 37, 38] instituting alarm delays,[31, 34, 36, 38] reconfiguring alarm acuity,[35, 37] use of secondary notifications,[34] daily change of electrocardiographic electrodes or use of disposable electrocardiographic wires,[32, 33, 38] universal monitoring in high‐risk populations,[31] and timely discontinuation of monitoring in low‐risk populations.[38] Four intervention studies met our prespecified lower risk of bias criteria.[31, 32, 36, 38]

Proportion of Alarms Considered Actionable

Results of the observational studies are provided in Table 2. The proportion of alarms that were actionable was <1% to 26% in adult ICU settings,[9, 10, 11, 13, 14, 16, 17, 19] 20% to 36% in adult ward settings,[12, 15, 20] 17% in a mixed adult and pediatric PACU setting,[30] 3% to 13% in pediatric ICU settings,[22, 23, 24, 25, 26] and 1% in a pediatric ward setting.[22]

Results of Included Observational Studies
Signals Included
First Author and Publication Year Setting Monitored Patient‐Hours SpO2 ECG Arrhythmia ECG Parametersa Blood Pressure Total Alarms Actionable Alarms Alarm Response Lower Risk of Bias
  • NOTE: Lower risk of bias for observational studies required all of the following characteristics be reported: (1) two independent reviewers of alarms, (2) at least 1 clinical expert reviewer (i.e. physician or nurse), (3) the reviewer was not simultaneously involved in clinical care, and (4) there was a clear definition of alarm actionability provided in the article. Abbreviations: CCU, cardiac/telemetry care unit; ECG, electrocardiogram; ED, emergency department; ICU, intensive care unit; PACU, postanesthesia care unit; SpO2, oxygen saturation; VT, ventricular tachycardia.

  • Includes respiratory rate measured via ECG leads. Actionable is defined as alarms warranting special attention or clinical intervention. Valid is defined as the alarm accurately representing the physiologic status of the patient. Directly addresses relationship between alarm exposure and response time. ∥Not provided directly; estimated from description of data collection methods.

Adult
Atzema 2006[7] ED 371 1,762 0.20%
Billinghurst 2003[8] CCU 420 751 Not reported; 17% were valid Nurses with higher acuity patients and smaller % of valid alarms had slower response rates
Biot 2000[9] ICU 250 3,665 3%
Chambrin 1999[10] ICU 1,971 3,188 26%
Drew 2014[11] ICU 48,173 2,558,760 0.3% of 3,861 VT alarms
Gazarian 2014[12] Ward 54 nurse‐hours 205 22% Response to 47% of alarms
Grges 2009[13] ICU 200 1,214 5%
Gross 2011[15] Ward 530 4,393 20%
Inokuchi 2013[14] ICU 2,697 11,591 6%
Koski 1990[16] ICU 400 2,322 12%
Morales Snchez 2014[17] ICU 434 sessions 215 25% Response to 93% of alarms, of which 50% were within 10 seconds
Pergher 2014[18] ICU 60 76 Not reported 72% of alarms stopped before nurse response or had >10 minutes response time
Siebig 2010[19] ICU 982 5,934 15%
Voepel‐Lewis 2013[20] Ward 1,616 710 36% Response time was longer for patients in highest quartile of total alarms
Way 2014[21] ED 93 572 Not reported; 75% were valid Nurses responded to more alarms in resuscitation room vs acute care area, but response time was longer
Pediatric
Bonafide 2015[22] Ward + ICU 210 5,070 13% PICU, 1% ward Incremental increases in response time as number of nonactionable alarms in preceding 120 minutes increased
Lawless 1994[23] ICU 928 2,176 6%
Rosman 2013[24] ICU 8,232 54,656 4% of rhythm alarms true critical"
Talley 2011[25] ICU 1,470∥ 2,245 3%
Tsien 1997[26] ICU 298 2,942 8%
van Pul 2015[27] ICU 113,880∥ 222,751 Not reported Assigned nurse did not respond to 6% of alarms within 45 seconds
Varpio 2012[28] Ward 49 unit‐hours 446 Not reported 70% of all alarms and 41% of crisis alarms were not responded to within 1 minute
Both
O'Carroll 1986[29] ICU 2,258∥ 284 2%
Wiklund 1994[30] PACU 207 1,891 17%

Relationship Between Alarm Exposure and Response Time

Whereas 9 studies addressed response time,[8, 12, 17, 18, 20, 21, 22, 27, 28] only 2 evaluated the relationship between alarm burden and nurse response time.[20, 22] Voepel‐Lewis and colleagues found that nurse responses were slower to patients with the highest quartile of alarms (57.6 seconds) compared to those with the lowest (45.4 seconds) or medium (42.3 seconds) quartiles of alarms on an adult ward (P = 0.046). They did not find an association between false alarm exposure and response time.[20] Bonafide and colleagues found incremental increases in response time as the number of nonactionable alarms in the preceding 120 minutes increased (P < 0.001 in the pediatric ICU, P = 0.009 on the pediatric ward).[22]

Interventions Effective in Reducing Alarms

Results of the 8 intervention studies are provided in Table 3. Three studies evaluated single interventions;[32, 33, 36] the remainder of the studies tested interventions with multiple components such that it was impossible to separate the effect of each component. Below, we have summarized study results, arranged by component. Because only 1 study focused on pediatric patients,[38] results from pediatric and adult settings are combined.

Results of Included Intervention Studies
First Author and Publication Year Design Setting Main Intervention Components Other/ Comments Key Results Results Statistically Significant? Lower Risk of Bias
Widen Default Settings Alarm Delays Reconfigure Alarm Acuity Secondary Notification ECG Changes
  • NOTE: Lower risk of bias for intervention studies required all of the following characteristics be reported: (1) patient census accounted for in analysis, (2) formal statistical testing of effect or statistical process control methods used to evaluate effect, (3) intervention fidelity measured and reported (defined broadly as some measurement of whether the intervention was delivered as intended), and (4) relevant patient safety outcomes measured and reported (eg, the rate of code blue events before and after the intervention was implemented). Abbreviations: CCU, cardiac/telemetry care unit; ECG, electrocardiogram; ICU, intensive care unit; ITS, interrupted time series; PACU, postanesthesia care unit; PCU, progressive care unit; SpO2, oxygen saturation. *Delays were part of secondary notification system only. Delays explored retrospectively only; not part of prospective evaluation. Preimplementation count not reported.

Adult
Albert 2015[32] Experimental (cluster‐randomized) CCU Disposable vs reusable wires Disposable leads had 29% fewer no‐telemetry, leads‐fail, and leads‐off alarms and similar artifact alarms
Cvach 2013[33] Quasi‐experimental (before and after) CCU and PCU Daily change of electrodes 46% fewer alarms/bed/day
Cvach 2014[34] Quasi‐experimental (ITS) PCU ✓* Slope of regression line suggests decrease of 0.75 alarms/bed/day
Graham 2010[35] Quasi‐experimental (before and after) PCU 43% fewer crisis, warning, and system warning alarms on unit
Rheineck‐Leyssius 1997[36] Experimental (RCT) PACU Alarm limit of 85% had fewer alarms/patient but higher incidence of true hypoxemia for >1 minute (6% vs 2%)
Taenzer 2010[31] Quasi‐experimental (before and after with concurrent controls) Ward Universal SpO2 monitoring Rescue events decreased from 3.4 to 1.2 per 1,000 discharges; transfers to ICU decreased from 5.6 to 2.9 per 1,000 patient‐days, only 4 alarms/patient‐day
Whalen 2014[37] Quasi‐experimental (before and after) CCU 89% fewer audible alarms on unit
Pediatric
Dandoy 2014[38] Quasi‐experimental (ITS) Ward Timely monitor discontinuation; daily change of ECG electrodes Decrease in alarms/patient‐days from 180 to 40

Widening alarm parameter default settings was evaluated in 5 studies:[31, 35, 36, 37, 38] 1 single-intervention randomized controlled trial (RCT)[36] and 4 multiple-intervention, quasi-experimental studies.[31, 35, 37, 38] In the RCT, using a lower SpO2 limit of 85% instead of the standard 90% resulted in 61% fewer alarms. In the 4 multiple-intervention studies, 1 study reported significant reductions in alarm rates (P < 0.001),[37] 1 study did not report preintervention alarm rates but reported a postintervention alarm rate of 4 alarms per patient-day,[31] and 2 studies reported reductions in alarm rates but did not report any statistical testing.[35, 38] Of the 3 studies examining patient safety, 1 study with universal monitoring reported fewer rescue events and transfers to the ICU postimplementation,[31] 1 study reported no missed acute decompensations,[38] and 1 study (the RCT) reported significantly more true hypoxemia events (P = 0.001).[36]
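As a simple illustration of why widening a default limit reduces alarm counts, the sketch below counts threshold-crossing alarms on a short, made-up SpO2 series at the 90% and 85% lower limits used in the RCT. The signal values and the one-alarm-per-crossing logic are assumptions for illustration only, not the monitors' actual alarm algorithm.

```python
def count_alarms(spo2_values, low_limit):
    """Count a new alarm each time SpO2 drops from at/above the limit to below it."""
    alarms, below = 0, False
    for value in spo2_values:
        if value < low_limit and not below:
            alarms += 1
            below = True
        elif value >= low_limit:
            below = False
    return alarms

# Hypothetical readings, one every 10 seconds, for a postoperative patient.
spo2 = [97, 95, 92, 89, 91, 88, 87, 90, 93, 84, 83, 88, 92, 96]

for limit in (90, 85):
    print(f"lower limit {limit}%: {count_alarms(spo2, limit)} alarm(s)")
# lower limit 90%: 3 alarm(s)
# lower limit 85%: 1 alarm(s)
```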

Alarm delays were evaluated in 4 studies:[31, 34, 36, 38] 3 multiple-intervention, quasi-experimental studies[31, 34, 38] and 1 retrospective analysis of data from an RCT.[36] One study combined alarm delays with widened defaults in a universal monitoring strategy and reported a postintervention alarm rate of 4 alarms per patient-day.[31] Another study evaluated delays as part of a secondary notification pager system and found a negatively sloping regression line that suggested a decreasing alarm rate, but did not report statistical testing.[34] The third study reported a reduction in alarm rates but did not report statistical testing.[38] The RCT compared the impact of a hypothetical 15-second alarm delay to that of lowering the SpO2 alarm limit and reported a similar reduction in alarms.[36] Of the 4 studies examining patient safety, 1 study with universal monitoring reported improvements,[31] 2 studies reported no adverse outcomes,[35, 38] and the retrospective analysis of data from the RCT reported the theoretical adverse outcome of delayed detection of sudden, severe desaturations.[36]
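The sketch below shows the basic logic of an annunciation delay: the alarm sounds only if the limit violation persists for the full delay, so brief, self-resolving dips never annunciate. The 15-second delay and 85% limit echo the RCT's analysis, but the sampling interval, data, and function are illustrative assumptions.

```python
def delayed_alarm_times(samples, low_limit=85, delay_s=15, sample_interval_s=5):
    """Return times (seconds) at which a delayed SpO2 alarm would annunciate:
    the value must stay below the limit for at least delay_s before alarming."""
    alarm_times, violation_start, announced = [], None, False
    for i, value in enumerate(samples):
        t = i * sample_interval_s
        if value < low_limit:
            if violation_start is None:
                violation_start = t
            if not announced and t - violation_start >= delay_s:
                alarm_times.append(t)
                announced = True
        else:
            violation_start, announced = None, False
    return alarm_times

spo2 = [90, 84, 88, 83, 82, 81, 84, 90, 84, 90]   # one hypothetical reading every 5 seconds
print(delayed_alarm_times(spo2))  # [30]: the two brief dips are filtered; the sustained run alarms once
```

The trade-off noted above follows directly from this logic: a delay that filters brief artifact also postpones annunciation of a genuine sudden, severe desaturation by the same amount.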

Reconfiguring alarm acuity was evaluated in 2 studies, both of which were multiple‐intervention quasi‐experimental studies.[35, 37] Both showed reductions in alarm rates: 1 was significant without increasing adverse events (P < 0.001),[37] and the other did not report statistical testing or safety outcomes.[35]
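Reconfiguring acuity is essentially a remapping of alarm types to annunciation behaviors, for example downgrading selected audible warnings to inaudible text advisories. The sketch below shows that idea as a configuration table; the alarm types, category names, and specific downgrades are hypothetical and are not the configurations used in the cited studies.

```python
# Hypothetical default acuity map for a monitored unit.
DEFAULT_ACUITY = {
    "asystole": "crisis",                 # audible, highest priority
    "ventricular_tachycardia": "crisis",
    "spo2_low": "warning",                # audible
    "heart_rate_high": "warning",
    "pvc_per_minute_high": "warning",
    "leads_off": "system_warning",
}

# Reconfiguration: downgrade two frequently false or rarely actionable alarm
# types from audible warnings to inaudible text advisories ("message").
RECONFIGURED_ACUITY = dict(DEFAULT_ACUITY,
                           pvc_per_minute_high="message",
                           heart_rate_high="message")

def is_audible(alarm_type, acuity_map):
    return acuity_map.get(alarm_type, "message") in {"crisis", "warning", "system_warning"}

for alarm in ("asystole", "heart_rate_high"):
    print(alarm, "audible:", is_audible(alarm, RECONFIGURED_ACUITY))
```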

Secondary notification of nurses using pagers was the main intervention component of 1 study incorporating delays between the alarms and the alarm pages.[34] As mentioned above, a negatively sloping regression line was displayed, but no statistical testing or safety outcomes were reported.
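A secondary notification system of the kind described here forwards an alarm to a pager only if it is still active after a configurable delay, then escalates to additional recipients. The sketch below captures that escalation logic; the delays and recipients are hypothetical assumptions, not the cited study's actual schedule.

```python
from dataclasses import dataclass

@dataclass
class EscalationStep:
    delay_s: int      # seconds after the alarm begins
    recipient: str

# Hypothetical schedule: the bedside monitor still alarms immediately;
# pages are sent only if the alarm remains active at each step.
ESCALATION = [
    EscalationStep(15, "assigned nurse pager"),
    EscalationStep(60, "charge nurse pager"),
    EscalationStep(120, "unit-wide page"),
]

def pages_sent(alarm_duration_s):
    """Recipients paged for an alarm that stays active for alarm_duration_s seconds."""
    return [step.recipient for step in ESCALATION if alarm_duration_s >= step.delay_s]

print(pages_sent(10))   # [] -- alarm resolved before the first page was due
print(pages_sent(90))   # ['assigned nurse pager', 'charge nurse pager']
```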

Disposable electrocardiographic lead wires or daily electrode changes were evaluated in 3 studies:[32, 33, 38] 1 single intervention cluster‐randomized trial[32] and 2 quasi‐experimental studies.[33, 38] In the cluster‐randomized trial, disposable lead wires were compared to reusable lead wires, with disposable lead wires having significantly fewer technical alarms for lead signal failures (P = 0.03) but a similar number of monitoring artifact alarms (P = 0.44).[32] In a single‐intervention, quasi‐experimental study, daily electrode change showed a reduction in alarms, but no statistical testing was reported.[33] One multiple‐intervention, quasi‐experimental study incorporating daily electrode change showed fewer alarms without statistical testing.[38] Of the 2 studies examining patient safety, both reported no adverse outcomes.[32, 38]

DISCUSSION

This systematic review of physiologic monitor alarms in the hospital yielded the following main findings: (1) between 74% and 99% of physiologic monitor alarms were not actionable, (2) a significant relationship between alarm exposure and nurse response time was demonstrated in 2 small observational studies, and (3) although interventions were most often studied in combination, results from the studies with lower risk of bias suggest that widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and/or changing electrodes daily are the most promising interventions for reducing alarms. Only 5 of 8 intervention studies measured intervention safety and found that widening alarm parameters and implementing alarm delays had mixed safety outcomes, whereas disposable electrocardiographic lead wires and daily electrode changes had no adverse safety outcomes.[29, 30, 34, 35, 36] Measuring safety is crucial: an alarm-reduction intervention is of little value if it silences or disables the actionable alarms clinicians need to hear. The variation in results across studies likely reflects the wide range of care settings as well as differences in design and quality.

This field is still in its infancy, with 18 of the 32 articles published in the past 5 years. We anticipate improvements in quality and rigor as the field matures, as well as clinically tested interventions that incorporate smart alarms. Smart alarms integrate data from multiple physiologic signals and the patient's history to better detect physiologic changes in the patient and improve the positive predictive value of alarms. Academic-industry partnerships will be required to implement and rigorously test smart alarms and other emerging technologies in the hospital.
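To illustrate the smart-alarm concept in the simplest possible terms, the sketch below alarms only when two independent signals are simultaneously and persistently abnormal, one common strategy for raising positive predictive value. The thresholds, persistence rule, and data are illustrative assumptions; real smart-alarm algorithms are considerably more sophisticated.

```python
def smart_desaturation_alarm(spo2, heart_rate, spo2_limit=85, hr_limit=150, min_samples=3):
    """Alarm only if SpO2 is below its limit AND heart rate is above its limit
    for at least min_samples consecutive samples (illustrative rule only)."""
    consecutive = 0
    for s, hr in zip(spo2, heart_rate):
        if s < spo2_limit and hr > hr_limit:
            consecutive += 1
            if consecutive >= min_samples:
                return True
        else:
            consecutive = 0
    return False

print(smart_desaturation_alarm([84, 83, 82, 84], [160, 162, 158, 161]))  # True: sustained, concordant abnormality
print(smart_desaturation_alarm([84, 90, 82, 90], [120, 118, 122, 119]))  # False: brief SpO2 dips, normal heart rate
```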

To our knowledge, this is the first systematic review focused on monitor alarms with specific review questions relevant to alarm fatigue. Cvach recently published an integrative review of alarm fatigue using research published through 2011.[39] Our review builds upon her work by contributing a more extensive and systematic search strategy with databases spanning nursing, medicine, and engineering, including additional languages, and including newer studies published through April 2015. In addition, we included multiple cross‐team checks in our eligibility review to ensure high sensitivity and specificity of the resulting set of studies.

Although we focused on interventions aiming to reduce alarms, there has also been important recent work focused on reducing telemetry utilization in adult hospital populations as well as work focused on reducing pulse oximetry utilization in children admitted with respiratory conditions. Dressler and colleagues reported an immediate and sustained reduction in telemetry utilization in hospitalized adults upon redesign of cardiac telemetry order sets to include the clinical indication, which defaulted to the American Heart Association guideline‐recommended telemetry duration.[40] Instructions for bedside nurses were also included in the order set to facilitate appropriate telemetry discontinuation. Schondelmeyer and colleagues reported reductions in continuous pulse oximetry utilization in hospitalized children with asthma and bronchiolitis upon introduction of a multifaceted quality improvement program that included provider education, a nurse handoff checklist, and discontinuation criteria incorporated into order sets.[41]

Limitations of This Review and the Underlying Body of Work

There are limitations to this systematic review and its underlying body of work. With respect to our approach to this systematic review, we focused only on monitor alarms. Numerous other medical devices generate alarms in the patient‐care environment that also can contribute to alarm fatigue and deserve equally rigorous evaluation. With respect to the underlying body of work, the quality of individual studies was generally low. For example, determinations of alarm actionability were often made by a single rater without evaluation of the reliability or validity of these determinations, and statistical testing was often missing. There were also limitations specific to intervention studies, including evaluation of nongeneralizable patient populations, failure to measure the fidelity of the interventions, inadequate measures of intervention safety, and failure to statistically evaluate alarm reductions. Finally, though not necessarily a limitation, several studies were conducted by authors involved in or funded by the medical device industry.[11, 15, 19, 31, 32] This has the potential to introduce bias, although we have no indication that the quality of the science was adversely impacted.

Moving forward, the research agenda for physiologic monitor alarms should include the following: (1) more intensive focus on evaluating the relationship between alarm exposure and response time with analysis of important mediating factors that may promote or prevent alarm fatigue, (2) emphasis on studying interventions aimed at improving alarm management using rigorous designs such as cluster-randomized trials and trials randomized by individual participant, (3) monitoring and reporting clinically meaningful balancing measures that represent unintended consequences of disabling or delaying potentially important alarms and possibly reducing the clinicians' ability to detect true patient deterioration and intervene in a timely manner, and (4) support for transparent academic-industry partnerships to evaluate new alarm technology in real-world settings. As evidence-based interventions emerge, there will be new opportunities to study different implementation strategies of these interventions to optimize effectiveness.

CONCLUSIONS

The body of literature relevant to physiologic monitor alarm characteristics and alarm fatigue is limited but growing rapidly. Although we know that most alarms are not actionable and that there appears to be a relationship between alarm exposure and response time that could be caused by alarm fatigue, we cannot yet say with certainty that we know which interventions are most effective in safely reducing unnecessary alarms. Interventions that appear most promising and should be prioritized for intensive evaluation include widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and changing electrodes daily. Careful evaluation of these interventions must include systematically examining adverse patient safety consequences.

Acknowledgements

The authors thank Amogh Karnik and Micheal Sellars for their technical assistance during the review and extraction process.

Disclosures: Ms. Zander is supported by the Society of Hospital Medicine Student Hospitalist Scholar Grant. Dr. Bonafide and Ms. Stemler are supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.



References
1. National Patient Safety Goals Effective January 1, 2015. The Joint Commission Web site. http://www.jointcommission.org/assets/1/6/2015_NPSG_HAP.pdf. Accessed July 17, 2015.
2. ECRI Institute. 2015 Top 10 Health Technology Hazards. Available at: https://www.ecri.org/Pages/2015-Hazards.aspx. Accessed June 23, 2015.
3. Sendelbach S, Funk M. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378-386.
4. Chopra V, McMahon LF. Redesigning hospital alarms for patient safety: alarmed and potentially dangerous. JAMA. 2014;311(12):1199-1200.
5. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) Group. JAMA. 2000;283(15):2008-2012.
6. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264-269, W64.
7. Atzema C, Schull MJ, Borgundvaag B, Slaughter GRD, Lee CK. ALARMED: adverse events in low-risk patients with chest pain receiving continuous electrocardiographic monitoring in the emergency department. A pilot study. Am J Emerg Med. 2006;24:62-67.
8. Billinghurst F, Morgan B, Arthur HM. Patient and nurse-related implications of remote cardiac telemetry. Clin Nurs Res. 2003;12(4):356-370.
9. Biot L, Carry PY, Perdrix JP, Eberhard A, Baconnier P. Clinical evaluation of alarm efficiency in intensive care [in French]. Ann Fr Anesth Reanim. 2000;19:459-466.
10. Chambrin MC, Ravaux P, Calvelo-Aros D, Jaborska A, Chopin C, Boniface B. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25:1360-1366.
11. Drew BJ, Harris P, Zègre-Hemsey JK, et al. Insights into the problem of alarm fatigue with physiologic monitor devices: a comprehensive observational study of consecutive intensive care unit patients. PLoS One. 2014;9(10):e110274.
12. Gazarian PK. Nurses' response to frequency and types of electrocardiography alarms in a non-critical care setting: a descriptive study. Int J Nurs Stud. 2014;51(2):190-197.
13. Görges M, Markewitz BA, Westenskow DR. Improving alarm performance in the medical intensive care unit using delays and clinical context. Anesth Analg. 2009;108:1546-1552.
14. Inokuchi R, Sato H, Nanjo Y, et al. The proportion of clinically relevant alarms decreases as patient clinical severity decreases in intensive care units: a pilot study. BMJ Open. 2013;3(9):e003354.
15. Gross B, Dahl D, Nielsen L. Physiologic monitoring alarm load on medical/surgical floors of a community hospital. Biomed Instrum Technol. 2011;45:29-36.
16. Koski EM, Mäkivirta A, Sukuvaara T, Kari A. Frequency and reliability of alarms in the monitoring of cardiac postoperative patients. Int J Clin Monit Comput. 1990;7(2):129-133.
17. Morales Sánchez C, Murillo Pérez MA, Torrente Vela S, et al. Audit of the bedside monitor alarms in a critical care unit [in Spanish]. Enferm Intensiva. 2014;25(3):83-90.
18. Pergher AK, Silva RCL. Stimulus-response time to invasive blood pressure alarms: implications for the safety of critical-care patients. Rev Gaúcha Enferm. 2014;35(2):135-141.
19. Siebig S, Kuhls S, Imhoff M, Gather U, Scholmerich J, Wrede CE. Intensive care unit alarms—how many do we need? Crit Care Med. 2010;38:451-456.
20. Voepel-Lewis T, Parker ML, Burke CN, et al. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351-1358.
21. Way RB, Beer SA, Wilson SJ. What's that noise? Bedside monitoring in the Emergency Department. Int Emerg Nurs. 2014;22(4):197-201.
22. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital. J Hosp Med. 2015;10(6):345-351.
23. Lawless ST. Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med. 1994;22(6):981-985.
24. Rosman EC, Blaufox AD, Menco A, Trope R, Seiden HS. What are we missing? Arrhythmia detection in the pediatric intensive care unit. J Pediatr. 2013;163(2):511-514.
25. Talley LB, Hooper J, Jacobs B, et al. Cardiopulmonary monitors and clinically significant events in critically ill children. Biomed Instrum Technol. 2011;45(s1):38-45.
26. Tsien CL, Fackler JC. Poor prognosis for existing monitors in the intensive care unit. Crit Care Med. 1997;25:614-619.
27. van Pul C, Mortel H, Bogaart J, Mohns T, Andriessen P. Safe patient monitoring is challenging but still feasible in a neonatal intensive care unit with single family rooms. Acta Paediatr. 2015;104(6):e247-e254.
28. Varpio L, Kuziemsky C, Macdonald C, King WJ. The helpful or hindering effects of in-hospital patient monitor alarms on nurses: a qualitative analysis. CIN Comput Inform Nurs. 2012;30(4):210-217.
29. O'Carroll T. Survey of alarms in an intensive therapy unit. Anaesthesia. 1986;41(7):742-744.
30. Wiklund L, Hök B, Ståhl K, Jordeby-Jönsson A. Postanesthesia monitoring revisited: frequency of true and false alarms from different monitoring devices. J Clin Anesth. 1994;6(3):182-188.
31. Taenzer AH, Pyke JB, McGrath SP, Blike GT. Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before-and-after concurrence study. Anesthesiology. 2010;112(2):282-287.
32. Albert NM, Murray T, Bena JF, et al. Differences in alarm events between disposable and reusable electrocardiography lead wires. Am J Crit Care. 2015;24(1):67-74.
33. Cvach MM, Biggs M, Rothwell KJ, Charles-Hudson C. Daily electrode change and effect on cardiac monitor alarms: an evidence-based practice approach. J Nurs Care Qual. 2013;28:265-271.
34. Cvach MM, Frank RJ, Doyle P, Stevens ZK. Use of pagers with an alarm escalation system to reduce cardiac monitor alarm signals. J Nurs Care Qual. 2014;29(1):9-18.
35. Graham KC, Cvach M. Monitor alarm fatigue: standardizing use of physiological monitors and decreasing nuisance alarms. Am J Crit Care. 2010;19:28-34.
36. Rheineck-Leyssius AT, Kalkman CJ. Influence of pulse oximeter lower alarm limit on the incidence of hypoxaemia in the recovery room. Br J Anaesth. 1997;79(4):460-464.
37. Whalen DA, Covelle PM, Piepenbrink JC, Villanova KL, Cuneo CL, Awtry EH. Novel approach to cardiac alarm management on telemetry units. J Cardiovasc Nurs. 2014;29(5):E13-E22.
38. Dandoy CE, Davies SM, Flesch L, et al. A team-based approach to reducing cardiac monitor alarms. Pediatrics. 2014;134(6):e1686-e1694.
39. Cvach M. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268-277.
40. Dressler R, Dryer MM, Coletti C, Mahoney D, Doorey AJ. Altering overuse of cardiac telemetry in non-intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174(11):1852-1854.
41. Schondelmeyer AC, Simmons JM, Statile AM, et al. Using quality improvement to reduce continuous pulse oximetry use in children with wheezing. Pediatrics. 2015;135(4):e1044-e1051.
42. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377-384.
Issue
Journal of Hospital Medicine - 11(2)
Page Number
136-144
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Christopher P. Bonafide, MD, MSCE, The Children's Hospital of Philadelphia, 3401 Civic Center Blvd., Philadelphia, PA 19104; Telephone: 267‐426‐2901; E‐mail: [email protected]
Content Gating
Gated (full article locked unless allowed per User)
Gating Strategy
First Peek Free
Article PDF Media
Media Files

Monitor Alarms and Response Time

Article Type
Changed
Sun, 05/21/2017 - 13:06
Display Headline
Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital

Hospital physiologic monitors can alert clinicians to early signs of physiologic deterioration, and thus have great potential to save lives. However, monitors generate frequent alarms,[1, 2, 3, 4, 5, 6, 7, 8] and most are not relevant to the patient's safety (over 90% of pediatric intensive care unit [PICU] alarms[1, 2] and over 70% of adult intensive care unit alarms[5, 6]). In psychology experiments, humans rapidly learn to ignore or respond more slowly to alarms when exposed to high false-alarm rates, exhibiting alarm fatigue.[9, 10] In 2013, The Joint Commission named alarm fatigue the most common contributing factor to alarm-related sentinel events in hospitals.[11, 12]

Although alarm fatigue has been implicated as a major threat to patient safety, little empirical data support its existence in hospitals. In this study, we aimed to determine if there was an association between nurses' recent exposure to nonactionable physiologic monitor alarms and their response time to future alarms for the same patients. This exploratory work was designed to inform future research in this area, acknowledging that the sample size would be too small for multivariable modeling.

METHODS

Study Definitions

The alarm classification scheme is shown in Figure 1. Note that, for clarity, we have intentionally avoided using the terms "true" and "false" alarms because their interpretations vary across studies and can be misleading.

Figure 1
Alarm classification scheme.

Potentially Critical Alarm

A potentially critical alarm is any alarm for a clinical condition for which a timely response is important to determine if the alarm requires intervention to save the patient's life. This is based on the alarm type alone, including alarms for life‐threatening arrhythmias such as asystole and ventricular tachycardia, as well as alarms for vital signs outside the set limits. Supporting Table 1 in the online version of this article lists the breakdown of alarm types that we defined a priori as potentially and not potentially critical.

Table 1. Characteristics of the 2,445 Alarms for Clinical Conditions

                          PICU                                         Ward
Alarm type                No.     % of Total   % Valid   % Actionable  No.     % of Total   % Valid   % Actionable
Oxygen saturation         197     19.4         82.7      38.6          590     41.2         24.4      1.9
Heart rate                194     19.1         95.4      1.0           266     18.6         87.2      0.0
Respiratory rate          229     22.6         80.8      13.5          316     22.1         48.1      1.0
Blood pressure            259     25.5         83.8      5.8           11      0.8          72.7      0.0
Critical arrhythmia       1       0.1          0.0       0.0           4       0.3          0.0       0.0
Noncritical arrhythmia    71      7.0          2.8       0.0           244     17.1         8.6       0.0
Central venous pressure   49      4.8          0.0       0.0           0       0.0          N/A       N/A
Exhaled carbon dioxide    14      1.4          92.9      50.0          0       0.0          N/A       N/A
Total                     1,014   100.0        75.6      12.9          1,431   100.0        38.9      1.0

NOTE: Abbreviations: N/A, not applicable; PICU, pediatric intensive care unit.

Valid Alarm

A valid alarm is any alarm that correctly identifies the physiologic status of the patient. Validity was based on waveform quality, lead signal strength indicators, and artifact conditions, referencing each monitor's operator's manual.

Actionable Alarm

An actionable alarm is any valid alarm for a clinical condition that either: (1) leads to a clinical intervention; (2) leads to a consultation with another clinician at the bedside (and thus visible on camera); or (3) is a situation that should have led to intervention or consultation, but the alarm was unwitnessed or misinterpreted by the staff at the bedside.

Nonactionable Alarm

A nonactionable alarm is any alarm that does not meet the definition of an actionable alarm above, including invalid alarms such as those caused by motion artifact, equipment/technical alarms, and alarms that are valid but nonactionable (nuisance alarms).[13]
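
To make the classification scheme concrete, the sketch below encodes these definitions as a small helper function. It is an illustration only, not the study's data pipeline; the Alarm container and its field names are assumptions introduced here.

```python
from dataclasses import dataclass


@dataclass
class Alarm:
    # Illustrative fields only; the study's REDCap database used its own schema.
    valid: bool                 # correctly reflects the patient's physiologic status
    led_to_intervention: bool   # a clinical intervention followed the alarm
    led_to_consultation: bool   # bedside consultation with another clinician followed
    should_have_acted: bool     # unwitnessed/misinterpreted but warranted action


def is_actionable(alarm: Alarm) -> bool:
    """Apply the study definition: a valid clinical alarm that led to (or should
    have led to) an intervention or bedside consultation is actionable; everything
    else, including invalid and technical alarms, is nonactionable."""
    if not alarm.valid:
        return False
    return (alarm.led_to_intervention
            or alarm.led_to_consultation
            or alarm.should_have_acted)
```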

Response Time

The response time is the time elapsed from when the alarm fired at the bedside to when the nurse entered the room or peered through a window or door, measured in seconds.

Setting and Subjects

We performed this study between August 2012 and July 2013 at a freestanding children's hospital. We evaluated nurses caring for 2 populations: (1) PICU patients with heart and/or lung failure (requiring inotropic support and/or invasive mechanical ventilation), and (2) medical patients on a general inpatient ward. Nurses caring for heart and/or lung failure patients in the PICU typically were assigned 1 to 2 total patients. Nurses on the medical ward typically were assigned 2 to 4 patients. We identified subjects from the population of nurses caring for eligible patients with parents available to provide in‐person consent in each setting. Our primary interest was to evaluate the association between nonactionable alarms and response time, and not to study the epidemiology of alarms in a random sample. Therefore, when alarm data were available prior to screening, we first approached nurses caring for patients in the top 25% of alarm rates for that unit over the preceding 4 hours. We identified preceding alarm rates using BedMasterEx (Excel Medical Electronics, Jupiter, FL).
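
As a rough illustration of this screening step (not the actual workflow, which used alarm reports exported from BedMasterEx), the pandas sketch below flags patients whose alarm counts over the preceding 4 hours fall in the top 25% for their unit; the DataFrame layout and column names are assumptions.

```python
import pandas as pd


def flag_top_quartile(alarms: pd.DataFrame, screen_time: pd.Timestamp) -> pd.Series:
    """alarms: one row per alarm with columns 'patient_id', 'unit', and 'timestamp'
    (column names are assumed for illustration). Returns a boolean Series indexed by
    patient_id marking patients at or above the 75th percentile of alarm counts in
    their unit over the 4 hours preceding screen_time."""
    window = alarms[(alarms["timestamp"] > screen_time - pd.Timedelta(hours=4))
                    & (alarms["timestamp"] <= screen_time)]
    counts = (window.groupby(["unit", "patient_id"]).size()
                    .rename("n_alarms").reset_index())
    # Per-unit 75th-percentile cutoff, broadcast back to each patient's row.
    cutoffs = counts.groupby("unit")["n_alarms"].transform(lambda s: s.quantile(0.75))
    return counts.set_index("patient_id")["n_alarms"].ge(cutoffs.values)
```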

Human Subjects Protection

This study was approved by the institutional review board of The Children's Hospital of Philadelphia. We obtained written in‐person consent from the patient's parent and the nurse subject. We obtained a Certificate of Confidentiality from the National Institutes of Health to further protect study participants.[14]

Monitoring Equipment

All patients in the PICU were monitored continuously using General Electric (GE) (Fairfield, CT) Solar devices. All bed spaces on the wards include GE Dash monitors that are used if monitoring is ordered. On the ward we studied, 30% to 50% of patients are typically monitored at any given time. In addition to alarming at the bedside, most clinical alarms also generated a text message sent to the nurse's wireless phone listing the room number and the word "monitor." Messages did not provide any clinical information about the alarm or the patient's status. There were no technicians reviewing alarms centrally.

Physicians used an order set to order monitoring, selecting 1 of 4 available preconfigured profiles: infant <6 months, infant 6 months to 1 year, child, and adult. The parameters for each age group are in Supporting Figure 1, available in the online version of this article. A physician order is required for a nurse to change the parameters. Participating in the study did not affect this workflow.

Primary Outcome

The primary outcome was the nurse's response time to potentially critical monitor alarms that occurred while neither they nor any other clinicians were in the patient's room.

Primary Exposure and Alarm Classification

The primary exposure was the number of nonactionable alarms in the same patient over the preceding 120 minutes (rolling and updated each minute). The alarm classification scheme is shown in Figure 1.
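
A minimal sketch of how such a rolling exposure count could be computed is shown below, assuming a per-patient table of time-stamped alarms with a boolean nonactionable flag; the column names are illustrative rather than the study's.

```python
import pandas as pd


def rolling_nonactionable_count(alarms: pd.DataFrame) -> pd.Series:
    """alarms: alarms for a single patient, with a datetime 'timestamp' column and a
    boolean 'nonactionable' column (both names are assumptions). Returns, on a
    1-minute grid, the number of nonactionable alarms in the preceding 120 minutes."""
    nonactionable = (alarms.loc[alarms["nonactionable"]]
                           .set_index("timestamp")
                           .sort_index())
    # Count nonactionable alarms per minute, then sum over a trailing 120-minute window.
    per_minute = nonactionable["nonactionable"].astype(int).resample("1min").sum()
    return per_minute.rolling("120min").sum()
```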

Due to technical limitations with obtaining time‐stamped alarm data from the different ventilators in use during the study period, we were unable to identify the causes of all ventilator alarms. Therefore, we included ventilator alarms that did not lead to clinical interventions as nonactionable alarm exposures, but we did not evaluate the response time to any ventilator alarms.

Data Collection

We combined video recordings with monitor time‐stamp data to evaluate the association between nonactionable alarms and the nurse's response time. Our detailed video recording and annotation methods have been published separately.[15] Briefly, we mounted up to 6 small video cameras in patients' rooms and recorded up to 6 hours per session. The cameras captured the monitor display, a wide view of the room, a close‐up view of the patient, and all windows and doors through which staff could visually assess the patient without entering the room.

Video Processing, Review, and Annotation

The first 5 video sessions were reviewed in a group training setting. Research assistants received instruction on how to determine alarm validity and actionability in accordance with the study definitions. Following the training period, the review workflow was as follows. First, a research assistant entered basic information and a preliminary assessment of the alarm's clinical validity and actionability into a REDCap (Research Electronic Data Capture; Vanderbilt University, Nashville, TN) database.[16] Later, a physician investigator secondarily reviewed all alarms and confirmed the assessments of the research assistants or, when disagreements occurred, discussed and reconciled the database. Alarms that remained unresolved after secondary review were flagged for review with an additional physician or nurse investigator in a team meeting.

Data Analysis

We summarized the patient and nurse subjects, the distributions of alarms, and the response times to potentially critical monitor alarms that occurred while neither the nurse nor any other clinicians were in the patient's room. We explored the data using plots of alarms and response times occurring within individual video sessions as well as with simple linear regression. Hypothesizing that any alarm fatigue effect would be strongest in the highest alarm patients, and having observed that alarms are distributed very unevenly across patients in both the PICU and ward, we made the decision not to use quartiles, but rather to form clinically meaningful categories. We also hypothesized that nurses might not exhibit alarm fatigue unless they were inundated with alarms. We thus divided the nonactionable alarm counts over the preceding 120 minutes into 3 categories: 0 to 29 alarms to represent a low to average alarm rate exhibited by the bottom 50% of the patients, 30 to 79 alarms to represent an elevated alarm rate, and 80+ alarms to represent an extremely high alarm rate exhibited by the top 5%. Because the exposure time was 120 minutes, we conducted the analysis on the alarms occurring after a nurse had been video recorded for at least 120 minutes.
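
Continuing the illustrative sketch above, the exposure counts could be binned into the three study categories and run-in alarms dropped as follows; the bin edges match the categories described here, while the column names remain assumptions.

```python
import pandas as pd


def categorize_exposure(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per potentially critical alarm with columns 'prior_nonactionable'
    (nonactionable alarm count over the preceding 120 minutes) and
    'minutes_into_session' (names assumed). Adds the 3-level exposure category and
    drops alarms from the 120-minute run-in period."""
    out = df[df["minutes_into_session"] >= 120].copy()
    out["exposure_group"] = pd.cut(out["prior_nonactionable"],
                                   bins=[0, 30, 80, float("inf")],
                                   right=False,            # intervals [0,30), [30,80), [80,inf)
                                   labels=["0-29", "30-79", "80+"])
    return out
```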

We further evaluated the relationship between nonactionable alarms and nurse response time with Kaplan‐Meier plots by nonactionable alarm count category using the observed response‐time data. The Kaplan‐Meier plots compared response time across the nonactionable alarm exposure group, without any statistical modeling. A log‐rank test stratified by nurse evaluated whether the distributions of response time in the Kaplan‐Meier plots differed across the 3 alarm exposure groups, accounting for within‐nurse clustering.
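
The sketch below shows how such Kaplan-Meier curves and a log-rank test might be produced with the lifelines library, under the same illustrative column names; unlike the analysis described here, this simplified version does not stratify the log-rank test by nurse.

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test


def km_by_exposure(df):
    """df: one row per out-of-room potentially critical alarm, with 'response_min'
    (minutes to response) and 'exposure_group' columns (names assumed). Plots
    Kaplan-Meier curves of time to response by exposure group and runs a log-rank
    test across the three groups (unstratified, for simplicity)."""
    ax = plt.gca()
    for group, sub in df.groupby("exposure_group", observed=True):
        KaplanMeierFitter().fit(sub["response_min"], label=str(group)) \
                           .plot_survival_function(ax=ax)
    ax.set_xlabel("Minutes since alarm")
    ax.set_ylabel("Proportion of alarms not yet responded to")
    result = multivariate_logrank_test(df["response_min"], df["exposure_group"])
    print(f"log-rank p-value: {result.p_value:.4f}")
    return ax
```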

Accelerated failure‐time regression based on the Weibull distribution then allowed us to compare response time across each alarm exposure group and provided confidence intervals. Accelerated failure‐time models are comparable to Cox models, but emphasize time to event rather than hazards.[17, 18] We determined that the Weibull distribution was suitable by evaluating smoothed hazard and log‐hazard plots, by confirming that the confidence intervals of the shape parameters in the Weibull models did not include 1, and by demonstrating that the Weibull model fit better than an alternative (exponential) model using the likelihood‐ratio test (P<0.0001 for PICU, P=0.02 for ward). Due to the small sample size of nurses and patients, we could not adjust for nurse‐ or patient‐level covariates in the model. When comparing the nonactionable alarm exposure groups in the regression model (0-29 vs 30-79, 30-79 vs 80+, and 0-29 vs 80+), we Bonferroni corrected the critical P value for the 3 comparisons, yielding a critical P value of 0.05/3 = 0.0167.
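
A minimal sketch of this modeling step using lifelines' WeibullAFTFitter is shown below, again under assumed column names and fit separately for each setting; it is an approximation of the approach described here, not the study's code.

```python
import pandas as pd
from lifelines import WeibullAFTFitter


def fit_weibull_aft(df: pd.DataFrame) -> WeibullAFTFitter:
    """df: response times ('response_min') and the 3-level 'exposure_group' for one
    setting (PICU or ward); column names are assumptions. Fits a Weibull accelerated
    failure-time model with the 0-29 group as the reference category. With no event
    column supplied, all response times are treated as observed events. The three
    pairwise contrasts are then judged against the Bonferroni-corrected threshold
    0.05 / 3 = 0.0167."""
    design = pd.get_dummies(df[["response_min", "exposure_group"]],
                            columns=["exposure_group"], drop_first=True, dtype=float)
    aft = WeibullAFTFitter()
    aft.fit(design, duration_col="response_min")
    aft.print_summary()
    return aft
```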

Nurse Questionnaire

At the session's conclusion, nurses completed a questionnaire that included demographics and asked, "Did you respond more quickly to monitor alarms during this study because you knew you were being filmed?" to measure whether nurses would report experiencing a Hawthorne‐like effect.[19, 20, 21]

RESULTS

We performed 40 sessions among 40 patients and 36 nurses over 210 hours. We performed 20 sessions in children with heart and/or lung failure in the PICU and 20 sessions in children on a general ward. Sessions took place on weekdays between 9:00 am and 6:00 pm. There were 3 occasions when we filmed 2 patients cared for by the same nurse at the same time.

Nurses were mostly female (94.4%) and had between 2 months and 28 years of experience (median, 4.8 years). Patients on the ward ranged from 5 days to 5.4 years old (median, 6 months). Patients in the PICU ranged from 5 months to 16 years old (median, 2.5 years). Among the PICU patients, 14 (70%) were receiving mechanical ventilation only, 3 (15%) were receiving vasopressors only, and 3 (15%) were receiving mechanical ventilation and vasopressors.

We observed 5070 alarms during the 40 sessions. We excluded 108 (2.1%) that occurred at the end of video recording sessions with the nurse absent from the room because the nurse's response could not be determined. Alarms per session ranged from 10 to 1430 (median, 75; interquartile range [IQR], 35-138). We excluded the outlier PICU patient with 1430 alarms in 5 hours from the analysis to avoid the potential for biasing the results. Figure 2 depicts the data flow.

Figure 2
Flow diagram of alarms used as exposures and outcomes in evaluating the association between nonactionable alarm exposure and response time.

Following the 5 training sessions, research assistants independently reviewed and made preliminary assessments on 4674 alarms; these alarms were all secondarily reviewed by a physician. Using the physician reviewer as the gold standard, the research assistants' sensitivity (assessing an alarm as actionable when the physician also assessed it as actionable) was 96.8%, and their specificity (assessing an alarm as nonactionable when the physician also assessed it as nonactionable) was 96.9%. We had to review 54 of 4674 alarms (1.2%) with an additional physician or nurse investigator to achieve consensus.
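
For clarity, these agreement figures correspond to the standard two-by-two calculation sketched below, with the physician review treated as the gold standard; the function is a generic illustration rather than the study's analysis code.

```python
def reviewer_agreement(ra_actionable, md_actionable):
    """ra_actionable, md_actionable: parallel sequences of booleans, one per alarm,
    giving the research assistant's and physician's assessments. Returns
    (sensitivity, specificity) of the research assistant against the physician."""
    pairs = list(zip(ra_actionable, md_actionable))
    tp = sum(1 for ra, md in pairs if ra and md)          # both call it actionable
    fn = sum(1 for ra, md in pairs if not ra and md)      # RA misses an actionable alarm
    tn = sum(1 for ra, md in pairs if not ra and not md)  # both call it nonactionable
    fp = sum(1 for ra, md in pairs if ra and not md)      # RA over-calls actionability
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```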

Characteristics of the 2445 alarms for clinical conditions are shown in Table 1. Only 12.9% of alarms in heart‐ and/or lung‐failure patients in the PICU were actionable, and only 1.0% of alarms in medical patients on a general inpatient ward were actionable.

Overall Response Times for Out‐of‐Room Alarms

We first evaluated response times without excluding alarms occurring prior to the 120‐minute mark. Of the 2445 clinical condition alarms, we excluded the 315 noncritical arrhythmia types from analysis of response time because they did not meet our definition of potentially critical alarms. Of the 2130 potentially critical alarms, 1185 (55.6%) occurred while neither the nurse nor any other clinician was in the patient's room. We proceeded to analyze the response time to these 1185 alarms (307 in the PICU and 878 on the ward). In the PICU, median response time was 3.3 minutes (IQR, 0.8-14.4). On the ward, median response time was 9.8 minutes (IQR, 3.2-22.4).
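
These summaries are plain medians and interquartile ranges of the observed response times; an equivalent calculation, under the same assumed column names used in the earlier sketches, is:

```python
import pandas as pd


def response_time_summary(df: pd.DataFrame) -> pd.DataFrame:
    """df: out-of-room potentially critical alarms with 'unit' ('PICU'/'ward') and
    'response_min' columns (names assumed). Returns the 25th, 50th, and 75th
    percentile response times per unit, i.e. the median and IQR reported above."""
    return df.groupby("unit")["response_min"].quantile([0.25, 0.5, 0.75]).unstack()
```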

Response‐Time Association With Nonactionable Alarm Exposure

Next, we analyzed the association between response time to potentially critical alarms that occurred when the nurse was not in the patient's room and the number of nonactionable alarms occurring over the preceding 120‐minute window. This required excluding the alarms that occurred in the first 120 minutes of each session, leaving 647 alarms with eligible response times for evaluating the association between prior nonactionable alarm exposure and response time: 219 in the PICU and 428 on the ward. Kaplan‐Meier plots and tabulated response times demonstrated the incremental relationships between each nonactionable alarm exposure category in the observed data, with the effects most prominent as the Kaplan‐Meier plots diverged beyond the median (Figure 3 and Table 2). Excluding the extreme outlier patient had no effect on the results, because 1378 of that patient's 1430 alarms occurred with the nurse present at the bedside, and only 2 of the remaining alarms were potentially critical.

Figure 3
Kaplan‐Meier plots of observed response times for pediatric intensive care unit (PICU) and ward. Abbreviations: ICU, intensive care unit.
Table 2. Association Between Nonactionable Alarm Exposure in Preceding 120 Minutes and Response Time to Potentially Critical Alarms, Based on Observed Data and With Response Time Modeled Using Weibull Accelerated Failure-Time Regression

                               Observed Data                                                      Accelerated Failure-Time Model
                               No. of Potentially   Minutes Elapsed Until This Percentage         Modeled Response   95% CI,      P Value*
                               Critical Alarms      of Alarms Was Responded To                    Time, min          min
                                                    50% (Median)   75%     90%     95%
PICU
  0-29 nonactionable alarms    70                   1.6            8.0     18.6    25.1           2.8                1.9-3.8      Reference
  30-79 nonactionable alarms   122                  6.3            17.8    22.5    26.0           5.3                4.0-6.7      0.001 (vs 0-29)
  80+ nonactionable alarms     27                   16.0           28.4    32.0    33.1           8.5                4.3-12.7     0.009 (vs 0-29), 0.15 (vs 30-79)
Ward
  0-29 nonactionable alarms    159                  9.8            17.8    25.0    28.9           7.7                6.3-9.1      Reference
  30-79 nonactionable alarms   211                  11.6           22.4    44.6    63.2           11.5               9.6-13.3     0.001 (vs 0-29)
  80+ nonactionable alarms     58                   8.3            57.6    63.8    69.5           15.6               11.0-20.1    0.001 (vs 0-29), 0.09 (vs 30-79)

NOTE: Abbreviations: CI, confidence interval; PICU, pediatric intensive care unit. *The critical P value used as the cut point between significant and nonsignificant, accounting for multiple comparisons, is 0.0167.

Accelerated failure‐time regressions revealed significant incremental increases in the modeled response time as the number of preceding nonactionable alarms increased in both the PICU and ward settings (Table 2).

Hawthorne‐like Effects

Four of the 36 nurses reported that they responded more quickly to monitor alarms because they knew they were being filmed.

DISCUSSION

Alarm fatigue has recently generated interest among nurses,[22] physicians,[23] regulatory bodies,[24] patient safety organizations,[25] and even attorneys,[26] despite a lack of prior evidence linking nonactionable alarm exposure to response time or other adverse patient‐relevant outcomes. This study's main findings were that (1) the vast majority of alarms were nonactionable, and (2) response time to alarms occurring while the nurse was out of the room increased as the number of nonactionable alarms over the preceding 120 minutes increased. These findings may be explained by alarm fatigue.

Our results build upon the findings of other related studies. The nonactionable alarm proportions we found were similar to those in other pediatric studies, which reported greater than 90% nonactionable alarms.[1, 2] One other study has reported a relationship between alarm exposure and response time. In that study, Voepel‐Lewis and colleagues evaluated nurse responses to pulse oximetry desaturation alarms in adult orthopedic surgery patients using time‐stamp data from their monitor notification system.[27] They found that alarm response time was significantly longer for patients in the highest quartile of alarms compared to those in lower quartiles. Our study provides new data suggesting a similar relationship between nonactionable alarm exposure and nurse response time in pediatric settings.

Our study has several limitations. First, as a preliminary study to investigate feasibility and possible association, the sample of patients and nurses was necessarily limited and did not permit adjustment for nurse‐ or patient‐level covariates. A multivariable analysis with a larger sample might provide insight into alternate explanations for these findings other than alarm fatigue, including measures of nurse workload and patient factors (such as age and illness severity). Additional factors that are not as easily measured can also contribute to the complex decision of when and how to respond to alarms.[28, 29] Second, nurses were aware that they were being video recorded as part of a study of nonactionable alarms, although they did not know the specific details of measurement. Although this lack of blinding might lead to a Hawthorne‐like effect, our positive results suggest that this effect, if present, did not fully obscure the association. Third, all sessions took place on weekdays during daytime hours, but effects of nonactionable alarms might vary by time and day. Finally, we suspect that when nurses experience critical alarms that require them to intervene and rescue a patient, their response times to that patient's alarms that occur later in their shift will be quicker due to a heightened concern for the alarm being actionable. We were unable to explore that relationship in this analysis because the number of critical alarms requiring intervention was very small. This is a topic of future study.

CONCLUSIONS

We identified an association between a nurse's prior exposure to nonactionable alarms and response time to future alarms. This finding is consistent with alarm fatigue, but requires further study to more clearly delineate other factors that might confound or modify that relationship.

Disclosures

This project was funded by the Health Research Formula Fund Grant 4100050891 from the Pennsylvania Department of Public Health Commonwealth Universal Research Enhancement Program (awarded to Drs. Keren and Bonafide). Dr. Bonafide is also supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors have no financial relationships or conflicts of interest relevant to this article to disclose.

References
  1. Lawless ST. Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med. 1994;22(6):981-985.
  2. Tsien CL, Fackler JC. Poor prognosis for existing monitors in the intensive care unit. Crit Care Med. 1997;25(4):614-619.
  3. Biot L, Carry PY, Perdrix JP, Eberhard A, Baconnier P. Clinical evaluation of alarm efficiency in intensive care [in French]. Ann Fr Anesth Reanim. 2000;19:459-466.
  4. Borowski M, Siebig S, Wrede C, Imhoff M. Reducing false alarms of intensive care online-monitoring systems: an evaluation of two signal extraction algorithms. Comput Math Methods Med. 2011;2011:143480.
  5. Chambrin MC, Ravaux P, Calvelo-Aros D, Jaborska A, Chopin C, Boniface B. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25:1360-1366.
  6. Görges M, Markewitz BA, Westenskow DR. Improving alarm performance in the medical intensive care unit using delays and clinical context. Anesth Analg. 2009;108:1546-1552.
  7. Graham KC, Cvach M. Monitor alarm fatigue: standardizing use of physiological monitors and decreasing nuisance alarms. Am J Crit Care. 2010;19:28-34.
  8. Siebig S, Kuhls S, Imhoff M, Gather U, Scholmerich J, Wrede CE. Intensive care unit alarms—how many do we need? Crit Care Med. 2010;38:451-456.
  9. Getty DJ, Swets JA, Rickett RM, Gonthier D. System operator response to warnings of danger: a laboratory investigation of the effects of the predictive value of a warning on human response time. J Exp Psychol Appl. 1995;1:19-33.
  10. Bliss JP, Gilson RD, Deaton JE. Human probability matching behaviour in response to alarms of varying reliability. Ergonomics. 1995;38:2300-2312.
  11. The Joint Commission. Sentinel event alert: medical device alarm safety in hospitals. 2013. Available at: http://www.jointcommission.org/sea_issue_50/. Accessed October 9, 2014.
  12. Mitka M. Joint commission warns of alarm fatigue: multitude of alarms from monitoring devices problematic. JAMA. 2013;309(22):2315-2316.
  13. Cvach M. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268-277.
  14. NIH Certificates of Confidentiality Kiosk. Available at: http://grants.nih.gov/grants/policy/coc/. Accessed April 21, 2014.
  15. Bonafide CP, Zander M, Graham CS, et al. Video methods for evaluating physiologic monitor alarms and alarm responses. Biomed Instrum Technol. 2014;48(3):220-230.
  16. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377-381.
  17. Collett D. Accelerated failure time and other parametric models. In: Modelling Survival Data in Medical Research. 2nd ed. Boca Raton, FL: Chapman & Hall/CRC; 2003:197-229.
  18. Cleves M, Gould W, Gutierrez RG, Marchenko YV. Parametric models. In: An Introduction to Survival Analysis Using Stata. 3rd ed. College Station, TX: Stata Press; 2010:229-244.
  19. Roethlisberger FJ, Dickson WJ. Management and the Worker. Cambridge, MA: Harvard University Press; 1939.
  20. Parsons HM. What happened at Hawthorne? Science. 1974;183(4128):922-932.
  21. Ballermann M, Shaw N, Mayes D, Gibney RN, Westbrook J. Validation of the Work Observation Method By Activity Timing (WOMBAT) method of conducting time-motion observations in critical care settings: an observational study. BMC Med Inform Decis Mak. 2011;11:32.
  22. Sendelbach S, Funk M. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378-386.
  23. Chopra V, McMahon LF. Redesigning hospital alarms for patient safety: alarmed and potentially dangerous. JAMA. 2014;311(12):1199-1200.
  24. The Joint Commission. The Joint Commission announces 2014 National Patient Safety Goal. Jt Comm Perspect. 2013;33:14.
  25. Top 10 health technology hazards for 2014. Health Devices. 2013;42(11):354-380.
  26. My Philly Lawyer. Medical malpractice: alarm fatigue threatens patient safety. 2014. Available at: http://www.myphillylawyer.com/Resources/Legal-Articles/Medical-Malpractice-Alarm-Fatigue-Threatens-Patient-Safety.shtml. Accessed April 4, 2014.
  27. Voepel-Lewis T, Parker ML, Burke CN, et al. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351-1358.
  28. Gazarian PK, Carrier N, Cohen R, Schram H, Shiromani S. A description of nurses' decision-making in managing electrocardiographic monitor alarms [published online ahead of print May 10, 2014]. J Clin Nurs. doi:10.1111/jocn.12625.
  29. Gazarian PK. Nurses' response to frequency and types of electrocardiography alarms in a non-critical care setting: a descriptive study. Int J Nurs Stud. 2014;51(2):190-197.
Issue
Journal of Hospital Medicine - 10(6)
Page Number
345-351

Hospital physiologic monitors can alert clinicians to early signs of physiologic deterioration, and thus have great potential to save lives. However, monitors generate frequent alarms,[1, 2, 3, 4, 5, 6, 7, 8] and most are not relevant to the patient's safety (over 90% of pediatric intensive care unit (PICU)[1, 2] and over 70% of adult intensive care alarms).[5, 6] In psychology experiments, humans rapidly learn to ignore or respond more slowly to alarms when exposed to high false‐alarm rates, exhibiting alarm fatigue.[9, 10] In 2013, The Joint Commission named alarm fatigue the most common contributing factor to alarm‐related sentinel events in hospitals.[11, 12]

Although alarm fatigue has been implicated as a major threat to patient safety, little empirical data support its existence in hospitals. In this study, we aimed to determine if there was an association between nurses' recent exposure to nonactionable physiologic monitor alarms and their response time to future alarms for the same patients. This exploratory work was designed to inform future research in this area, acknowledging that the sample size would be too small for multivariable modeling.

METHODS

Study Definitions

The alarm classification scheme is shown in Figure 1. Note that, for clarity, we have intentionally avoided using the terms true and false alarms because their interpretations vary across studies and can be misleading.

Figure 1
Alarm classification scheme.

Potentially Critical Alarm

A potentially critical alarm is any alarm for a clinical condition for which a timely response is important to determine if the alarm requires intervention to save the patient's life. This is based on the alarm type alone, including alarms for life‐threatening arrhythmias such as asystole and ventricular tachycardia, as well as alarms for vital signs outside the set limits. Supporting Table 1 in the online version of this article lists the breakdown of alarm types that we defined a priori as potentially and not potentially critical.

Characteristics of the 2,445 Alarms for Clinical Conditions
 PICUWard
Alarm typeNo.% of Total% Valid% ActionableNo.% of Total% Valid% Actionable
  • NOTE: Abbreviations: N/A, not applicable; PICU, pediatric intensive care unit.

Oxygen saturation19719.482.738.659041.224.41.9
Heart rate19419.195.41.026618.687.20.0
Respiratory rate22922.680.813.531622.148.11.0
Blood pressure25925.583.85.8110.872.70.0
Critical arrhythmia10.10.00.040.30.00.0
Noncritical arrhythmia717.02.80.024417.18.60.0
Central venous pressure494.80.00.000.0N/AN/A
Exhaled carbon dioxide141.492.950.000.0N/AN/A
Total1014100.075.612.91,431100.038.91.0

Valid Alarm

A valid alarm is any alarm that correctly identifies the physiologic status of the patient. Validity was based on waveform quality, lead signal strength indicators, and artifact conditions, referencing each monitor's operator's manual.

Actionable Alarm

An actionable alarm is any valid alarm for a clinical condition that either: (1) leads to a clinical intervention; (2) leads to a consultation with another clinician at the bedside (and thus visible on camera); or (3) is a situation that should have led to intervention or consultation, but the alarm was unwitnessed or misinterpreted by the staff at the bedside.

Nonactionable Alarm

An unactionable alarm is any alarm that does not meet the actionable definition above, including invalid alarms such as those caused by motion artifact, equipment/technical alarms, and alarms that are valid but nonactionable (nuisance alarms).[13]

Response Time

The response time is the time elapsed from when the alarm fired at the bedside to when the nurse entered the room or peered through a window or door, measured in seconds.

Setting and Subjects

We performed this study between August 2012 and July 2013 at a freestanding children's hospital. We evaluated nurses caring for 2 populations: (1) PICU patients with heart and/or lung failure (requiring inotropic support and/or invasive mechanical ventilation), and (2) medical patients on a general inpatient ward. Nurses caring for heart and/or lung failure patients in the PICU typically were assigned 1 to 2 total patients. Nurses on the medical ward typically were assigned 2 to 4 patients. We identified subjects from the population of nurses caring for eligible patients with parents available to provide in‐person consent in each setting. Our primary interest was to evaluate the association between nonactionable alarms and response time, and not to study the epidemiology of alarms in a random sample. Therefore, when alarm data were available prior to screening, we first approached nurses caring for patients in the top 25% of alarm rates for that unit over the preceding 4 hours. We identified preceding alarm rates using BedMasterEx (Excel Medical Electronics, Jupiter, FL).

Human Subjects Protection

This study was approved by the institutional review board of The Children's Hospital of Philadelphia. We obtained written in‐person consent from the patient's parent and the nurse subject. We obtained a Certificate of Confidentiality from the National Institutes of Health to further protect study participants.[14]

Monitoring Equipment

All patients in the PICU were monitored continuously using General Electric (GE) (Fairfield, CT) solar devices. All bed spaces on the wards include GE Dash monitors that are used if ordered. On the ward we studied, 30% to 50% of patients are typically monitored at any given time. In addition to alarming at the bedside, most clinical alarms also generated a text message sent to the nurse's wireless phone listing the room number and the word monitor. Messages did not provide any clinical information about the alarm or patient's status. There were no technicians reviewing alarms centrally.

Physicians used an order set to order monitoring, selecting 1 of 4 available preconfigured profiles: infant <6 months, infant 6 months to 1 year, child, and adult. The parameters for each age group are in Supporting Figure 1, available in the online version of this article. A physician order is required for a nurse to change the parameters. Participating in the study did not affect this workflow.

Primary Outcome

The primary outcome was the nurse's response time to potentially critical monitor alarms that occurred while neither they nor any other clinicians were in the patient's room.

Primary Exposure and Alarm Classification

The primary exposure was the number of nonactionable alarms in the same patient over the preceding 120 minutes (rolling and updated each minute). The alarm classification scheme is shown in Figure 1.

Due to technical limitations with obtaining time‐stamped alarm data from the different ventilators in use during the study period, we were unable to identify the causes of all ventilator alarms. Therefore, we included ventilator alarms that did not lead to clinical interventions as nonactionable alarm exposures, but we did not evaluate the response time to any ventilator alarms.

Data Collection

We combined video recordings with monitor time‐stamp data to evaluate the association between nonactionable alarms and the nurse's response time. Our detailed video recording and annotation methods have been published separately.[15] Briefly, we mounted up to 6 small video cameras in patients' rooms and recorded up to 6 hours per session. The cameras captured the monitor display, a wide view of the room, a close‐up view of the patient, and all windows and doors through which staff could visually assess the patient without entering the room.

Video Processing, Review, and Annotation

The first 5 video sessions were reviewed in a group training setting. Research assistants received instruction on how to determine alarm validity and actionability in accordance with the study definitions. Following the training period, the review workflow was as follows. First, a research assistant entered basic information and a preliminary assessment of the alarm's clinical validity and actionability into a REDCap (Research Electronic Data Capture; Vanderbilt University, Nashville, TN) database.[16] Later, a physician investigator secondarily reviewed all alarms and confirmed the assessments of the research assistants or, when disagreements occurred, discussed and reconciled the database. Alarms that remained unresolved after secondary review were flagged for review with an additional physician or nurse investigator in a team meeting.

Data Analysis

We summarized the patient and nurse subjects, the distributions of alarms, and the response times to potentially critical monitor alarms that occurred while neither the nurse nor any other clinicians were in the patient's room. We explored the data using plots of alarms and response times occurring within individual video sessions as well as with simple linear regression. Hypothesizing that any alarm fatigue effect would be strongest in the highest alarm patients, and having observed that alarms are distributed very unevenly across patients in both the PICU and ward, we made the decision not to use quartiles, but rather to form clinically meaningful categories. We also hypothesized that nurses might not exhibit alarm fatigue unless they were inundated with alarms. We thus divided the nonactionable alarm counts over the preceding 120 minutes into 3 categories: 0 to 29 alarms to represent a low to average alarm rate exhibited by the bottom 50% of the patients, 30 to 79 alarms to represent an elevated alarm rate, and 80+ alarms to represent an extremely high alarm rate exhibited by the top 5%. Because the exposure time was 120 minutes, we conducted the analysis on the alarms occurring after a nurse had been video recorded for at least 120 minutes.

We further evaluated the relationship between nonactionable alarms and nurse response time with Kaplan‐Meier plots by nonactionable alarm count category using the observed response‐time data. The Kaplan‐Meier plots compared response time across the nonactionable alarm exposure group, without any statistical modeling. A log‐rank test stratified by nurse evaluated whether the distributions of response time in the Kaplan‐Meier plots differed across the 3 alarm exposure groups, accounting for within‐nurse clustering.

Accelerated failure‐time regression based on the Weibull distribution then allowed us to compare response time across each alarm exposure group and provided confidence intervals. Accelerated failure‐time models are comparable to Cox models, but emphasize time to event rather than hazards.[17, 18] We determined that the Weibull distribution was suitable by evaluating smoothed hazard and log‐hazard plots, the confidence intervals of the shape parameters in the Weibull models that did not include 1, and by demonstrating that the Weibull model had better fit than an alternative (exponential) model using the likelihood‐ratio test (P<0.0001 for PICU, P=0.02 for ward). Due to the small sample size of nurses and patients, we could not adjust for nurse‐ or patient‐level covariates in the model. When comparing the nonactionable alarm exposure groups in the regression model (029 vs 3079, 3079 vs 80+, and 029 vs 80+), we Bonferroni corrected the critical P value for the 3 comparisons, for a critical P value of 0.05/3=0.0167.

Nurse Questionnaire

At the session's conclusion, nurses completed a questionnaire that included demographics and asked, Did you respond more quickly to monitor alarms during this study because you knew you were being filmed? to measure if nurses would report experiencing a Hawthorne‐like effect.[19, 20, 21]

RESULTS

We performed 40 sessions among 40 patients and 36 nurses over 210 hours. We performed 20 sessions in children with heart and/or lung failure in the PICU and 20 sessions in children on a general ward. Sessions took place on weekdays between 9:00 am and 6:00 pm. There were 3 occasions when we filmed 2 patients cared for by the same nurse at the same time.

Nurses were mostly female (94.4%) and had between 2 months and 28 years of experience (median, 4.8 years). Patients on the ward ranged from 5 days to 5.4 years old (median, 6 months). Patients in the PICU ranged from 5 months to 16 years old (median, 2.5 years). Among the PICU patients, 14 (70%) were receiving mechanical ventilation only, 3 (15%) were receiving vasopressors only, and 3 (15%) were receiving mechanical ventilation and vasopressors.

We observed 5070 alarms during the 40 sessions. We excluded 108 (2.1%) that occurred at the end of video recording sessions with the nurse absent from the room because the nurse's response could not be determined. Alarms per session ranged from 10 to 1430 (median, 75; interquartile range [IQR], 35138). We excluded the outlier PICU patient with 1430 alarms in 5 hours from the analysis to avoid the potential for biasing the results. Figure 2 depicts the data flow.

Figure 2
Flow diagram of alarms used as exposures and outcomes in evaluating the association between nonactionable alarm exposure and response time.

Following the 5 training sessions, research assistants independently reviewed and made preliminary assessments on 4674 alarms; these alarms were all secondarily reviewed by a physician. Using the physician reviewer as the gold standard, the research assistant's sensitivity (assess alarm as actionable when physician also assesses as actionable) was 96.8% and specificity (assess alarm as nonactionable when physician also assesses as nonactionable) was 96.9%. We had to review 54 of 4674 alarms (1.2%) with an additional physician or nurse investigator to achieve consensus.

Characteristics of the 2445 alarms for clinical conditions are shown in Table 1. Only 12.9% of alarms in heart‐ and/or lung‐failure patients in the PICU were actionable, and only 1.0% of alarms in medical patients on a general inpatient ward were actionable.

Overall Response Times for Out‐of‐Room Alarms

We first evaluated response times without excluding alarms occurring prior to the 120‐minute mark. Of the 2445 clinical condition alarms, we excluded the 315 noncritical arrhythmia types from analysis of response time because they did not meet our definition of potentially critical alarms. Of the 2130 potentially critical alarms, 1185 (55.6%) occurred while neither the nurse nor any other clinician was in the patient's room. We proceeded to analyze the response time to these 1185 alarms (307 in the PICU and 878 on the ward). In the PICU, median response time was 3.3 minutes (IQR, 0.814.4). On the ward, median response time was 9.8 minutes (IQR, 3.222.4).

Response‐Time Association With Nonactionable Alarm Exposure

Next, we analyzed the association between response time to potentially critical alarms that occurred when the nurse was not in the patient's room and the number of nonactionable alarms occurring over the preceding 120‐minute window. This required excluding the alarms that occurred in the first 120 minutes of each session, leaving 647 alarms with eligible response times to evaluate the exposure between prior nonactionable alarm exposure and response time: 219 in the PICU and 428 on the ward. Kaplan‐Meier plots and tabulated response times demonstrated the incremental relationships between each nonactionable alarm exposure category in the observed data, with the effects most prominent as the Kaplan‐Meier plots diverged beyond the median (Figure 3 and Table 2). Excluding the extreme outlier patient had no effect on the results, because 1378 of the 1430 alarms occurred with the nurse present at the bedside, and only 2 of the remaining alarms were potentially critical.

Figure 3
Kaplan‐Meier plots of observed response times for pediatric intensive care unit (PICU) and ward. Abbreviations: ICU, intensive care unit.
Association Between Nonactionable Alarm Exposure in Preceding 120 Minutes and Response Time to Potentially Critical Alarms Based on Observed Data and With Response Time Modeled Using Weibull Accelerated Failure‐Time Regression
 Observed DataAccelerated Failure‐Time Model
Number of Potentially Critical AlarmsMinutes Elapsed Until This Percentage of Alarms Was Responded toModeled Response Time, min95% CI, minP Value*
50% (Median)75%90%95%
  • NOTE: Abbreviations: CI, confidence interval; PICU, pediatric intensive care unit. *The critical P value used as the cut point between significant and nonsignificant, accounting for multiple comparisons, is 0.0167.

PICU        
029 nonactionable alarms701.68.018.625.12.81.9‐3.8Reference
3079 nonactionable alarms1226.317.822.526.05.34.06.70.001 (vs 029)
80+ nonactionable alarms2716.028.432.033.18.54.312.70.009 (vs 029), 0.15 (vs 3079)
Ward        
029 nonactionable alarms1599.817.825.028.97.76.39.1Reference
3079 nonactionable alarms21111.622.444.663.211.59.613.30.001 (vs 029)
80+ nonactionable alarms588.357.663.869.515.611.020.10.001 (vs 029), 0.09 (vs 3079)

Accelerated failure‐time regressions revealed significant incremental increases in the modeled response time as the number of preceding nonactionable alarms increased in both the PICU and ward settings (Table 2).

Hawthorne‐like Effects

Four of the 36 nurses reported that they responded more quickly to monitor alarms because they knew they were being filmed.

DISCUSSION

Alarm fatigue has recently generated interest among nurses,[22] physicians,[23] regulatory bodies,[24] patient safety organizations,[25] and even attorneys,[26] despite a lack of prior evidence linking nonactionable alarm exposure to response time or other adverse patient‐relevant outcomes. This study's main findings were that (1) the vast majority of alarms were nonactionable, (2) response time to alarms occurring while the nurse was out of the room increased as the number of nonactionable alarms over the preceding 120 minutes increased. These findings may be explained by alarm fatigue.

Our results build upon the findings of other related studies. The nonactionable alarm proportions we found were similar to other pediatric studies, reporting greater than 90% nonactionable alarms.[1, 2] One other study has reported a relationship between alarm exposure and response time. In that study, Voepel‐Lewis and colleagues evaluated nurse responses to pulse oximetry desaturation alarms in adult orthopedic surgery patients using time‐stamp data from their monitor notification system.[27] They found that alarm response time was significantly longer for patients in the highest quartile of alarms compared to those in lower quartiles. Our study provides new data suggesting a relationship between nonactionable alarm exposure and nurse response time.

Our study has several limitations. First, as a preliminary study to investigate feasibility and possible association, the sample of patients and nurses was necessarily limited and did not permit adjustment for nurse‐ or patient‐level covariates. A multivariable analysis with a larger sample might provide insight into alternate explanations for these findings other than alarm fatigue, including measures of nurse workload and patient factors (such as age and illness severity). Additional factors that are not as easily measured can also contribute to the complex decision of when and how to respond to alarms.[28, 29] Second, nurses were aware that they were being video recorded as part of a study of nonactionable alarms, although they did not know the specific details of measurement. Although this lack of blinding might lead to a Hawthorne‐like effect, our positive results suggest that this effect, if present, did not fully obscure the association. Third, all sessions took place on weekdays during daytime hours, but effects of nonactionable alarms might vary by time and day. Finally, we suspect that when nurses experience critical alarms that require them to intervene and rescue a patient, their response times to that patient's alarms that occur later in their shift will be quicker due to a heightened concern for the alarm being actionable. We were unable to explore that relationship in this analysis because the number of critical alarms requiring intervention was very small. This is a topic of future study.

CONCLUSIONS

We identified an association between a nurse's prior exposure to nonactionable alarms and response time to future alarms. This finding is consistent with alarm fatigue, but requires further study to more clearly delineate other factors that might confound or modify that relationship.

Disclosures

This project was funded by the Health Research Formula Fund Grant 4100050891 from the Pennsylvania Department of Public Health Commonwealth Universal Research Enhancement Program (awarded to Drs. Keren and Bonafide). Dr. Bonafide is also supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors have no financial relationships or conflicts of interest relevant to this article to disclose.

Hospital physiologic monitors can alert clinicians to early signs of physiologic deterioration, and thus have great potential to save lives. However, monitors generate frequent alarms,[1, 2, 3, 4, 5, 6, 7, 8] and most are not relevant to the patient's safety (over 90% of pediatric intensive care unit (PICU)[1, 2] and over 70% of adult intensive care alarms).[5, 6] In psychology experiments, humans rapidly learn to ignore or respond more slowly to alarms when exposed to high false‐alarm rates, exhibiting alarm fatigue.[9, 10] In 2013, The Joint Commission named alarm fatigue the most common contributing factor to alarm‐related sentinel events in hospitals.[11, 12]

Although alarm fatigue has been implicated as a major threat to patient safety, little empirical data support its existence in hospitals. In this study, we aimed to determine if there was an association between nurses' recent exposure to nonactionable physiologic monitor alarms and their response time to future alarms for the same patients. This exploratory work was designed to inform future research in this area, acknowledging that the sample size would be too small for multivariable modeling.

METHODS

Study Definitions

The alarm classification scheme is shown in Figure 1. Note that, for clarity, we have intentionally avoided using the terms true and false alarms because their interpretations vary across studies and can be misleading.

Figure 1
Alarm classification scheme.

Potentially Critical Alarm

A potentially critical alarm is any alarm for a clinical condition for which a timely response is important to determine if the alarm requires intervention to save the patient's life. This is based on the alarm type alone, including alarms for life‐threatening arrhythmias such as asystole and ventricular tachycardia, as well as alarms for vital signs outside the set limits. Supporting Table 1 in the online version of this article lists the breakdown of alarm types that we defined a priori as potentially and not potentially critical.

Characteristics of the 2,445 Alarms for Clinical Conditions
 PICUWard
Alarm typeNo.% of Total% Valid% ActionableNo.% of Total% Valid% Actionable
  • NOTE: Abbreviations: N/A, not applicable; PICU, pediatric intensive care unit.

Oxygen saturation19719.482.738.659041.224.41.9
Heart rate19419.195.41.026618.687.20.0
Respiratory rate22922.680.813.531622.148.11.0
Blood pressure25925.583.85.8110.872.70.0
Critical arrhythmia10.10.00.040.30.00.0
Noncritical arrhythmia717.02.80.024417.18.60.0
Central venous pressure494.80.00.000.0N/AN/A
Exhaled carbon dioxide141.492.950.000.0N/AN/A
Total1014100.075.612.91,431100.038.91.0

Valid Alarm

A valid alarm is any alarm that correctly identifies the physiologic status of the patient. Validity was based on waveform quality, lead signal strength indicators, and artifact conditions, referencing each monitor's operator's manual.

Actionable Alarm

An actionable alarm is any valid alarm for a clinical condition that either: (1) leads to a clinical intervention; (2) leads to a consultation with another clinician at the bedside (and thus visible on camera); or (3) is a situation that should have led to intervention or consultation, but the alarm was unwitnessed or misinterpreted by the staff at the bedside.

Nonactionable Alarm

An unactionable alarm is any alarm that does not meet the actionable definition above, including invalid alarms such as those caused by motion artifact, equipment/technical alarms, and alarms that are valid but nonactionable (nuisance alarms).[13]

Response Time

The response time is the time elapsed from when the alarm fired at the bedside to when the nurse entered the room or peered through a window or door, measured in seconds.

Setting and Subjects

We performed this study between August 2012 and July 2013 at a freestanding children's hospital. We evaluated nurses caring for 2 populations: (1) PICU patients with heart and/or lung failure (requiring inotropic support and/or invasive mechanical ventilation), and (2) medical patients on a general inpatient ward. Nurses caring for heart and/or lung failure patients in the PICU typically were assigned 1 to 2 total patients. Nurses on the medical ward typically were assigned 2 to 4 patients. We identified subjects from the population of nurses caring for eligible patients with parents available to provide in‐person consent in each setting. Our primary interest was to evaluate the association between nonactionable alarms and response time, and not to study the epidemiology of alarms in a random sample. Therefore, when alarm data were available prior to screening, we first approached nurses caring for patients in the top 25% of alarm rates for that unit over the preceding 4 hours. We identified preceding alarm rates using BedMasterEx (Excel Medical Electronics, Jupiter, FL).

Human Subjects Protection

This study was approved by the institutional review board of The Children's Hospital of Philadelphia. We obtained written in‐person consent from the patient's parent and the nurse subject. We obtained a Certificate of Confidentiality from the National Institutes of Health to further protect study participants.[14]

Monitoring Equipment

All patients in the PICU were monitored continuously using General Electric (GE) (Fairfield, CT) solar devices. All bed spaces on the wards include GE Dash monitors that are used if ordered. On the ward we studied, 30% to 50% of patients are typically monitored at any given time. In addition to alarming at the bedside, most clinical alarms also generated a text message sent to the nurse's wireless phone listing the room number and the word monitor. Messages did not provide any clinical information about the alarm or patient's status. There were no technicians reviewing alarms centrally.

Physicians used an order set to order monitoring, selecting 1 of 4 available preconfigured profiles: infant <6 months, infant 6 months to 1 year, child, and adult. The parameters for each age group are in Supporting Figure 1, available in the online version of this article. A physician order is required for a nurse to change the parameters. Participating in the study did not affect this workflow.

Primary Outcome

The primary outcome was the nurse's response time to potentially critical monitor alarms that occurred while neither they nor any other clinicians were in the patient's room.

Primary Exposure and Alarm Classification

The primary exposure was the number of nonactionable alarms in the same patient over the preceding 120 minutes (rolling and updated each minute). The alarm classification scheme is shown in Figure 1.

Due to technical limitations with obtaining time‐stamped alarm data from the different ventilators in use during the study period, we were unable to identify the causes of all ventilator alarms. Therefore, we included ventilator alarms that did not lead to clinical interventions as nonactionable alarm exposures, but we did not evaluate the response time to any ventilator alarms.

Data Collection

We combined video recordings with monitor time‐stamp data to evaluate the association between nonactionable alarms and the nurse's response time. Our detailed video recording and annotation methods have been published separately.[15] Briefly, we mounted up to 6 small video cameras in patients' rooms and recorded up to 6 hours per session. The cameras captured the monitor display, a wide view of the room, a close‐up view of the patient, and all windows and doors through which staff could visually assess the patient without entering the room.

Video Processing, Review, and Annotation

The first 5 video sessions were reviewed in a group training setting. Research assistants received instruction on how to determine alarm validity and actionability in accordance with the study definitions. Following the training period, the review workflow was as follows. First, a research assistant entered basic information and a preliminary assessment of the alarm's clinical validity and actionability into a REDCap (Research Electronic Data Capture; Vanderbilt University, Nashville, TN) database.[16] Later, a physician investigator secondarily reviewed all alarms and confirmed the assessments of the research assistants or, when disagreements occurred, discussed and reconciled the database. Alarms that remained unresolved after secondary review were flagged for review with an additional physician or nurse investigator in a team meeting.

Data Analysis

We summarized the patient and nurse subjects, the distributions of alarms, and the response times to potentially critical monitor alarms that occurred while neither the nurse nor any other clinicians were in the patient's room. We explored the data using plots of alarms and response times occurring within individual video sessions as well as with simple linear regression. Hypothesizing that any alarm fatigue effect would be strongest in the highest alarm patients, and having observed that alarms are distributed very unevenly across patients in both the PICU and ward, we made the decision not to use quartiles, but rather to form clinically meaningful categories. We also hypothesized that nurses might not exhibit alarm fatigue unless they were inundated with alarms. We thus divided the nonactionable alarm counts over the preceding 120 minutes into 3 categories: 0 to 29 alarms to represent a low to average alarm rate exhibited by the bottom 50% of the patients, 30 to 79 alarms to represent an elevated alarm rate, and 80+ alarms to represent an extremely high alarm rate exhibited by the top 5%. Because the exposure time was 120 minutes, we conducted the analysis on the alarms occurring after a nurse had been video recorded for at least 120 minutes.

We further evaluated the relationship between nonactionable alarms and nurse response time with Kaplan‐Meier plots by nonactionable alarm count category using the observed response‐time data. The Kaplan‐Meier plots compared response time across the nonactionable alarm exposure groups, without any statistical modeling. A log‐rank test stratified by nurse evaluated whether the distributions of response time in the Kaplan‐Meier plots differed across the 3 alarm exposure groups, accounting for within‐nurse clustering.
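A minimal sketch of this comparison using the Python lifelines package is shown below. The table of response times, its column names, and its values are hypothetical, and the log-rank test here is unstratified, whereas the study stratified the test by nurse to account for within-nurse clustering.

```python
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical analysis table: one row per out-of-room potentially critical
# alarm, with the exposure category, the observed response time in minutes,
# and an event indicator (1 = a response was observed before the recording
# ended). Column names and values are assumptions for this sketch.
responses = pd.DataFrame({
    "exposure_category": ["0-29", "0-29", "30-79", "30-79", "80+", "80+"],
    "response_min": [1.6, 8.0, 6.3, 17.8, 16.0, 28.4],
    "responded": [1, 1, 1, 1, 1, 0],
})

fig, ax = plt.subplots()
kmf = KaplanMeierFitter()
for category, grp in responses.groupby("exposure_category"):
    kmf.fit(grp["response_min"], event_observed=grp["responded"], label=category)
    kmf.plot_survival_function(ax=ax)  # proportion of alarms not yet responded to
ax.set_xlabel("Minutes since alarm")
ax.set_ylabel("Proportion of alarms without a response")

# Unstratified log-rank test across the three exposure groups; the study
# stratified this test by nurse, which this minimal sketch omits.
result = multivariate_logrank_test(
    responses["response_min"], responses["exposure_category"], responses["responded"]
)
print(result.p_value)
```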

Accelerated failure‐time regression based on the Weibull distribution then allowed us to compare response time across the alarm exposure groups and provided confidence intervals. Accelerated failure‐time models are comparable to Cox models but emphasize time to event rather than hazards.[17, 18] We determined that the Weibull distribution was suitable by evaluating smoothed hazard and log‐hazard plots, by noting that the confidence intervals of the shape parameters in the Weibull models did not include 1, and by demonstrating that the Weibull model fit better than an alternative (exponential) model using the likelihood‐ratio test (P<0.0001 for the PICU, P=0.02 for the ward). Due to the small sample size of nurses and patients, we could not adjust for nurse‐ or patient‐level covariates in the model. When comparing the nonactionable alarm exposure groups in the regression model (0–29 vs 30–79, 30–79 vs 80+, and 0–29 vs 80+), we Bonferroni-corrected the critical P value for the 3 comparisons, yielding a critical P value of 0.05/3=0.0167.
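In a standard parameterization (not necessarily the exact one produced by the authors' software), a Weibull accelerated failure-time model with indicator covariates for the two higher exposure categories can be written as:

```latex
% Weibull accelerated failure-time model for response time T, with indicator
% covariates for the two higher nonactionable-alarm exposure categories
% (a standard parameterization; the exact software parameterization may differ).
\begin{align*}
\log T &= \beta_0
        + \beta_1\,\mathbb{1}[\text{30--79 alarms}]
        + \beta_2\,\mathbb{1}[\text{80+ alarms}]
        + \sigma\varepsilon,
  \qquad \varepsilon \sim \text{standard extreme value},\\
e^{\beta_j} &= \text{time ratio for group } j \text{ relative to the 0--29 group},\\
\Lambda &= -2\left(\ell_{\text{exponential}} - \ell_{\text{Weibull}}\right) \sim \chi^2_1
  \qquad \text{(the exponential model is the special case } \sigma = 1\text{)}.
\end{align*}
```

Because the exponential model is nested within the Weibull model, the likelihood-ratio statistic has 1 degree of freedom, and a Weibull shape parameter whose confidence interval excludes 1 points to the same conclusion.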

Nurse Questionnaire

At the session's conclusion, nurses completed a questionnaire that included demographics and asked, "Did you respond more quickly to monitor alarms during this study because you knew you were being filmed?" to measure whether nurses would report experiencing a Hawthorne‐like effect.[19, 20, 21]

RESULTS

We performed 40 sessions among 40 patients and 36 nurses over 210 hours. We performed 20 sessions in children with heart and/or lung failure in the PICU and 20 sessions in children on a general ward. Sessions took place on weekdays between 9:00 am and 6:00 pm. There were 3 occasions when we filmed 2 patients cared for by the same nurse at the same time.

Nurses were mostly female (94.4%) and had between 2 months and 28 years of experience (median, 4.8 years). Patients on the ward ranged from 5 days to 5.4 years old (median, 6 months). Patients in the PICU ranged from 5 months to 16 years old (median, 2.5 years). Among the PICU patients, 14 (70%) were receiving mechanical ventilation only, 3 (15%) were receiving vasopressors only, and 3 (15%) were receiving mechanical ventilation and vasopressors.

We observed 5070 alarms during the 40 sessions. We excluded 108 (2.1%) that occurred at the end of video recording sessions with the nurse absent from the room because the nurse's response could not be determined. Alarms per session ranged from 10 to 1430 (median, 75; interquartile range [IQR], 35–138). We excluded the outlier PICU patient with 1430 alarms in 5 hours from the analysis to avoid the potential for biasing the results. Figure 2 depicts the data flow.

Figure 2. Flow diagram of alarms used as exposures and outcomes in evaluating the association between nonactionable alarm exposure and response time.

Following the 5 training sessions, research assistants independently reviewed and made preliminary assessments on 4674 alarms; these alarms were all secondarily reviewed by a physician. Using the physician reviewer as the gold standard, the research assistant's sensitivity (assess alarm as actionable when physician also assesses as actionable) was 96.8% and specificity (assess alarm as nonactionable when physician also assesses as nonactionable) was 96.9%. We had to review 54 of 4674 alarms (1.2%) with an additional physician or nurse investigator to achieve consensus.
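For clarity, these agreement measures are simple proportions from the research assistant-versus-physician cross-tabulation. A minimal sketch of the calculation with hypothetical counts (the study reported only the resulting percentages, not the underlying 2x2 table) is:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity: of the alarms the physician judged actionable, the fraction
    the research assistant also called actionable (tp / (tp + fn)).
    Specificity: of the alarms the physician judged nonactionable, the fraction
    the research assistant also called nonactionable (tn / (tn + fp))."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts chosen only to illustrate the calculation; the study
# reported the resulting percentages (96.8% and 96.9%), not the underlying table.
sens, spec = sensitivity_specificity(tp=60, fn=2, tn=960, fp=31)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```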

Characteristics of the 2445 clinical condition alarms are shown in Table 1. Only 12.9% of alarms in heart‐ and/or lung‐failure patients in the PICU were actionable, and only 1.0% of alarms in medical patients on a general inpatient ward were actionable.

Overall Response Times for Out‐of‐Room Alarms

We first evaluated response times without excluding alarms occurring prior to the 120‐minute mark. Of the 2445 clinical condition alarms, we excluded the 315 noncritical arrhythmia alarms from the analysis of response time because they did not meet our definition of potentially critical alarms. Of the 2130 potentially critical alarms, 1185 (55.6%) occurred while neither the nurse nor any other clinician was in the patient's room. We proceeded to analyze the response time to these 1185 alarms (307 in the PICU and 878 on the ward). In the PICU, median response time was 3.3 minutes (IQR, 0.8–14.4 minutes). On the ward, median response time was 9.8 minutes (IQR, 3.2–22.4 minutes).

Response‐Time Association With Nonactionable Alarm Exposure

Next, we analyzed the association between the response time to potentially critical alarms that occurred when the nurse was not in the patient's room and the number of nonactionable alarms occurring over the preceding 120‐minute window. This required excluding the alarms that occurred in the first 120 minutes of each session, leaving 647 alarms with eligible response times for evaluating the association between prior nonactionable alarm exposure and response time: 219 in the PICU and 428 on the ward. Kaplan‐Meier plots and tabulated response times demonstrated the incremental relationships between the nonactionable alarm exposure categories in the observed data, with the effects most prominent as the Kaplan‐Meier plots diverged beyond the median (Figure 3 and Table 2). Excluding the extreme outlier patient had no effect on the results, because 1378 of that patient's 1430 alarms occurred with the nurse present at the bedside, and only 2 of the remaining alarms were potentially critical.

Figure 3. Kaplan‐Meier plots of observed response times for the pediatric intensive care unit (PICU) and ward.
Table 2. Association Between Nonactionable Alarm Exposure in Preceding 120 Minutes and Response Time to Potentially Critical Alarms, Based on Observed Data and With Response Time Modeled Using Weibull Accelerated Failure-Time Regression

Exposure Group | No. of Potentially Critical Alarms | 50% (Median) | 75% | 90% | 95% | Modeled Response Time, min | 95% CI, min | P Value*
PICU | | | | | | | |
0–29 nonactionable alarms | 70 | 1.6 | 8.0 | 18.6 | 25.1 | 2.8 | 1.9–3.8 | Reference
30–79 nonactionable alarms | 122 | 6.3 | 17.8 | 22.5 | 26.0 | 5.3 | 4.0–6.7 | 0.001 (vs 0–29)
80+ nonactionable alarms | 27 | 16.0 | 28.4 | 32.0 | 33.1 | 8.5 | 4.3–12.7 | 0.009 (vs 0–29), 0.15 (vs 30–79)
Ward | | | | | | | |
0–29 nonactionable alarms | 159 | 9.8 | 17.8 | 25.0 | 28.9 | 7.7 | 6.3–9.1 | Reference
30–79 nonactionable alarms | 211 | 11.6 | 22.4 | 44.6 | 63.2 | 11.5 | 9.6–13.3 | 0.001 (vs 0–29)
80+ nonactionable alarms | 58 | 8.3 | 57.6 | 63.8 | 69.5 | 15.6 | 11.0–20.1 | 0.001 (vs 0–29), 0.09 (vs 30–79)

NOTE: The percentage columns (50%–95%) give the observed minutes elapsed until that percentage of alarms was responded to; the last 3 columns are from the accelerated failure-time model. Abbreviations: CI, confidence interval; PICU, pediatric intensive care unit. *The critical P value used as the cut point between significant and nonsignificant, accounting for multiple comparisons, is 0.0167.

Accelerated failure‐time regressions revealed significant incremental increases in the modeled response time as the number of preceding nonactionable alarms increased in both the PICU and ward settings (Table 2).
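As a rough gauge of effect size, the modeled response times in Table 2 imply the following approximate time ratios relative to the 0–29 alarm group (derived here from the table values; these ratios were not reported as part of the original analysis):

```latex
% Approximate time ratios implied by the modeled response times in Table 2,
% relative to the 0--29 nonactionable-alarm group (derived from table values).
\begin{align*}
\text{PICU:} \quad & \frac{5.3}{2.8} \approx 1.9,
  \qquad \frac{8.5}{2.8} \approx 3.0,\\
\text{Ward:} \quad & \frac{11.5}{7.7} \approx 1.5,
  \qquad \frac{15.6}{7.7} \approx 2.0.
\end{align*}
```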

Hawthorne‐like Effects

Four of the 36 nurses reported that they responded more quickly to monitor alarms because they knew they were being filmed.

DISCUSSION

Alarm fatigue has recently generated interest among nurses,[22] physicians,[23] regulatory bodies,[24] patient safety organizations,[25] and even attorneys,[26] despite a lack of prior evidence linking nonactionable alarm exposure to response time or other adverse patient‐relevant outcomes. This study's main findings were that (1) the vast majority of alarms were nonactionable, and (2) response time to alarms that occurred while the nurse was out of the room increased as the number of nonactionable alarms over the preceding 120 minutes increased. These findings may be explained by alarm fatigue.

Our results build upon the findings of other related studies. The nonactionable alarm proportions we found were similar to those of other pediatric studies, which reported that greater than 90% of alarms were nonactionable.[1, 2] One other study has reported a relationship between alarm exposure and response time. In that study, Voepel‐Lewis and colleagues evaluated nurse responses to pulse oximetry desaturation alarms in adult orthopedic surgery patients using time‐stamp data from their monitor notification system.[27] They found that alarm response time was significantly longer for patients in the highest quartile of alarms compared to those in lower quartiles. Our study provides new data suggesting a relationship between nonactionable alarm exposure and nurse response time in pediatric inpatient settings.

Our study has several limitations. First, as a preliminary study designed to investigate feasibility and possible association, the sample of patients and nurses was necessarily limited and did not permit adjustment for nurse‐ or patient‐level covariates. A multivariable analysis with a larger sample might provide insight into explanations for these findings other than alarm fatigue, including measures of nurse workload and patient factors (such as age and illness severity). Additional factors that are not as easily measured can also contribute to the complex decision of when and how to respond to alarms.[28, 29] Second, nurses were aware that they were being video recorded as part of a study of nonactionable alarms, although they did not know the specific details of measurement. Although this lack of blinding might lead to a Hawthorne‐like effect, our positive results suggest that this effect, if present, did not fully obscure the association. Third, all sessions took place on weekdays during daytime hours, but the effects of nonactionable alarms might vary by time of day and day of week. Finally, we suspect that when nurses experience critical alarms that require them to intervene and rescue a patient, their response times to that patient's alarms later in the shift will be quicker due to a heightened concern that the alarms are actionable. We were unable to explore that relationship in this analysis because the number of critical alarms requiring intervention was very small. This is a topic for future study.

CONCLUSIONS

We identified an association between a nurse's prior exposure to nonactionable alarms and response time to future alarms. This finding is consistent with alarm fatigue, but requires further study to more clearly delineate other factors that might confound or modify that relationship.

Disclosures

This project was funded by the Health Research Formula Fund Grant 4100050891 from the Pennsylvania Department of Health Commonwealth Universal Research Enhancement Program (awarded to Drs. Keren and Bonafide). Dr. Bonafide is also supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors have no financial relationships or conflicts of interest relevant to this article to disclose.

References
  1. Lawless ST. Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med. 1994;22(6):981–985.
  2. Tsien CL, Fackler JC. Poor prognosis for existing monitors in the intensive care unit. Crit Care Med. 1997;25(4):614–619.
  3. Biot L, Carry PY, Perdrix JP, Eberhard A, Baconnier P. Clinical evaluation of alarm efficiency in intensive care [in French]. Ann Fr Anesth Reanim. 2000;19:459–466.
  4. Borowski M, Siebig S, Wrede C, Imhoff M. Reducing false alarms of intensive care online‐monitoring systems: an evaluation of two signal extraction algorithms. Comput Math Methods Med. 2011;2011:143480.
  5. Chambrin MC, Ravaux P, Calvelo‐Aros D, Jaborska A, Chopin C, Boniface B. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25:1360–1366.
  6. Görges M, Markewitz BA, Westenskow DR. Improving alarm performance in the medical intensive care unit using delays and clinical context. Anesth Analg. 2009;108:1546–1552.
  7. Graham KC, Cvach M. Monitor alarm fatigue: standardizing use of physiological monitors and decreasing nuisance alarms. Am J Crit Care. 2010;19:28–34.
  8. Siebig S, Kuhls S, Imhoff M, Gather U, Scholmerich J, Wrede CE. Intensive care unit alarms—how many do we need? Crit Care Med. 2010;38:451–456.
  9. Getty DJ, Swets JA, Rickett RM, Gonthier D. System operator response to warnings of danger: a laboratory investigation of the effects of the predictive value of a warning on human response time. J Exp Psychol Appl. 1995;1:19–33.
  10. Bliss JP, Gilson RD, Deaton JE. Human probability matching behaviour in response to alarms of varying reliability. Ergonomics. 1995;38:2300–2312.
  11. The Joint Commission. Sentinel event alert: medical device alarm safety in hospitals. 2013. Available at: http://www.jointcommission.org/sea_issue_50/. Accessed October 9, 2014.
  12. Mitka M. Joint commission warns of alarm fatigue: multitude of alarms from monitoring devices problematic. JAMA. 2013;309(22):2315–2316.
  13. Cvach M. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268–277.
  14. NIH Certificates of Confidentiality Kiosk. Available at: http://grants.nih.gov/grants/policy/coc/. Accessed April 21, 2014.
  15. Bonafide CP, Zander M, Graham CS, et al. Video methods for evaluating physiologic monitor alarms and alarm responses. Biomed Instrum Technol. 2014;48(3):220–230.
  16. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata‐driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381.
  17. Collett D. Accelerated failure time and other parametric models. In: Modelling Survival Data in Medical Research. 2nd ed. Boca Raton, FL: Chapman & Hall/CRC; 2003:197–229.
  18. Cleves M, Gould W, Gutierrez RG, Marchenko YV. Parametric models. In: An Introduction to Survival Analysis Using Stata. 3rd ed. College Station, TX: Stata Press; 2010:229–244.
  19. Roethlisberger FJ, Dickson WJ. Management and the Worker. Cambridge, MA: Harvard University Press; 1939.
  20. Parsons HM. What happened at Hawthorne? Science. 1974;183(4128):922–932.
  21. Ballermann M, Shaw N, Mayes D, Gibney RN, Westbrook J. Validation of the Work Observation Method By Activity Timing (WOMBAT) method of conducting time‐motion observations in critical care settings: an observational study. BMC Med Inform Decis Mak. 2011;11:32.
  22. Sendelbach S, Funk M. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378–386.
  23. Chopra V, McMahon LF. Redesigning hospital alarms for patient safety: alarmed and potentially dangerous. JAMA. 2014;311(12):1199–1200.
  24. The Joint Commission. The Joint Commission announces 2014 National Patient Safety Goal. Jt Comm Perspect. 2013;33:1–4.
  25. Top 10 health technology hazards for 2014. Health Devices. 2013;42(11):354–380.
  26. My Philly Lawyer. Medical malpractice: alarm fatigue threatens patient safety. 2014. Available at: http://www.myphillylawyer.com/Resources/Legal-Articles/Medical-Malpractice-Alarm-Fatigue-Threatens-Patient-Safety.shtml. Accessed April 4, 2014.
  27. Voepel‐Lewis T, Parker ML, Burke CN, et al. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351–1358.
  28. Gazarian PK, Carrier N, Cohen R, Schram H, Shiromani S. A description of nurses' decision‐making in managing electrocardiographic monitor alarms [published online ahead of print May 10, 2014]. J Clin Nurs. doi:10.1111/jocn.12625.
  29. Gazarian PK. Nurses' response to frequency and types of electrocardiography alarms in a non‐critical care setting: a descriptive study. Int J Nurs Stud. 2014;51(2):190–197.
Issue
Journal of Hospital Medicine - 10(6)
Page Number
345-351
Display Headline
Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital
Article Source

© 2015 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Christopher P. Bonafide, MD, The Children's Hospital of Philadelphia, 34th St. and Civic Center Blvd., Suite 12NW80, Philadelphia, PA 19104; Telephone: 267‐426‐2901; E‐mail: [email protected]