Review of Physiologic Monitor Alarms
Clinical alarm safety has become a recent target for improvement in many hospitals. In 2013, The Joint Commission released a National Patient Safety Goal prompting accredited hospitals to establish alarm safety as a hospital priority, identify the most important alarm signals to manage, and, by 2016, develop policies and procedures that address alarm management.[1] In addition, the Emergency Care Research Institute has named alarm hazards the top health technology hazard each year since 2012.[2]
The primary arguments supporting the elevation of alarm management to a national hospital priority in the United States include the following: (1) clinicians rely on alarms to notify them of important physiologic changes, (2) alarms occur frequently and usually do not warrant clinical intervention, and (3) alarm overload renders clinicians unable to respond to all alarms, resulting in alarm fatigue: responding more slowly or ignoring alarms that may represent actual clinical deterioration.[3, 4] These arguments are built largely on anecdotal data, reported safety event databases, and small studies that have not previously been systematically analyzed.
Despite the national focus on alarms, we still know very little about fundamental questions key to improving alarm safety. In this systematic review, we aimed to answer 3 key questions about physiologic monitor alarms: (1) What proportion of alarms warrant attention or clinical intervention (ie, actionable alarms), and how does this proportion vary between adult and pediatric populations and between intensive care unit (ICU) and ward settings? (2) What is the relationship between alarm exposure and clinician response time? (3) What interventions are effective in reducing the frequency of alarms?
We limited our scope to monitor alarms because few studies have evaluated the characteristics of alarms from other medical devices, and because missing relevant monitor alarms could adversely impact patient safety.
METHODS
We performed a systematic review of the literature in accordance with the Meta‐Analysis of Observational Studies in Epidemiology guidelines[5] and developed this manuscript using the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement.[6]
Eligibility Criteria
With help from an experienced biomedical librarian (C.D.S.), we searched PubMed, the Cumulative Index to Nursing and Allied Health Literature, Scopus, the Cochrane Library, and additional databases spanning nursing, medicine, and engineering for studies published through April 2015.
We included peer‐reviewed, original research studies published in English, Spanish, or French that addressed the questions outlined above. Eligible patient populations were children and adults admitted to hospital inpatient units and emergency departments (EDs). We excluded alarms in procedural suites or operating rooms (typically responded to by anesthesiologists already with the patient) because of the differences in environment of care, staff‐to‐patient ratio, and equipment. We included observational studies reporting the actionability of physiologic monitor alarms (ie, alarms warranting special attention or clinical intervention), as well as nurse responses to these alarms. We excluded studies focused on the effects of alarms unrelated to patient safety, such as families' and patients' stress, noise, or sleep disturbance. We included only intervention studies evaluating pragmatic interventions ready for clinical implementation (ie, not experimental devices or software algorithms).
Selection Process and Data Extraction
First, 2 authors screened the titles and abstracts of articles for eligibility. To maximize sensitivity, if at least 1 author considered the article relevant, the article proceeded to full‐text review. Second, the full texts of screened articles were independently reviewed by 2 authors in an unblinded fashion to determine their eligibility. Any disagreements concerning eligibility were resolved by team consensus. To ensure consistency in eligibility determinations across the team, a core group of the authors (C.W.P., C.P.B., E.E., and V.V.G.) held a series of meetings to review and discuss each potentially eligible article and reach consensus on the final list of included articles. Two authors independently extracted the following characteristics from included studies: alarm review methods, analytic design, fidelity measurement, consideration of unintended adverse safety consequences, and key results. Reviewers were not blinded to journal, authors, or affiliations.
Synthesis of Results and Risk Assessment
Given the high degree of heterogeneity in methodology, we were unable to generate summary proportions of the observational studies or perform a meta‐analysis of the intervention studies. Thus, we organized the studies into clinically relevant categories and presented key aspects in tables. Due to the heterogeneity of the studies and the controversy surrounding quality scores,[5] we did not generate summary scores of study quality. Instead, we evaluated and reported key design elements that had the potential to bias the results. To recognize the more comprehensive studies in the field, we developed by consensus a set of characteristics that distinguished studies with lower risk of bias. These characteristics are shown and defined in Table 1.
First Author and Publication Year | Alarm Review Method | Indicators of Potential Bias for Observational Studies | Indicators of Potential Bias for Intervention Studies | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Monitor System | Direct Observation | Medical Record Review | Rhythm Annotation | Video Observation | Remote Monitoring Staff | Medical Device Industry Involved | Two Independent Reviewers | At Least 1 Reviewer Is a Clinical Expert | Reviewer Not Simultaneously in Patient Care | Clear Definition of Alarm Actionability | Census Included | Statistical Testing or QI SPC Methods | Fidelity Assessed | Safety Assessed | Lower Risk of Bias | |
| ||||||||||||||||
Adult Observational | ||||||||||||||||
Atzema 2006[7] | ✓* | ✓ | ✓ | |||||||||||||
Billinghurst 2003[8] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Biot 2000[9] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Chambrin 1999[10] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Drew 2014[11] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||
Gazarian 2014[12] | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||
Görges 2009[13] | ✓ | ✓ | ✓ | ✓ | 
Gross 2011[15] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Inokuchi 2013[14] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Koski 1990[16] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Morales Sánchez 2014[17] | ✓ | ✓ | ✓ | ✓ | 
Pergher 2014[18] | ✓ | ✓ | ||||||||||||||
Siebig 2010[19] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
Voepel‐Lewis 2013[20] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Way 2014[21] | ✓ | ✓ | ✓ | |||||||||||||
Pediatric Observational | ||||||||||||||||
Bonafide 2015[22] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Lawless 1994[23] | ✓ | ✓ | ||||||||||||||
Rosman 2013[24] | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||
Talley 2011[25] | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||
Tsien 1997[26] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
van Pul 2015[27] | ✓ | |||||||||||||||
Varpio 2012[28] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Mixed Adult and Pediatric Observational | ||||||||||||||||
O'Carroll 1986[29] | ✓ | |||||||||||||||
Wiklund 1994[30] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Adult Intervention | ||||||||||||||||
Albert 2015[32] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Cvach 2013[33] | ✓ | ✓ | ||||||||||||||
Cvach 2014[34] | ✓ | ✓ | ||||||||||||||
Graham 2010[35] | ✓ | |||||||||||||||
Rheineck‐Leyssius 1997[36] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
Taenzer 2010[31] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
Whalen 2014[37] | ✓ | ✓ | ✓ | |||||||||||||
Pediatric Intervention | ||||||||||||||||
Dandoy 2014[38] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
For the purposes of this review, we defined nonactionable alarms as including both invalid (false) alarms that do not accurately represent the physiologic status of the patient and alarms that are valid but do not warrant special attention or clinical intervention (nuisance alarms). We did not separate out invalid alarms due to the tremendous variation between studies in how validity was measured.
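This taxonomy can be summarized in a short sketch. The function and labels below are illustrative only; the two boolean inputs correspond to the validity and actionability judgments defined above, and nothing here reflects code used by the reviewed studies.

```python
# Illustrative sketch of the actionability taxonomy used in this review.
# Inputs are assumed annotations: is_valid (does the alarm reflect the patient's
# true physiologic state?) and warrants_action (does it need attention/intervention?).
def classify_alarm(is_valid: bool, warrants_action: bool) -> str:
    if not is_valid:
        return "nonactionable: invalid (false) alarm"
    if not warrants_action:
        return "nonactionable: nuisance alarm (valid, no action needed)"
    return "actionable alarm"

print(classify_alarm(is_valid=False, warrants_action=False))
print(classify_alarm(is_valid=True, warrants_action=False))
print(classify_alarm(is_valid=True, warrants_action=True))
```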
RESULTS
Study Selection
The search produced 4,629 articles (see the flow diagram in the Supporting Information in the online version of this article), of which 32 articles were eligible: 24 observational studies describing alarm characteristics and 8 studies describing interventions to reduce alarm frequency.
Observational Study Characteristics
Characteristics of included studies are shown in Table 1. Of the 24 observational studies,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] 15 included adult patients,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] 7 included pediatric patients,[22, 23, 24, 25, 26, 27, 28] and 2 included both adult and pediatric patients.[29, 30] All were single‐hospital studies, except for 1 study by Chambrin and colleagues[10] that included 5 sites. The number of patient‐hours examined in each study ranged from 60 to 113,880.[7, 8, 9, 10, 11, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30] Hospital settings included ICUs (n = 16),[9, 10, 11, 13, 14, 16, 17, 18, 19, 22, 23, 24, 25, 26, 27, 29] general wards (n = 5),[12, 15, 20, 22, 28] EDs (n = 2),[7, 21] postanesthesia care unit (PACU) (n = 1),[30] and cardiac care unit (CCU) (n = 1).[8] Studies varied in the type of physiologic signals recorded and data collection methods, ranging from direct observation by a nurse who was simultaneously caring for patients[29] to video recording with expert review.[14, 19, 22] Four observational studies met the criteria for lower risk of bias.[11, 14, 15, 22]
Intervention Study Characteristics
Of the 8 intervention studies, 7 included adult patients,[31, 32, 33, 34, 35, 36, 37] and 1 included pediatric patients.[38] All were single‐hospital studies; 6 were quasi‐experimental[31, 33, 34, 35, 37, 38] and 2 were experimental.[32, 36] Settings included progressive care units (n = 3),[33, 34, 35] CCUs (n = 3),[32, 33, 37] wards (n = 2),[31, 38] PACU (n = 1),[36] and a step‐down unit (n = 1).[32] All except 1 study[32] used the monitoring system to record alarm data. Several studies evaluated multicomponent interventions that included combinations of the following: widening alarm parameters,[31, 35, 36, 37, 38] instituting alarm delays,[31, 34, 36, 38] reconfiguring alarm acuity,[35, 37] use of secondary notifications,[34] daily change of electrocardiographic electrodes or use of disposable electrocardiographic wires,[32, 33, 38] universal monitoring in high‐risk populations,[31] and timely discontinuation of monitoring in low‐risk populations.[38] Four intervention studies met our prespecified lower risk of bias criteria.[31, 32, 36, 38]
Proportion of Alarms Considered Actionable
Results of the observational studies are provided in Table 2. The proportion of alarms that were actionable was <1% to 26% in adult ICU settings,[9, 10, 11, 13, 14, 16, 17, 19] 20% to 36% in adult ward settings,[12, 15, 20] 17% in a mixed adult and pediatric PACU setting,[30] 3% to 13% in pediatric ICU settings,[22, 23, 24, 25, 26] and 1% in a pediatric ward setting.[22]
Signals Included | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|
First Author and Publication Year | Setting | Monitored Patient‐Hours | SpO2 | ECG Arrhythmia | ECG Parameters | Blood Pressure | Total Alarms | Actionable Alarms | Alarm Response | Lower Risk of Bias
| ||||||||||
Adult | ||||||||||
Atzema 2006[7] | ED | 371 | ✓ | 1,762 | 0.20% | |||||
Billinghurst 2003[8] | CCU | 420 | ✓ | 751 | Not reported; 17% were valid | Nurses with higher acuity patients and smaller % of valid alarms had slower response rates | ||||
Biot 2000[9] | ICU | 250 | ✓ | ✓ | ✓ | ✓ | 3,665 | 3% | ||
Chambrin 1999[10] | ICU | 1,971 | ✓ | ✓ | ✓ | ✓ | 3,188 | 26% | ||
Drew 2014[11] | ICU | 48,173 | ✓ | ✓ | ✓ | ✓ | 2,558,760 | 0.3% of 3,861 VT alarms | ✓ | |
Gazarian 2014[12] | Ward | 54 nurse‐hours | ✓ | ✓ | ✓ | 205 | 22% | Response to 47% of alarms | ||
Görges 2009[13] | ICU | 200 | ✓ | ✓ | ✓ | ✓ | 1,214 | 5% | ||
Gross 2011[15] | Ward | 530 | ✓ | ✓ | ✓ | ✓ | 4,393 | 20% | ✓ | |
Inokuchi 2013[14] | ICU | 2,697 | ✓ | ✓ | ✓ | ✓ | 11,591 | 6% | ✓ | |
Koski 1990[16] | ICU | 400 | ✓ | ✓ | 2,322 | 12% | ||||
Morales Sánchez 2014[17] | ICU | 434 sessions | ✓ | ✓ | ✓ | 215 | 25% | Response to 93% of alarms, of which 50% were within 10 seconds | ||
Pergher 2014[18] | ICU | 60 | ✓ | 76 | Not reported | 72% of alarms stopped before nurse response or had >10 minutes response time | ||||
Siebig 2010[19] | ICU | 982 | ✓ | ✓ | ✓ | ✓ | 5,934 | 15% | ||
Voepel‐Lewis 2013[20] | Ward | 1,616 | ✓ | 710 | 36% | Response time was longer for patients in highest quartile of total alarms | ||||
Way 2014[21] | ED | 93 | ✓ | ✓ | ✓ | ✓ | 572 | Not reported; 75% were valid | Nurses responded to more alarms in resuscitation room vs acute care area, but response time was longer | |
Pediatric | ||||||||||
Bonafide 2015[22] | Ward + ICU | 210 | ✓ | ✓ | ✓ | ✓ | 5,070 | 13% PICU, 1% ward | Incremental increases in response time as number of nonactionable alarms in preceding 120 minutes increased | ✓ |
Lawless 1994[23] | ICU | 928 | ✓ | ✓ | ✓ | 2,176 | 6% | |||
Rosman 2013[24] | ICU | 8,232 | ✓ | ✓ | ✓ | ✓ | 54,656 | 4% of rhythm alarms were "true critical" | ||
Talley 2011[25] | ICU | 1,470∥ | ✓ | ✓ | ✓ | ✓ | 2,245 | 3% | ||
Tsien 1997[26] | ICU | 298 | ✓ | ✓ | ✓ | 2,942 | 8% | |||
van Pul 2015[27] | ICU | 113,880∥ | ✓ | ✓ | ✓ | ✓ | 222,751 | Not reported | Assigned nurse did not respond to 6% of alarms within 45 seconds | |
Varpio 2012[28] | Ward | 49 unit‐hours | ✓ | ✓ | ✓ | ✓ | 446 | Not reported | 70% of all alarms and 41% of crisis alarms were not responded to within 1 minute | |
Both | ||||||||||
O'Carroll 1986[29] | ICU | 2,258∥ | ✓ | 284 | 2% | |||||
Wiklund 1994[30] | PACU | 207 | ✓ | ✓ | ✓ | 1,891 | 17% |
Relationship Between Alarm Exposure and Response Time
Although 9 studies addressed response time,[8, 12, 17, 18, 20, 21, 22, 27, 28] only 2 evaluated the relationship between alarm burden and nurse response time.[20, 22] Voepel‐Lewis and colleagues found that, on an adult ward, nurses responded more slowly to patients in the highest quartile of alarm frequency (57.6 seconds) than to those in the lowest (45.4 seconds) or middle (42.3 seconds) quartiles (P = 0.046). They did not find an association between false alarm exposure and response time.[20] Bonafide and colleagues found incremental increases in response time as the number of nonactionable alarms in the preceding 120 minutes increased (P < 0.001 in the pediatric ICU, P = 0.009 on the pediatric ward).[22]
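To make the exposure metric in the second study concrete, the minimal sketch below counts, for each alarm, the nonactionable alarms in the preceding 120 minutes and summarizes response times by that count. The example data, variable names, and window handling are assumptions for illustration; this is not the published analysis code.

```python
# Sketch: relate each alarm's response time to the number of nonactionable
# alarms in the preceding 120 minutes (hypothetical single-patient data).
from datetime import datetime, timedelta
from statistics import median

# (alarm time, actionable?, response time in seconds) -- entirely hypothetical
alarms = [
    (datetime(2015, 4, 1, 8, 0), False, 40),
    (datetime(2015, 4, 1, 8, 20), False, 45),
    (datetime(2015, 4, 1, 9, 0), True, 50),
    (datetime(2015, 4, 1, 9, 30), False, 70),
    (datetime(2015, 4, 1, 10, 45), True, 90),
]

WINDOW = timedelta(minutes=120)

def exposure(idx):
    """Count nonactionable alarms in the 120 minutes before alarm idx."""
    t = alarms[idx][0]
    return sum(1 for when, actionable, _ in alarms[:idx]
               if not actionable and t - WINDOW <= when < t)

by_exposure = {}
for i, (_, _, response_s) in enumerate(alarms):
    by_exposure.setdefault(exposure(i), []).append(response_s)

for n, times in sorted(by_exposure.items()):
    print(f"{n} nonactionable alarms in prior 120 min -> median response {median(times)} s")
```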
Interventions Effective in Reducing Alarms
Results of the 8 intervention studies are provided in Table 3. Three studies evaluated single interventions;[32, 33, 36] the remainder of the studies tested interventions with multiple components such that it was impossible to separate the effect of each component. Below, we have summarized study results, arranged by component. Because only 1 study focused on pediatric patients,[38] results from pediatric and adult settings are combined.
First Author and Publication Year | Design | Setting | Main Intervention Components | Other/ Comments | Key Results | Results Statistically Significant? | Lower Risk of Bias | ||||
---|---|---|---|---|---|---|---|---|---|---|---|
Widen Default Settings | Alarm Delays | Reconfigure Alarm Acuity | Secondary Notification | ECG Changes | |||||||
| |||||||||||
Adult | |||||||||||
Albert 2015[32] | Experimental (cluster‐randomized) | CCU | ✓ | Disposable vs reusable wires | Disposable leads had 29% fewer no‐telemetry, leads‐fail, and leads‐off alarms and similar artifact alarms | ✓ | ✓ | ||||
Cvach 2013[33] | Quasi‐experimental (before and after) | CCU and PCU | ✓ | Daily change of electrodes | 46% fewer alarms/bed/day | ||||||
Cvach 2014[34] | Quasi‐experimental (ITS) | PCU | ✓* | ✓ | Slope of regression line suggests decrease of 0.75 alarms/bed/day | ||||||
Graham 2010[35] | Quasi‐experimental (before and after) | PCU | ✓ | ✓ | 43% fewer crisis, warning, and system warning alarms on unit | ||||||
Rheineck‐Leyssius 1997[36] | Experimental (RCT) | PACU | ✓ | ✓ | Alarm limit of 85% had fewer alarms/patient but higher incidence of true hypoxemia for >1 minute (6% vs 2%) | ✓ | ✓ | ||||
Taenzer 2010[31] | Quasi‐experimental (before and after with concurrent controls) | Ward | ✓ | ✓ | Universal SpO2 monitoring | Rescue events decreased from 3.4 to 1.2 per 1,000 discharges; transfers to ICU decreased from 5.6 to 2.9 per 1,000 patient‐days, only 4 alarms/patient‐day | ✓ | ✓ | |||
Whalen 2014[37] | Quasi‐experimental (before and after) | CCU | ✓ | ✓ | 89% fewer audible alarms on unit | ✓ | |||||
Pediatric | |||||||||||
Dandoy 2014[38] | Quasi‐experimental (ITS) | Ward | ✓ | ✓ | ✓ | Timely monitor discontinuation; daily change of ECG electrodes | Decrease in alarms/patient‐day from 180 to 40 | ✓
Widening alarm parameter default settings was evaluated in 5 studies:[31, 35, 36, 37, 38] 1 single‐intervention randomized controlled trial (RCT)[36] and 4 multiple‐intervention, quasi‐experimental studies.[31, 35, 37, 38] In the RCT, using a lower SpO2 limit of 85% instead of the standard 90% resulted in 61% fewer alarms. In the 4 multiple‐intervention studies, 1 study reported significant reductions in alarm rates (P < 0.001),[37] 1 study did not report preintervention alarm rates but reported a postintervention alarm rate of 4 alarms per patient‐day,[31] and 2 studies reported reductions in alarm rates but did not report any statistical testing.[35, 38] Of the 3 studies examining patient safety, 1 study with universal monitoring reported fewer rescue events and transfers to the ICU postimplementation,[31] 1 study reported no missed acute decompensations,[38] and 1 study (the RCT) reported significantly more true hypoxemia events (P = 0.001).[36]
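A minimal sketch of the mechanism, using a simulated SpO2 trace and a simplified one‐alarm‐per‐excursion counting rule (both assumptions for illustration, not data or logic from the cited studies), shows why lowering the default limit from 90% to 85% reduces alarm counts while leaving milder dips unannounced.

```python
# Sketch: count SpO2 low-limit alarms under two default limits on a simulated trace.
spo2 = [97, 96, 89, 88, 93, 95, 87, 84, 83, 90, 96, 88, 95]  # hypothetical readings

def count_alarms(trace, low_limit):
    """Count distinct excursions below the limit: a new alarm fires when the
    value drops below the limit after having been at or above it."""
    alarms, below = 0, False
    for value in trace:
        if value < low_limit and not below:
            alarms += 1
        below = value < low_limit
    return alarms

print("alarms with 90% limit:", count_alarms(spo2, 90))  # 3 alarms
print("alarms with 85% limit:", count_alarms(spo2, 85))  # 1 alarm; dips to 87-89% are silent
```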
Alarm delays were evaluated in 4 studies:[31, 34, 36, 38] 3 multiple‐intervention, quasi‐experimental studies[31, 34, 38] and 1 retrospective analysis of data from an RCT.[36] One study combined alarm delays with widening defaults in a universal monitoring strategy and reported a postintervention alarm rate of 4 alarms per patient.[31] Another study evaluated delays as part of a secondary notification pager system and found a negatively sloping regression line that suggested a decreasing alarm rate, but did not report statistical testing.[34] The third study reported a reduction in alarm rates but did not report statistical testing.[38] The RCT compared the impact of a hypothetical 15‐second alarm delay to that of a lower SpO2 limit reduction and reported a similar reduction in alarms.[36] Of the 4 studies examining patient safety, 1 study with universal monitoring reported improvements,[31] 2 studies reported no adverse outcomes,[35, 38] and the retrospective analysis of data from the RCT reported the theoretical adverse outcome of delayed detection of sudden, severe desaturations.[36]
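The following sketch illustrates the general alarm‐delay mechanism under assumed parameters (a 5‐second sampling interval, a 90% SpO2 limit, and a 3‐sample delay approximating 15 seconds); it is not the logic of any cited monitoring system. It shows the trade‐off noted above: brief, self‐resolving dips never alarm, but detection of a sustained desaturation is postponed by the delay.

```python
# Sketch: a delayed alarm fires only if the value stays below the limit for the
# whole delay period (delay_samples consecutive readings).
def delayed_alarms(trace, low_limit, delay_samples):
    """Return the indices at which a delayed alarm would fire."""
    fired, run = [], 0
    for i, value in enumerate(trace):
        run = run + 1 if value < low_limit else 0
        if run == delay_samples:  # condition has persisted for the full delay
            fired.append(i)
    return fired

spo2 = [96, 88, 95, 87, 86, 85, 84, 93, 96]  # one assumed reading every 5 seconds
print(delayed_alarms(spo2, 90, 1))  # no delay: alarms at indices [1, 3]
print(delayed_alarms(spo2, 90, 3))  # ~15-second delay: only index [5], the sustained dip
```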
Reconfiguring alarm acuity was evaluated in 2 studies, both of which were multiple‐intervention quasi‐experimental studies.[35, 37] Both showed reductions in alarm rates: 1 was significant without increasing adverse events (P < 0.001),[37] and the other did not report statistical testing or safety outcomes.[35]
Secondary notification of nurses using pagers was the main intervention component of 1 study incorporating delays between the alarms and the alarm pages.[34] As mentioned above, a negatively sloping regression line was displayed, but no statistical testing or safety outcomes were reported.
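As a rough illustration of delayed secondary notification, the sketch below escalates an alarm that remains active through hypothetical pager tiers. The tiers, delay values, and recipients are assumptions for illustration and do not reflect the configuration used in the cited study.

```python
# Sketch: an alarm that stays active is forwarded to successive pager tiers
# after configurable delays; alarms that resolve quickly generate no pages.
from dataclasses import dataclass

@dataclass
class EscalationTier:
    recipient: str
    delay_s: int  # seconds after the alarm starts

TIERS = [EscalationTier("assigned nurse pager", 15),
         EscalationTier("charge nurse pager", 60)]

def pages_sent(alarm_duration_s):
    """Return the recipients paged for an alarm active this many seconds."""
    return [t.recipient for t in TIERS if alarm_duration_s >= t.delay_s]

print(pages_sent(10))   # resolves quickly -> []
print(pages_sent(30))   # -> ['assigned nurse pager']
print(pages_sent(90))   # -> both tiers paged
```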
Disposable electrocardiographic lead wires or daily electrode changes were evaluated in 3 studies:[32, 33, 38] 1 single‐intervention cluster‐randomized trial[32] and 2 quasi‐experimental studies.[33, 38] In the cluster‐randomized trial, disposable lead wires were compared to reusable lead wires, with disposable lead wires having significantly fewer technical alarms for lead signal failures (P = 0.03) but a similar number of monitoring artifact alarms (P = 0.44).[32] In a single‐intervention, quasi‐experimental study, daily electrode change showed a reduction in alarms, but no statistical testing was reported.[33] One multiple‐intervention, quasi‐experimental study incorporating daily electrode change showed fewer alarms without statistical testing.[38] Both of the studies examining patient safety reported no adverse outcomes.[32, 38]
DISCUSSION
This systematic review of physiologic monitor alarms in the hospital yielded the following main findings: (1) between 74% and 99% of physiologic monitor alarms were not actionable, (2) a significant relationship between alarm exposure and nurse response time was demonstrated in 2 small observational studies, and (3) although interventions were most often studied in combination, results from the studies with lower risk of bias suggest that widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and/or changing electrodes daily are the most promising interventions for reducing alarms. Only 5 of the 8 intervention studies measured intervention safety; in those studies, widening alarm parameters and implementing alarm delays had mixed safety outcomes, whereas disposable electrocardiographic lead wires and daily electrode changes had no adverse safety outcomes.[29, 30, 34, 35, 36] Measuring safety is essential: an intervention that reduces alarms is of little value if it also prevents clinicians from detecting actionable alarms. The variation in results across studies likely reflects the wide range of care settings as well as differences in study design and quality.
This field is still in its infancy, with 18 of the 32 articles published in the past 5 years. We anticipate improvements in quality and rigor as the field matures, as well as clinically tested interventions that incorporate smart alarms. Smart alarms integrate data from multiple physiologic signals and the patient's history to better detect physiologic changes in the patient and improve the positive predictive value of alarms. Academic-industry partnerships will be required to implement and rigorously test smart alarms and other emerging technologies in the hospital.
To our knowledge, this is the first systematic review focused on monitor alarms with specific review questions relevant to alarm fatigue. Cvach recently published an integrative review of alarm fatigue using research published through 2011.[39] Our review builds on her work by using a more extensive and systematic search strategy spanning databases in nursing, medicine, and engineering; by including studies in additional languages; and by including newer studies published through April 2015. In addition, we included multiple cross‐team checks in our eligibility review to ensure high sensitivity and specificity of the resulting set of studies.
Although we focused on interventions aiming to reduce alarms, there has also been important recent work focused on reducing telemetry utilization in adult hospital populations as well as work focused on reducing pulse oximetry utilization in children admitted with respiratory conditions. Dressler and colleagues reported an immediate and sustained reduction in telemetry utilization in hospitalized adults upon redesign of cardiac telemetry order sets to include the clinical indication, which defaulted to the American Heart Association guideline‐recommended telemetry duration.[40] Instructions for bedside nurses were also included in the order set to facilitate appropriate telemetry discontinuation. Schondelmeyer and colleagues reported reductions in continuous pulse oximetry utilization in hospitalized children with asthma and bronchiolitis upon introduction of a multifaceted quality improvement program that included provider education, a nurse handoff checklist, and discontinuation criteria incorporated into order sets.[41]
Limitations of This Review and the Underlying Body of Work
There are limitations to this systematic review and its underlying body of work. With respect to our approach to this systematic review, we focused only on monitor alarms. Numerous other medical devices generate alarms in the patient‐care environment that also can contribute to alarm fatigue and deserve equally rigorous evaluation. With respect to the underlying body of work, the quality of individual studies was generally low. For example, determinations of alarm actionability were often made by a single rater without evaluation of the reliability or validity of these determinations, and statistical testing was often missing. There were also limitations specific to intervention studies, including evaluation of nongeneralizable patient populations, failure to measure the fidelity of the interventions, inadequate measures of intervention safety, and failure to statistically evaluate alarm reductions. Finally, though not necessarily a limitation, several studies were conducted by authors involved in or funded by the medical device industry.[11, 15, 19, 31, 32] This has the potential to introduce bias, although we have no indication that the quality of the science was adversely impacted.
Moving forward, the research agenda for physiologic monitor alarms should include the following: (1) more intensive focus on evaluating the relationship between alarm exposure and response time with analysis of important mediating factors that may promote or prevent alarm fatigue, (2) emphasis on studying interventions aimed at improving alarm management using rigorous designs such as cluster‐randomized trials and trials randomized by individual participant, (3) monitoring and reporting clinically meaningful balancing measures that represent unintended consequences of disabling or delaying potentially important alarms and possibly reducing the clinicians' ability to detect true patient deterioration and intervene in a timely manner, and (4) support for transparent academic-industry partnerships to evaluate new alarm technology in real‐world settings. As evidence‐based interventions emerge, there will be new opportunities to study different implementation strategies of these interventions to optimize effectiveness.
CONCLUSIONS
The body of literature relevant to physiologic monitor alarm characteristics and alarm fatigue is limited but growing rapidly. Although we know that most alarms are not actionable and that there appears to be a relationship between alarm exposure and response time that could be caused by alarm fatigue, we cannot yet say with certainty that we know which interventions are most effective in safely reducing unnecessary alarms. Interventions that appear most promising and should be prioritized for intensive evaluation include widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and changing electrodes daily. Careful evaluation of these interventions must include systematically examining adverse patient safety consequences.
Acknowledgements
The authors thank Amogh Karnik and Micheal Sellars for their technical assistance during the review and extraction process.
Disclosures: Ms. Zander is supported by the Society of Hospital Medicine Student Hospitalist Scholar Grant. Dr. Bonafide and Ms. Stemler are supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.
References

1. National Patient Safety Goals Effective January 1, 2015. The Joint Commission Web site. http://www.jointcommission.org/assets/1/6/2015_NPSG_HAP.pdf. Accessed July 17, 2015.
2. ECRI Institute. 2015 Top 10 Health Technology Hazards. Available at: https://www.ecri.org/Pages/2015-Hazards.aspx. Accessed June 23, 2015.
3. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378–386.
4. Redesigning hospital alarms for patient safety: alarmed and potentially dangerous. JAMA. 2014;311(12):1199–1200.
5. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) Group. JAMA. 2000;283(15):2008–2012.
6. PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–269, W64.
7. ALARMED: adverse events in low-risk patients with chest pain receiving continuous electrocardiographic monitoring in the emergency department. A pilot study. Am J Emerg Med. 2006;24:62–67.
8. Patient and nurse-related implications of remote cardiac telemetry. Clin Nurs Res. 2003;12(4):356–370.
9. Clinical evaluation of alarm efficiency in intensive care [in French]. Ann Fr Anesth Reanim. 2000;19:459–466.
10. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25:1360–1366.
11. Insights into the problem of alarm fatigue with physiologic monitor devices: a comprehensive observational study of consecutive intensive care unit patients. PLoS One. 2014;9(10):e110274.
12. Nurses' response to frequency and types of electrocardiography alarms in a non-critical care setting: a descriptive study. Int J Nurs Stud. 2014;51(2):190–197.
13. Improving alarm performance in the medical intensive care unit using delays and clinical context. Anesth Analg. 2009;108:1546–1552.
14. The proportion of clinically relevant alarms decreases as patient clinical severity decreases in intensive care units: a pilot study. BMJ Open. 2013;3(9):e003354.
15. Physiologic monitoring alarm load on medical/surgical floors of a community hospital. Biomed Instrum Technol. 2011;45:29–36.
16. Frequency and reliability of alarms in the monitoring of cardiac postoperative patients. Int J Clin Monit Comput. 1990;7(2):129–133.
17. Audit of the bedside monitor alarms in a critical care unit [in Spanish]. Enferm Intensiva. 2014;25(3):83–90.
18. Stimulus-response time to invasive blood pressure alarms: implications for the safety of critical-care patients. Rev Gaúcha Enferm. 2014;35(2):135–141.
19. Intensive care unit alarms—how many do we need? Crit Care Med. 2010;38:451–456.
20. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351–1358.
21. What's that noise? Bedside monitoring in the Emergency Department. Int Emerg Nurs. 2014;22(4):197–201.
22. Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital. J Hosp Med. 2015;10(6):345–351.
23. Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med. 1994;22(6):981–985.
24. What are we missing? Arrhythmia detection in the pediatric intensive care unit. J Pediatr. 2013;163(2):511–514.
25. Cardiopulmonary monitors and clinically significant events in critically ill children. Biomed Instrum Technol. 2011;45(s1):38–45.
26. Poor prognosis for existing monitors in the intensive care unit. Crit Care Med. 1997;25:614–619.
27. Safe patient monitoring is challenging but still feasible in a neonatal intensive care unit with single family rooms. Acta Paediatr. 2015;104(6):e247–e254.
28. The helpful or hindering effects of in-hospital patient monitor alarms on nurses: a qualitative analysis. CIN Comput Inform Nurs. 2012;30(4):210–217.
29. Survey of alarms in an intensive therapy unit. Anaesthesia. 1986;41(7):742–744.
30. Postanesthesia monitoring revisited: frequency of true and false alarms from different monitoring devices. J Clin Anesth. 1994;6(3):182–188.
31. Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before-and-after concurrence study. Anesthesiology. 2010;112(2):282–287.
32. Differences in alarm events between disposable and reusable electrocardiography lead wires. Am J Crit Care. 2015;24(1):67–74.
33. Daily electrode change and effect on cardiac monitor alarms: an evidence-based practice approach. J Nurs Care Qual. 2013;28:265–271.
34. Use of pagers with an alarm escalation system to reduce cardiac monitor alarm signals. J Nurs Care Qual. 2014;29(1):9–18.
35. Monitor alarm fatigue: standardizing use of physiological monitors and decreasing nuisance alarms. Am J Crit Care. 2010;19:28–34.
36. Influence of pulse oximeter lower alarm limit on the incidence of hypoxaemia in the recovery room. Br J Anaesth. 1997;79(4):460–464.
37. Novel approach to cardiac alarm management on telemetry units. J Cardiovasc Nurs. 2014;29(5):E13–E22.
38. A team-based approach to reducing cardiac monitor alarms. Pediatrics. 2014;134(6):e1686–e1694.
39. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268–277.
40. Altering overuse of cardiac telemetry in non-intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174(11):1852–1854.
41. Using quality improvement to reduce continuous pulse oximetry use in children with wheezing. Pediatrics. 2015;135(4):e1044–e1051.
42. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–384.
Clinical alarm safety has become a recent target for improvement in many hospitals. In 2013, The Joint Commission released a National Patient Safety Goal prompting accredited hospitals to establish alarm safety as a hospital priority, identify the most important alarm signals to manage, and, by 2016, develop policies and procedures that address alarm management.[1] In addition, the Emergency Care Research Institute has named alarm hazards the top health technology hazard each year since 2012.[2]
The primary arguments supporting the elevation of alarm management to a national hospital priority in the United States include the following: (1) clinicians rely on alarms to notify them of important physiologic changes, (2) alarms occur frequently and usually do not warrant clinical intervention, and (3) alarm overload renders clinicians unable to respond to all alarms, resulting in alarm fatigue: responding more slowly or ignoring alarms that may represent actual clinical deterioration.[3, 4] These arguments are built largely on anecdotal data, reported safety event databases, and small studies that have not previously been systematically analyzed.
Despite the national focus on alarms, we still know very little about fundamental questions key to improving alarm safety. In this systematic review, we aimed to answer 3 key questions about physiologic monitor alarms: (1) What proportion of alarms warrant attention or clinical intervention (ie, actionable alarms), and how does this proportion vary between adult and pediatric populations and between intensive care unit (ICU) and ward settings? (2) What is the relationship between alarm exposure and clinician response time? (3) What interventions are effective in reducing the frequency of alarms?
We limited our scope to monitor alarms because few studies have evaluated the characteristics of alarms from other medical devices, and because missing relevant monitor alarms could adversely impact patient safety.
METHODS
We performed a systematic review of the literature in accordance with the Meta‐Analysis of Observational Studies in Epidemiology guidelines[5] and developed this manuscript using the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement.[6]
Eligibility Criteria
With help from an experienced biomedical librarian (C.D.S.), we searched PubMed, the Cumulative Index to Nursing and Allied Health Literature, Scopus, Cochrane Library,
We included peer‐reviewed, original research studies published in English, Spanish, or French that addressed the questions outlined above. Eligible patient populations were children and adults admitted to hospital inpatient units and emergency departments (EDs). We excluded alarms in procedural suites or operating rooms (typically responded to by anesthesiologists already with the patient) because of the differences in environment of care, staff‐to‐patient ratio, and equipment. We included observational studies reporting the actionability of physiologic monitor alarms (ie, alarms warranting special attention or clinical intervention), as well as nurse responses to these alarms. We excluded studies focused on the effects of alarms unrelated to patient safety, such as families' and patients' stress, noise, or sleep disturbance. We included only intervention studies evaluating pragmatic interventions ready for clinical implementation (ie, not experimental devices or software algorithms).
Selection Process and Data Extraction
First, 2 authors screened the titles and abstracts of articles for eligibility. To maximize sensitivity, if at least 1 author considered the article relevant, the article proceeded to full‐text review. Second, the full texts of articles screened were independently reviewed by 2 authors in an unblinded fashion to determine their eligibility. Any disagreements concerning eligibility were resolved by team consensus. To assure consistency in eligibility determinations across the team, a core group of the authors (C.W.P, C.P.B., E.E., and V.V.G.) held a series of meetings to review and discuss each potentially eligible article and reach consensus on the final list of included articles. Two authors independently extracted the following characteristics from included studies: alarm review methods, analytic design, fidelity measurement, consideration of unintended adverse safety consequences, and key results. Reviewers were not blinded to journal, authors, or affiliations.
Synthesis of Results and Risk Assessment
Given the high degree of heterogeneity in methodology, we were unable to generate summary proportions of the observational studies or perform a meta‐analysis of the intervention studies. Thus, we organized the studies into clinically relevant categories and presented key aspects in tables. Due to the heterogeneity of the studies and the controversy surrounding quality scores,[5] we did not generate summary scores of study quality. Instead, we evaluated and reported key design elements that had the potential to bias the results. To recognize the more comprehensive studies in the field, we developed by consensus a set of characteristics that distinguished studies with lower risk of bias. These characteristics are shown and defined in Table 1.
Table 1. Characteristics of included studies: alarm review methods and indicators of potential bias for observational and intervention studies.

First Author and Publication Year | Monitor System | Direct Observation | Medical Record Review | Rhythm Annotation | Video Observation | Remote Monitoring Staff | Medical Device Industry Involved | Two Independent Reviewers | At Least 1 Reviewer Is a Clinical Expert | Reviewer Not Simultaneously in Patient Care | Clear Definition of Alarm Actionability | Census Included | Statistical Testing or QI SPC Methods | Fidelity Assessed | Safety Assessed | Lower Risk of Bias
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Adult Observational | ||||||||||||||||
Atzema 2006[7] | ✓* | ✓ | ✓ | |||||||||||||
Billinghurst 2003[8] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Biot 2000[9] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Chambrin 1999[10] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Drew 2014[11] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||
Gazarian 2014[12] | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||
Görges 2009[13] | ✓ | ✓ | ✓ | ✓ | | | | | | | | | | | | |
Gross 2011[15] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Inokuchi 2013[14] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Koski 1990[16] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Morales Sánchez 2014[17] | ✓ | ✓ | ✓ | ✓ | | | | | | | | | | | | |
Pergher 2014[18] | ✓ | ✓ | ||||||||||||||
Siebig 2010[19] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
Voepel‐Lewis 2013[20] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Way 2014[21] | ✓ | ✓ | ✓ | |||||||||||||
Pediatric Observational | ||||||||||||||||
Bonafide 2015[22] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Lawless 1994[23] | ✓ | ✓ | ||||||||||||||
Rosman 2013[24] | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||
Talley 2011[25] | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||
Tsien 1997[26] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
van Pul 2015[27] | ✓ | |||||||||||||||
Varpio 2012[28] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Mixed Adult and Pediatric Observational | ||||||||||||||||
O'Carroll 1986[29] | ✓ | |||||||||||||||
Wiklund 1994[30] | ✓ | ✓ | ✓ | ✓ | ||||||||||||
Adult Intervention | ||||||||||||||||
Albert 2015[32] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||
Cvach 2013[33] | ✓ | ✓ | ||||||||||||||
Cvach 2014[34] | ✓ | ✓ | ||||||||||||||
Graham 2010[35] | ✓ | |||||||||||||||
Rheineck‐Leyssius 1997[36] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
Taenzer 2010[31] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
Whalen 2014[37] | ✓ | ✓ | ✓ | |||||||||||||
Pediatric Intervention | ||||||||||||||||
Dandoy 2014[38] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
For the purposes of this review, we defined nonactionable alarms as including both invalid (false) alarms that do not accurately represent the physiologic status of the patient and alarms that are valid but do not warrant special attention or clinical intervention (nuisance alarms). We did not separate out invalid alarms due to the tremendous variation between studies in how validity was measured.
RESULTS
Study Selection
Search results produced 4629 articles (see the flow diagram in the Supporting Information in the online version of this article), of which 32 articles were eligible: 24 observational studies describing alarm characteristics and 8 studies describing interventions to reduce alarm frequency.
Observational Study Characteristics
Characteristics of included studies are shown in Table 1. Of the 24 observational studies,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] 15 included adult patients,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] 7 included pediatric patients,[22, 23, 24, 25, 26, 27, 28] and 2 included both adult and pediatric patients.[29, 30] All were single‐hospital studies, except for 1 study by Chambrin and colleagues[10] that included 5 sites. The number of patient‐hours examined in each study ranged from 60 to 113,880.[7, 8, 9, 10, 11, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30] Hospital settings included ICUs (n = 16),[9, 10, 11, 13, 14, 16, 17, 18, 19, 22, 23, 24, 25, 26, 27, 29] general wards (n = 5),[12, 15, 20, 22, 28] EDs (n = 2),[7, 21] postanesthesia care unit (PACU) (n = 1),[30] and cardiac care unit (CCU) (n = 1).[8] Studies varied in the type of physiologic signals recorded and data collection methods, ranging from direct observation by a nurse who was simultaneously caring for patients[29] to video recording with expert review.[14, 19, 22] Four observational studies met the criteria for lower risk of bias.[11, 14, 15, 22]
Intervention Study Characteristics
Of the 8 intervention studies, 7 included adult patients,[31, 32, 33, 34, 35, 36, 37] and 1 included pediatric patients.[38] All were single‐hospital studies; 6 were quasi‐experimental[31, 33, 34, 35, 37, 38] and 2 were experimental.[32, 36] Settings included progressive care units (n = 3),[33, 34, 35] CCUs (n = 3),[32, 33, 37] wards (n = 2),[31, 38] PACU (n = 1),[36] and a step‐down unit (n = 1).[32] All except 1 study[32] used the monitoring system to record alarm data. Several studies evaluated multicomponent interventions that included combinations of the following: widening alarm parameters,[31, 35, 36, 37, 38] instituting alarm delays,[31, 34, 36, 38] reconfiguring alarm acuity,[35, 37] use of secondary notifications,[34] daily change of electrocardiographic electrodes or use of disposable electrocardiographic wires,[32, 33, 38] universal monitoring in high‐risk populations,[31] and timely discontinuation of monitoring in low‐risk populations.[38] Four intervention studies met our prespecified lower risk of bias criteria.[31, 32, 36, 38]
Proportion of Alarms Considered Actionable
Results of the observational studies are provided in Table 2. The proportion of alarms that were actionable was <1% to 26% in adult ICU settings,[9, 10, 11, 13, 14, 16, 17, 19] 20% to 36% in adult ward settings,[12, 15, 20] 17% in a mixed adult and pediatric PACU setting,[30] 3% to 13% in pediatric ICU settings,[22, 23, 24, 25, 26] and 1% in a pediatric ward setting.[22]
Table 2. Results of observational studies. Checkmarks under SpO2, ECG Arrhythmia, ECG Parameters, and Blood Pressure indicate the physiologic signals included.

First Author and Publication Year | Setting | Monitored Patient‐Hours | SpO2 | ECG Arrhythmia | ECG Parameters | Blood Pressure | Total Alarms | Actionable Alarms | Alarm Response | Lower Risk of Bias
---|---|---|---|---|---|---|---|---|---|---
Adult | ||||||||||
Atzema 2006[7] | ED | 371 | ✓ | 1,762 | 0.20% | |||||
Billinghurst 2003[8] | CCU | 420 | ✓ | 751 | Not reported; 17% were valid | Nurses with higher acuity patients and smaller % of valid alarms had slower response rates | ||||
Biot 2000[9] | ICU | 250 | ✓ | ✓ | ✓ | ✓ | 3,665 | 3% | ||
Chambrin 1999[10] | ICU | 1,971 | ✓ | ✓ | ✓ | ✓ | 3,188 | 26% | ||
Drew 2014[11] | ICU | 48,173 | ✓ | ✓ | ✓ | ✓ | 2,558,760 | 0.3% of 3,861 VT alarms | ✓ | |
Gazarian 2014[12] | Ward | 54 nurse‐hours | ✓ | ✓ | ✓ | 205 | 22% | Response to 47% of alarms | ||
Görges 2009[13] | ICU | 200 | ✓ | ✓ | ✓ | ✓ | 1,214 | 5% | | 
Gross 2011[15] | Ward | 530 | ✓ | ✓ | ✓ | ✓ | 4,393 | 20% | ✓ | |
Inokuchi 2013[14] | ICU | 2,697 | ✓ | ✓ | ✓ | ✓ | 11,591 | 6% | ✓ | |
Koski 1990[16] | ICU | 400 | ✓ | ✓ | 2,322 | 12% | ||||
Morales Sánchez 2014[17] | ICU | 434 sessions | ✓ | ✓ | ✓ | 215 | 25% | Response to 93% of alarms, of which 50% were within 10 seconds | | 
Pergher 2014[18] | ICU | 60 | ✓ | 76 | Not reported | 72% of alarms stopped before nurse response or had >10 minutes response time | ||||
Siebig 2010[19] | ICU | 982 | ✓ | ✓ | ✓ | ✓ | 5,934 | 15% | ||
Voepel‐Lewis 2013[20] | Ward | 1,616 | ✓ | 710 | 36% | Response time was longer for patients in highest quartile of total alarms | ||||
Way 2014[21] | ED | 93 | ✓ | ✓ | ✓ | ✓ | 572 | Not reported; 75% were valid | Nurses responded to more alarms in resuscitation room vs acute care area, but response time was longer | |
Pediatric | ||||||||||
Bonafide 2015[22] | Ward + ICU | 210 | ✓ | ✓ | ✓ | ✓ | 5,070 | 13% PICU, 1% ward | Incremental increases in response time as number of nonactionable alarms in preceding 120 minutes increased | ✓ |
Lawless 1994[23] | ICU | 928 | ✓ | ✓ | ✓ | 2,176 | 6% | |||
Rosman 2013[24] | ICU | 8,232 | ✓ | ✓ | ✓ | ✓ | 54,656 | 4% of rhythm alarms "true critical" | | 
Talley 2011[25] | ICU | 1,470∥ | ✓ | ✓ | ✓ | ✓ | 2,245 | 3% | ||
Tsien 1997[26] | ICU | 298 | ✓ | ✓ | ✓ | 2,942 | 8% | |||
van Pul 2015[27] | ICU | 113,880∥ | ✓ | ✓ | ✓ | ✓ | 222,751 | Not reported | Assigned nurse did not respond to 6% of alarms within 45 seconds | |
Varpio 2012[28] | Ward | 49 unit‐hours | ✓ | ✓ | ✓ | ✓ | 446 | Not reported | 70% of all alarms and 41% of crisis alarms were not responded to within 1 minute | |
Both | ||||||||||
O'Carroll 1986[29] | ICU | 2,258∥ | ✓ | 284 | 2% | |||||
Wiklund 1994[30] | PACU | 207 | ✓ | ✓ | ✓ | 1,891 | 17% |
Relationship Between Alarm Exposure and Response Time
Although 9 studies addressed response time,[8, 12, 17, 18, 20, 21, 22, 27, 28] only 2 evaluated the relationship between alarm burden and nurse response time.[20, 22] Voepel‐Lewis and colleagues found that nurse responses were slower for patients in the highest quartile of alarms (57.6 seconds) than for those in the lowest (45.4 seconds) or middle (42.3 seconds) quartiles of alarms on an adult ward (P = 0.046). They did not find an association between false alarm exposure and response time.[20] Bonafide and colleagues found incremental increases in response time as the number of nonactionable alarms in the preceding 120 minutes increased (P < 0.001 in the pediatric ICU, P = 0.009 on the pediatric ward).[22]
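To make the kind of analysis summarized above concrete, the sketch below groups simulated patients into quartiles of alarm exposure and compares response times across quartiles. It is purely illustrative: the data are synthetic, the column names are invented, and the Kruskal-Wallis test stands in for whatever methods the individual studies actually used.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic data: per-patient alarm exposure and nurse response time (seconds).
df = pd.DataFrame({
    "alarm_count": rng.gamma(shape=9.0, scale=3.5, size=200),
    "response_time_s": rng.gamma(shape=4.0, scale=12.0, size=200),
})

# Bin patients into quartiles of alarm exposure.
df["alarm_quartile"] = pd.qcut(df["alarm_count"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Median response time per quartile, plus a nonparametric test across quartiles.
print(df.groupby("alarm_quartile", observed=True)["response_time_s"].median())
groups = [g.to_numpy() for _, g in df.groupby("alarm_quartile", observed=True)["response_time_s"]]
print(stats.kruskal(*groups))
```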
Interventions Effective in Reducing Alarms
Results of the 8 intervention studies are provided in Table 3. Three studies evaluated single interventions;[32, 33, 36] the remainder of the studies tested interventions with multiple components such that it was impossible to separate the effect of each component. Below, we have summarized study results, arranged by component. Because only 1 study focused on pediatric patients,[38] results from pediatric and adult settings are combined.
Table 3. Results of intervention studies. Checkmarks under Widen Default Settings, Alarm Delays, Reconfigure Alarm Acuity, Secondary Notification, and ECG Changes indicate the main intervention components.

First Author and Publication Year | Design | Setting | Widen Default Settings | Alarm Delays | Reconfigure Alarm Acuity | Secondary Notification | ECG Changes | Other/Comments | Key Results | Results Statistically Significant? | Lower Risk of Bias
---|---|---|---|---|---|---|---|---|---|---|---
Adult | |||||||||||
Albert 2015[32] | Experimental (cluster‐randomized) | CCU | ✓ | Disposable vs reusable wires | Disposable leads had 29% fewer no‐telemetry, leads‐fail, and leads‐off alarms and similar artifact alarms | ✓ | ✓ | ||||
Cvach 2013[33] | Quasi‐experimental (before and after) | CCU and PCU | ✓ | Daily change of electrodes | 46% fewer alarms/bed/day | ||||||
Cvach 2014[34] | Quasi‐experimental (ITS) | PCU | ✓* | ✓ | Slope of regression line suggests decrease of 0.75 alarms/bed/day | ||||||
Graham 2010[35] | Quasi‐experimental (before and after) | PCU | ✓ | ✓ | 43% fewer crisis, warning, and system warning alarms on unit | ||||||
Rheineck‐Leyssius 1997[36] | Experimental (RCT) | PACU | ✓ | ✓ | Alarm limit of 85% had fewer alarms/patient but higher incidence of true hypoxemia for >1 minute (6% vs 2%) | ✓ | ✓ | ||||
Taenzer 2010[31] | Quasi‐experimental (before and after with concurrent controls) | Ward | ✓ | ✓ | Universal SpO2 monitoring | Rescue events decreased from 3.4 to 1.2 per 1,000 discharges; transfers to ICU decreased from 5.6 to 2.9 per 1,000 patient‐days, only 4 alarms/patient‐day | ✓ | ✓ | |||
Whalen 2014[37] | Quasi‐experimental (before and after) | CCU | ✓ | ✓ | 89% fewer audible alarms on unit | ✓ | |||||
Pediatric | |||||||||||
Dandoy 2014[38] | Quasi‐experimental (ITS) | Ward | ✓ | ✓ | ✓ | Timely monitor discontinuation; daily change of ECG electrodes | Decrease in alarms/patient‐days from 180 to 40 | ✓ |
Widening alarm parameter default settings was evaluated in 5 studies:[31, 35, 36, 37, 38] 1 single intervention randomized controlled trial (RCT),[36] and 4 multiple‐intervention, quasi‐experimental studies.[31, 35, 37, 38] In the RCT, using a lower SpO2 limit of 85% instead of the standard 90% resulted in 61% fewer alarms. In the 4 multiple intervention studies, 1 study reported significant reductions in alarm rates (P < 0.001),[37] 1 study did not report preintervention alarm rates but reported a postintervention alarm rate of 4 alarms per patient‐day,[31] and 2 studies reported reductions in alarm rates but did not report any statistical testing.[35, 38] Of the 3 studies examining patient safety, 1 study with universal monitoring reported fewer rescue events and transfers to the ICU postimplementation,[31] 1 study reported no missed acute decompensations,[38] and 1 study (the RCT) reported significantly more true hypoxemia events (P = 0.001).[36]
Alarm delays were evaluated in 4 studies:[31, 34, 36, 38] 3 multiple‐intervention, quasi‐experimental studies[31, 34, 38] and 1 retrospective analysis of data from an RCT.[36] One study combined alarm delays with widening defaults in a universal monitoring strategy and reported a postintervention alarm rate of 4 alarms per patient.[31] Another study evaluated delays as part of a secondary notification pager system and found a negatively sloping regression line that suggested a decreasing alarm rate, but did not report statistical testing.[34] The third study reported a reduction in alarm rates but did not report statistical testing.[38] The RCT compared the impact of a hypothetical 15‐second alarm delay to that of a lower SpO2 limit reduction and reported a similar reduction in alarms.[36] Of the 4 studies examining patient safety, 1 study with universal monitoring reported improvements,[31] 2 studies reported no adverse outcomes,[35, 38] and the retrospective analysis of data from the RCT reported the theoretical adverse outcome of delayed detection of sudden, severe desaturations.[36]
Reconfiguring alarm acuity was evaluated in 2 studies, both of which were multiple‐intervention quasi‐experimental studies.[35, 37] Both showed reductions in alarm rates: 1 was significant without increasing adverse events (P < 0.001),[37] and the other did not report statistical testing or safety outcomes.[35]
Secondary notification of nurses using pagers was the main intervention component of 1 study incorporating delays between the alarms and the alarm pages.[34] As mentioned above, a negatively sloping regression line was displayed, but no statistical testing or safety outcomes were reported.
Disposable electrocardiographic lead wires or daily electrode changes were evaluated in 3 studies:[32, 33, 38] 1 single intervention cluster‐randomized trial[32] and 2 quasi‐experimental studies.[33, 38] In the cluster‐randomized trial, disposable lead wires were compared to reusable lead wires, with disposable lead wires having significantly fewer technical alarms for lead signal failures (P = 0.03) but a similar number of monitoring artifact alarms (P = 0.44).[32] In a single‐intervention, quasi‐experimental study, daily electrode change showed a reduction in alarms, but no statistical testing was reported.[33] One multiple‐intervention, quasi‐experimental study incorporating daily electrode change showed fewer alarms without statistical testing.[38] Of the 2 studies examining patient safety, both reported no adverse outcomes.[32, 38]
DISCUSSION
This systematic review of physiologic monitor alarms in the hospital yielded the following main findings: (1) between 74% and 99% of physiologic monitor alarms were not actionable, (2) a significant relationship between alarm exposure and nurse response time was demonstrated in 2 small observational studies, and (3) although interventions were most often studied in combination, results from the studies with lower risk of bias suggest that widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and/or changing electrodes daily are the most promising interventions for reducing alarms. Only 5 of 8 intervention studies measured intervention safety and found that widening alarm parameters and implementing alarm delays had mixed safety outcomes, whereas disposable electrocardiographic lead wires and daily electrode changes had no adverse safety outcomes.[29, 30, 34, 35, 36] Measuring safety is essential: an intervention that reduces alarm counts is of little value if it does so by suppressing actionable alarms. The variation in results across studies likely reflects the wide range of care settings as well as differences in design and quality.
This field is still in its infancy, with 18 of the 32 articles published in the past 5 years. We anticipate improvements in quality and rigor as the field matures, as well as clinically tested interventions that incorporate smart alarms. Smart alarms integrate data from multiple physiologic signals and the patient's history to better detect physiologic changes in the patient and improve the positive predictive value of alarms. Academic-industry partnerships will be required to implement and rigorously test smart alarms and other emerging technologies in the hospital.
To our knowledge, this is the first systematic review focused on monitor alarms with specific review questions relevant to alarm fatigue. Cvach recently published an integrative review of alarm fatigue using research published through 2011.[39] Our review builds on her work by contributing a more extensive and systematic search strategy spanning databases in nursing, medicine, and engineering; including studies in additional languages; and including newer studies published through April 2015. In addition, we included multiple cross‐team checks in our eligibility review to ensure high sensitivity and specificity of the resulting set of studies.
Although we focused on interventions aiming to reduce alarms, there has also been important recent work focused on reducing telemetry utilization in adult hospital populations as well as work focused on reducing pulse oximetry utilization in children admitted with respiratory conditions. Dressler and colleagues reported an immediate and sustained reduction in telemetry utilization in hospitalized adults upon redesign of cardiac telemetry order sets to include the clinical indication, which defaulted to the American Heart Association guideline‐recommended telemetry duration.[40] Instructions for bedside nurses were also included in the order set to facilitate appropriate telemetry discontinuation. Schondelmeyer and colleagues reported reductions in continuous pulse oximetry utilization in hospitalized children with asthma and bronchiolitis upon introduction of a multifaceted quality improvement program that included provider education, a nurse handoff checklist, and discontinuation criteria incorporated into order sets.[41]
Limitations of This Review and the Underlying Body of Work
There are limitations to this systematic review and its underlying body of work. With respect to our approach to this systematic review, we focused only on monitor alarms. Numerous other medical devices generate alarms in the patient‐care environment that also can contribute to alarm fatigue and deserve equally rigorous evaluation. With respect to the underlying body of work, the quality of individual studies was generally low. For example, determinations of alarm actionability were often made by a single rater without evaluation of the reliability or validity of these determinations, and statistical testing was often missing. There were also limitations specific to intervention studies, including evaluation of nongeneralizable patient populations, failure to measure the fidelity of the interventions, inadequate measures of intervention safety, and failure to statistically evaluate alarm reductions. Finally, though not necessarily a limitation, several studies were conducted by authors involved in or funded by the medical device industry.[11, 15, 19, 31, 32] This has the potential to introduce bias, although we have no indication that the quality of the science was adversely impacted.
Moving forward, the research agenda for physiologic monitor alarms should include the following: (1) more intensive focus on evaluating the relationship between alarm exposure and response time with analysis of important mediating factors that may promote or prevent alarm fatigue, (2) emphasis on studying interventions aimed at improving alarm management using rigorous designs such as cluster‐randomized trials and trials randomized by individual participant, (3) monitoring and reporting clinically meaningful balancing measures that represent unintended consequences of disabling or delaying potentially important alarms and possibly reducing the clinicians' ability to detect true patient deterioration and intervene in a timely manner, and (4) support for transparent academic-industry partnerships to evaluate new alarm technology in real‐world settings. As evidence‐based interventions emerge, there will be new opportunities to study different implementation strategies of these interventions to optimize effectiveness.
CONCLUSIONS
The body of literature relevant to physiologic monitor alarm characteristics and alarm fatigue is limited but growing rapidly. Although we know that most alarms are not actionable and that there appears to be a relationship between alarm exposure and response time that could be caused by alarm fatigue, we cannot yet say with certainty which interventions are most effective in safely reducing unnecessary alarms. Interventions that appear most promising and should be prioritized for intensive evaluation include widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and changing electrodes daily. Careful evaluation of these interventions must include systematically examining adverse patient safety consequences.
Acknowledgements
The authors thank Amogh Karnik and Micheal Sellars for their technical assistance during the review and extraction process.
Disclosures: Ms. Zander is supported by the Society of Hospital Medicine Student Hospitalist Scholar Grant. Dr. Bonafide and Ms. Stemler are supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.
1. National Patient Safety Goals Effective January 1, 2015. The Joint Commission Web site. http://www.jointcommission.org/assets/1/6/2015_NPSG_HAP.pdf. Accessed July 17, 2015.
2. ECRI Institute. 2015 Top 10 Health Technology Hazards. Available at: https://www.ecri.org/Pages/2015‐Hazards.aspx. Accessed June 23, 2015.
3. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378–386.
4. Redesigning hospital alarms for patient safety: alarmed and potentially dangerous. JAMA. 2014;311(12):1199–1200.
5. Meta‐analysis of observational studies in epidemiology: a proposal for reporting. Meta‐analysis Of Observational Studies in Epidemiology (MOOSE) Group. JAMA. 2000;283(15):2008–2012.
6. PRISMA Group. Preferred reporting items for systematic reviews and meta‐analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–269, W64.
7. ALARMED: adverse events in low‐risk patients with chest pain receiving continuous electrocardiographic monitoring in the emergency department. A pilot study. Am J Emerg Med. 2006;24:62–67.
8. Patient and nurse‐related implications of remote cardiac telemetry. Clin Nurs Res. 2003;12(4):356–370.
9. Clinical evaluation of alarm efficiency in intensive care [in French]. Ann Fr Anesth Reanim. 2000;19:459–466.
10. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25:1360–1366.
11. Insights into the problem of alarm fatigue with physiologic monitor devices: a comprehensive observational study of consecutive intensive care unit patients. PLoS One. 2014;9(10):e110274.
12. Nurses' response to frequency and types of electrocardiography alarms in a non‐critical care setting: a descriptive study. Int J Nurs Stud. 2014;51(2):190–197.
13. Improving alarm performance in the medical intensive care unit using delays and clinical context. Anesth Analg. 2009;108:1546–1552.
14. The proportion of clinically relevant alarms decreases as patient clinical severity decreases in intensive care units: a pilot study. BMJ Open. 2013;3(9):e003354.
15. Physiologic monitoring alarm load on medical/surgical floors of a community hospital. Biomed Instrum Technol. 2011;45:29–36.
16. Frequency and reliability of alarms in the monitoring of cardiac postoperative patients. Int J Clin Monit Comput. 1990;7(2):129–133.
17. Audit of the bedside monitor alarms in a critical care unit [in Spanish]. Enferm Intensiva. 2014;25(3):83–90.
18. Stimulus‐response time to invasive blood pressure alarms: implications for the safety of critical‐care patients. Rev Gaúcha Enferm. 2014;35(2):135–141.
19. Intensive care unit alarms—how many do we need? Crit Care Med. 2010;38:451–456.
20. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351–1358.
21. What's that noise? Bedside monitoring in the Emergency Department. Int Emerg Nurs. 2014;22(4):197–201.
22. Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital. J Hosp Med. 2015;10(6):345–351.
23. Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med. 1994;22(6):981–985.
24. What are we missing? Arrhythmia detection in the pediatric intensive care unit. J Pediatr. 2013;163(2):511–514.
25. Cardiopulmonary monitors and clinically significant events in critically ill children. Biomed Instrum Technol. 2011;45(s1):38–45.
26. Poor prognosis for existing monitors in the intensive care unit. Crit Care Med. 1997;25:614–619.
27. Safe patient monitoring is challenging but still feasible in a neonatal intensive care unit with single family rooms. Acta Paediatr. 2015;104(6):e247–e254.
28. The helpful or hindering effects of in‐hospital patient monitor alarms on nurses: a qualitative analysis. CIN Comput Inform Nurs. 2012;30(4):210–217.
29. Survey of alarms in an intensive therapy unit. Anaesthesia. 1986;41(7):742–744.
30. Postanesthesia monitoring revisited: frequency of true and false alarms from different monitoring devices. J Clin Anesth. 1994;6(3):182–188.
31. Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before‐and‐after concurrence study. Anesthesiology. 2010;112(2):282–287.
32. Differences in alarm events between disposable and reusable electrocardiography lead wires. Am J Crit Care. 2015;24(1):67–74.
33. Daily electrode change and effect on cardiac monitor alarms: an evidence‐based practice approach. J Nurs Care Qual. 2013;28:265–271.
34. Use of pagers with an alarm escalation system to reduce cardiac monitor alarm signals. J Nurs Care Qual. 2014;29(1):9–18.
35. Monitor alarm fatigue: standardizing use of physiological monitors and decreasing nuisance alarms. Am J Crit Care. 2010;19:28–34.
36. Influence of pulse oximeter lower alarm limit on the incidence of hypoxaemia in the recovery room. Br J Anaesth. 1997;79(4):460–464.
37. Novel approach to cardiac alarm management on telemetry units. J Cardiovasc Nurs. 2014;29(5):E13–E22.
38. A team‐based approach to reducing cardiac monitor alarms. Pediatrics. 2014;134(6):e1686–e1694.
39. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268–277.
40. Altering overuse of cardiac telemetry in non‐intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174(11):1852–1854.
41. Using quality improvement to reduce continuous pulse oximetry use in children with wheezing. Pediatrics. 2015;135(4):e1044–e1051.
42. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non‐randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–384.
Pharmacotherapy for Tobacco Use and COPD
Up to one‐third of the 700,000 patients admitted annually for an exacerbation of chronic obstructive pulmonary disease (COPD) continue to smoke tobacco.[1, 2] Smokers with COPD are at high risk for poor health outcomes directly attributable to tobacco‐related conditions, including progression of lung disease and cardiovascular diseases.[3, 4, 5] Treatment for tobacco addiction is the most essential intervention for these patients.
Hospital admission has been suggested as an opportune time for the initiation of smoking cessation.[6] Hospitalized patients are already in a smoke‐free environment, and have access to physicians, nurses, and pharmacists who can prescribe medications for support.[7] Documenting smoking status and offering smoking cessation treatment during and after discharge are quality metrics required by the Joint Commission, and recommended by the National Quality Forum.[8, 9] Hospitals have made significant efforts to comply with these requirements.[10]
Limited data exist regarding the effectiveness and utilization of treatments known to reduce cigarette use among COPD patients in nontrial environments. Prescribing patterns of medications for smoking cessation in the real world following admission for COPD are not well studied. We sought to examine the utilization of inpatient brief tobacco counseling and postdischarge pharmacotherapy following discharge for exacerbation of COPD, as well as to (1) examine the association of postdischarge pharmacotherapy with self‐reported smoking cessation at 6 to 12 months and (2) assess differences in effectiveness between cessation medications prescribed.
METHODS
We conducted a cohort study of current smokers discharged following a COPD exacerbation within the Veterans Affairs (VA) Veterans Integrated Service Network (VISN)‐20. This study was approved by the VA Puget Sound Health Care System Institutional Review Board (#00461).
We utilized clinical information from the VISN‐20 data warehouse that collects data using the VA electronic medical record, including demographics, prescription medications, hospital admissions, hospital and outpatient diagnoses, and dates of death, and is commonly used for research. In addition, we utilized health factors, coded electronic entries describing patient health behaviors that are entered by nursing staff at the time of a patient encounter, and the text of chart notes that were available for electronic query.
Study Cohort
We identified all smokers aged ≥40 years hospitalized between 2005 and 2012 with either a primary discharge diagnosis of COPD based on International Classification of Diseases, 9th Revision codes (491, 492, 493.2, and 496) or an admission diagnosis from the text of the admit notes indicating an exacerbation of COPD. We limited the cohort to patients aged ≥40 years to improve the specificity of the diagnosis of COPD, and we selected the first hospitalization that met inclusion criteria. We excluded subjects who died within 6 months of discharge (Figure 1).
[Figure 1]
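As a rough illustration of the cohort-selection steps just described (ICD-9 code screen, age restriction, first qualifying admission, exclusion of early deaths), the sketch below uses assumed table and column names; it is not the study's actual data warehouse query.

```python
import pandas as pd

COPD_ICD9_PREFIXES = ("491", "492", "493.2", "496")  # inclusion codes from the study

def select_cohort(admissions: pd.DataFrame) -> pd.DataFrame:
    """Assumed columns: patient_id, admit_date, discharge_date, primary_dx_icd9,
    age, death_date (missing if alive at last follow-up)."""
    adm = admissions.copy()
    for col in ("admit_date", "discharge_date", "death_date"):
        adm[col] = pd.to_datetime(adm[col])

    # Primary discharge diagnosis of COPD and age >= 40 years.
    is_copd = adm["primary_dx_icd9"].astype(str).str.startswith(COPD_ICD9_PREFIXES)
    eligible = adm[is_copd & (adm["age"] >= 40)]

    # Keep each patient's first qualifying hospitalization.
    first = eligible.sort_values("admit_date").drop_duplicates("patient_id", keep="first")

    # Exclude subjects who died within ~6 months (183 days) of discharge.
    days_to_death = (first["death_date"] - first["discharge_date"]).dt.days
    return first[~(days_to_death < 183)]
```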
To establish tobacco status, we built on previously developed and validated methodology,[11] and performed truncated natural language processing using phrases in the medical record that reflected patients' tobacco status, querying all notes from the day of admission up to 6 months prior. If no tobacco status was indicated in the notes, we identified the status encoded by the most recent health factor. We manually examined the results of the natural language processing and the determination of health factors to confirm the tobacco status. Manual review was undertaken by 1 of 2 trained study personnel. In the case of an ambiguous or contradictory status, an additional team member reviewed the information to attempt to make a determination. If no determination could be made, the record was coded to unknown. This method allowed us to identify a baseline status for all but 77 of the 3580 patients admitted for COPD.
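The status-determination logic described above could look roughly like the following sketch. The keyword patterns and function names are invented for illustration and are not the validated algorithm from the cited methodology.[11]

```python
import re
from typing import Optional

# Invented keyword patterns, for illustration only.
FORMER_PATTERNS = [r"\bformer\s+smoker\b", r"\bquit\s+smoking\b", r"\bex[- ]?smoker\b"]
NEVER_PATTERNS = [r"\bnever\s+smok", r"\bdenies\s+(any\s+)?tobacco\b"]
CURRENT_PATTERNS = [r"\bcurrent(ly)?\s+smok", r"\bactive\s+smoker\b", r"\bsmokes\b"]

def status_from_note(note_text: str) -> Optional[str]:
    """Return 'former', 'never', or 'current' if the note states a status, else None."""
    text = note_text.lower()
    if any(re.search(p, text) for p in FORMER_PATTERNS):
        return "former"
    if any(re.search(p, text) for p in NEVER_PATTERNS):
        return "never"
    if any(re.search(p, text) for p in CURRENT_PATTERNS):
        return "current"
    return None  # ambiguous or no mention: fall back to health factors / manual review

def baseline_status(notes: list, latest_health_factor: Optional[str]) -> str:
    """notes: note texts ordered from the day of admission back to 6 months prior."""
    for text in notes:
        status = status_from_note(text)
        if status is not None:
            return status
    return latest_health_factor or "unknown"

print(baseline_status(["Pt reports he quit smoking in 2010."], "current"))  # -> former
```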
Outcome and Exposure
The outcome was tobacco status at 6 to 12 months after discharge. Using the same methods developed for identification of baseline smoking status, we obtained smoking status for each subject up to 12 months postdischarge. If multiple notes and encounters were available indicating smoking status, we chose the latest within 12 months of discharge. Subjects lacking a follow‐up status were presumed to be smokers, a common assumption.[12] The 6‐ to 12‐month time horizon was chosen as these are the most common time points used to examine a sustained change in tobacco status,[13, 14, 15] and allowed for adequate time for treatment and clinical follow‐up.
Our primary exposure was any smoking cessation medication or combination dispensed within 90 days of discharge. This time horizon for treatment was chosen due to recent studies indicating this is a meaningful period for postdischarge treatment.[14] We assessed the use of nicotine patch, short‐acting nicotine, varenicline, bupropion, or any combination. Accurate data on the prescription and dispensing of these medications were available from the VA pharmacy record. Secondary exposure was the choice of medication dispensed among treated patients. We assessed additional exposures including receipt of cessation medications within 48 hours of discharge, treatment in the year prior to admission, and predischarge counseling. Predischarge counseling was determined as having occurred if nurses documented that they completed a discharge process focused on smoking cessation. Referral to a quit line is part of this process; however, due to the confidential nature of these interactions, generally low use of this service, and lack of linkage to the VA electronic health record, it was not considered in the analysis.
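A minimal sketch of how the primary (90-day) and secondary (48-hour) exposure flags might be derived from a pharmacy fill table is shown below; the column names are assumptions, not the VA pharmacy record schema.

```python
import pandas as pd

def add_exposure_flags(cohort: pd.DataFrame, fills: pd.DataFrame) -> pd.DataFrame:
    """cohort: patient_id, discharge_date. fills: patient_id, fill_date, drug."""
    merged = cohort.merge(fills, on="patient_id", how="left")
    delta = pd.to_datetime(merged["fill_date"]) - pd.to_datetime(merged["discharge_date"])

    # Any cessation medication dispensed within the two windows after discharge.
    merged["dispensed_90d"] = delta.between(pd.Timedelta(0), pd.Timedelta(days=90))
    merged["dispensed_48h"] = delta.between(pd.Timedelta(0), pd.Timedelta(hours=48))

    flags = merged.groupby("patient_id")[["dispensed_90d", "dispensed_48h"]].any()
    return cohort.merge(flags, left_on="patient_id", right_index=True, how="left")
```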
Confounders
Potential confounders were assessed in the year prior to admission up to discharge from the index hospitalization, with the use of mechanical or noninvasive ventilation assessed during the hospitalization. We adjusted for variables chosen a priori for their known or expected association with smoking cessation including demographics, Charlson Comorbidity Index,[16] markers of COPD severity (need for invasive or noninvasive mechanical ventilation during index hospitalization, use of oral steroids, long‐acting inhaled bronchodilators, and/or canister count of short‐acting bronchodilators in the year prior to admission), history of drug or alcohol abuse, homelessness, depression, psychosis, post‐traumatic stress disorder, lung cancer, coronary artery disease, and under‐ or overweight status. Nurse‐based counseling prior to discharge was included as a variable for adjustment for our primary and secondary predictors to assess the influence of pharmacotherapy specifically. Due to 3.1% missingness in body mass index, multiple imputation with chained equations was used to impute missing values, with 10 imputations performed. The imputation was performed using a linear regression model containing all variables included in the final model, grouped by facility.
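The imputation step can be illustrated with a rough Python analogue of chained-equation imputation. The original analysis was performed in Stata; the sketch below uses statsmodels' MICEData on synthetic placeholder variables and omits the grouping by facility.

```python
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(1)
# Synthetic stand-in for the analytic data set; ~3% of BMI values set to missing.
df = pd.DataFrame({
    "bmi": rng.normal(28, 6, 500),
    "age": rng.normal(63, 9, 500),
    "charlson": rng.poisson(2, 500).astype(float),
    "quit": rng.integers(0, 2, 500).astype(float),
})
df.loc[rng.choice(500, size=15, replace=False), "bmi"] = np.nan

imp = MICEData(df)
completed = []
for _ in range(10):        # 10 imputations, mirroring the analysis described above
    imp.update_all()       # one full cycle of chained-equation updates
    completed.append(imp.data.copy())

print(len(completed), completed[0]["bmi"].isna().sum())  # 10 data sets, no missing BMI
```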
Statistical Analysis
All analyses were performed using Stata 13 (StataCorp, College Station, TX) software. Chi‐square (χ2) tests and t tests were used to assess unadjusted bivariate associations. Using the pooled imputed datasets, we performed multivariable logistic regression to compare odds ratios for a change in smoking status, adjusting the estimates of coefficients and standard errors by applying combination rules to the 10 completed‐data estimates.[17] We analyzed our primary and secondary predictors, adjusting for the confounders chosen a priori, with clustering by facility and robust standard errors. An α level of <0.05 was considered significant.
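For illustration, a Python analogue of the model described above (logistic regression for quitting, with facility-clustered robust standard errors) is sketched below on synthetic data; the variable names are placeholders, and the pooling of estimates across the 10 imputed data sets via combination rules is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1334
df = pd.DataFrame({
    "quit": rng.integers(0, 2, n),
    "dispensed_90d": rng.integers(0, 2, n),
    "age": rng.normal(63, 9, n),
    "charlson": rng.poisson(2, n),
    "facility": rng.integers(0, 8, n),      # cluster identifier
})

model = smf.logit("quit ~ dispensed_90d + age + charlson", data=df)
result = model.fit(disp=False, cov_type="cluster", cov_kwds={"groups": df["facility"]})

print(np.exp(result.params["dispensed_90d"]))           # odds ratio for the exposure
print(np.exp(result.conf_int().loc["dispensed_90d"]))   # 95% CI on the odds-ratio scale
```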
Sensitivity Analysis
We assumed that subjects missing a follow‐up status were ongoing smokers. However, given the high mortality rate observed in our cohort, we were concerned that some subjects lacking a follow‐up status may have died, missing the opportunity to have a quit attempt recorded. Therefore, we performed sensitivity analysis excluding subjects who died during the 6 to 12 months of follow‐up, repeating the imputation and analysis as described above. In addition, due to concern for indication bias in the choice of medication used for our secondary analysis, we performed propensity score matching for treatment with each medication in comparison to nicotine patch, using the teffects command, with 3 nearest neighbor matches. We included additional comorbidities in the propensity score matching.[18]
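The propensity-score-matching sensitivity analysis can be sketched as follows. This is a simplified Python illustration of matching each treated patient to 3 nearest-neighbor controls on the propensity score, analogous in spirit to the Stata teffects approach used in the study; the covariates and data are synthetic placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 450
df = pd.DataFrame({
    "varenicline": rng.integers(0, 2, n),   # treatment of interest vs nicotine patch
    "age": rng.normal(62, 8, n),
    "charlson": rng.poisson(2, n),
    "depression": rng.integers(0, 2, n),
    "quit": rng.integers(0, 2, n),
})

# Propensity score: estimated probability of receiving the treatment given covariates.
X = df[["age", "charlson", "depression"]].to_numpy()
ps = LogisticRegression(max_iter=1000).fit(X, df["varenicline"]).predict_proba(X)[:, 1]

treated = df.index[df["varenicline"] == 1].to_numpy()
control = df.index[df["varenicline"] == 0].to_numpy()

# Match each treated subject to its 3 nearest controls on the propensity score.
nn = NearestNeighbors(n_neighbors=3).fit(ps[control].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated].reshape(-1, 1))

matched_quit = df["quit"].to_numpy()[control][matches].mean(axis=1)
att = df["quit"].to_numpy()[treated].mean() - matched_quit.mean()
print(f"Matched difference in quit proportion (ATT-style estimate): {att:.3f}")
```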
RESULTS
Among the 1334 subjects in the cohort, at 6 to 12 months of follow‐up 63.7% reported ongoing smoking, 19.8% reported quitting, and 17.5% had no reported status and were presumed to be smokers. Four hundred fifty patients (33.7%) were dispensed a smoking cessation medication within 90 days of discharge. Patients who were dispensed medications were younger and more likely to be female. Nearly all patients who received medications also received documented predischarge counseling (94.6%), as did the majority of patients who did not receive medications (83.8%) (Table 1).
Variable | No Medication Dispensed, n = 884, No. (%) or Mean ± SD (Range) | Medication Dispensed, n = 450, No. (%) or Mean ± SD (Range) | P Value
---|---|---|---
Not smoking at 612 months | 179 (20.2) | 85 (18.9) | 0.56 |
Brief counseling at discharge | 742 (83.8%) | 424 (94.6%) | <0.001* |
Age | 64.4 ± 9.13 (40–94) | 61.0 ± 7.97 (41–85) | <0.001*
Male | 852 (96.3) | 423 (94.0) | 0.05* |
Race | 0.12 | ||
White | 744 (84.2) | 377 (83.8) | |
Black | 41 (4.6) | 12 (2.7) | |
Other/unknown | 99 (11.1) | 61 (13.6) | |
BMI | 28.0 ± 9.5 (12.6–69.0) | 28.9 ± 10.8 (14.8–60.0) | 0.15
Homeless | 68 (7.7) | 36 (8.0) | 0.84 |
Psychiatric conditions/substance abuse | |||
History of alcohol abuse | 205 (23.2) | 106 (23.6) | 0.88 |
History of drug abuse | 110 (12.4) | 72 (16.0) | 0.07 |
Depression | 39 (4.4) | 29 (6.4) | 0.11 |
Psychosis | 201 (22.7) | 88 (19.6) | 0.18 |
PTSD | 146 (16.5) | 88 (19.6) | 0.17 |
Comorbidities | |||
Coronary artery disease | 254 (28.7) | 110 (24.4) | 0.10 |
Cerebrovascular accident | 80 (9.0) | 28 (2.2) | 0.86 |
Obstructive sleep apnea | 42 (4.8) | 23 (5.1) | 0.77 |
Lung cancer | 21 (2.4) | 10 (2.2) | 0.86 |
Charlson Comorbidity Index | 2.25 ± 1.93 (0–14) | 2.11 ± 1.76 (0–10) | 0.49
Markers of COPD severity | |||
Mechanical ventilation during admission | 28 (3.2) | 14 (3.1) | 0.96 |
NIPPV during admission | 97 (11.0) | 51 (11.3) | 0.84 |
Oral steroids prescribed in the past year | 334 (37.8) | 154 (34.2) | 0.20 |
Treatment with tiotropium in the past year | 97 (11.0) | 55 (12.2) | 0.50 |
Treatment with LABA in the past year | 264 (29.9) | 155 (34.4) | 0.09 |
Canisters of SABA used in past year | 6.63 ± 9.8 (0–84) | 7.46 ± 9.63 (0–45) | 0.14
Canisters of ipratropium used in past year | 6.45 ± 8.81 (0–54) | 6.86 ± 9.08 (0–64) | 0.42
Died during 612 months of follow‐up | 78 (8.8) | 28 (6.6) | 0.10 |
Of patients dispensed a study medication, 246 (18.4% of patients, 54.7% of all medications dispensed) were dispensed medications within 48 hours of discharge (Table 2). Of the patients dispensed medication, the majority received nicotine patches alone (Table 3), and 18.9% of patients received combination therapy, with the majority receiving nicotine patch and short‐acting nicotine replacement therapy (NRT) or patch and bupropion. A significant number of patients were prescribed medications within 90 days of discharge, but did not have them dispensed within that timeframe (n = 224, 16.8%).
Medication Dispensed | No. (%) | % Quit (Unadjusted) | OR (95% CI) | P Value |
---|---|---|---|---|
No medications dispensed | 884 (66.3) | 20.2 | Referent | |
Any medication from | ||||
Discharge to 90 days | 450 (33.7) | 18.9 | 0.88 (0.74–1.04) | 0.137
Within 48 hours of discharge | 246 (18.4) | 18.3 | 0.87 (0.66–1.14) | 0.317
Treated in the year prior to admission | 221 (16.6) | 19.6 | Referent |
Treated in the year prior to admission + 0–90 days postdischarge | 152 (11.4) | 18.4 | 0.95 (0.79–1.13) | 0.534
No nurse‐provided counseling prior to discharge | 169 (12.7) | 20.5 | Referent |
Nurse‐provided counseling prior to discharge | 1,165 (87.3) | 19.5 | 0.95 (0.66–1.36) | 0.774
Medication Dispensed | No. (%) | % Quit (Unadjusted) | OR (95% CI) | P Value |
---|---|---|---|---|
Nicotine patch | 242 (53.8) | 18.6 | Referent | |
Monotherapy with | ||||
Varenicline | 36 (8.0) | 30.6 | 2.44 (1.48–4.05) | 0.001
Short‐acting NRT | 34 (7.6) | 11.8 | 0.66 (0.51–0.85) | 0.001
Bupropion | 55 (12.2) | 21.8 | 1.05 (0.67–1.62) | 0.843
Combination therapy | 85 (18.9) | 15.7 | 0.94 (0.71–1.24) | 0.645
Association of Treatment With Study Medications and Quitting Smoking
In adjusted analyses, the odds of quitting smoking at 6 to 12 months were not greater among patients who were dispensed a study medication within 90 days of discharge (odds ratio [OR]: 0.88, 95% confidence interval [CI]: 0.74‐1.04). We found no association between counseling provided at discharge and smoking cessation (OR: 0.95, 95% CI: 0.66‐1.36), adjusted for the receipt of medications. There was also no difference in quit rates among patients dispensed medication within 48 hours of discharge or among patients treated in the year prior to admission and again postdischarge (Table 2).
We then assessed differences in effectiveness between specific medications among the 450 patients who were dispensed medications. Using nicotine patch alone as the referent group, patients treated with varenicline demonstrated greater odds of smoking cessation (OR: 2.44, 95% CI: 1.48‐4.05). Patients treated with short‐acting NRT alone were less likely to report smoking cessation (OR: 0.66, 95% CI: 0.51‐0.85). Patients treated with bupropion or combination therapy were no more likely to report cessation (Table 3). When sensitivity analysis was performed using propensity score matching with additional variables included, there were no significant differences in the observed associations.
The overall mortality rate at 1 year was 19.5%, nearly identical to that reported in previous cohort studies of patients admitted for COPD.[19, 20] Because patients and physicians may behave differently when life expectancy is limited, we performed a sensitivity analysis restricted to patients who survived at least 12 months of follow‐up. One hundred six patients (7.9%) died during 6 to 12 months of follow‐up. There was no change in inference for our primary exposure (OR: 0.95, 95% CI: 0.79‐1.14) or any of the secondary exposures examined.
DISCUSSION
In this observational study, postdischarge pharmacotherapy within 90 days of discharge was provided to a minority of high‐risk smokers admitted for COPD and was not associated with smoking cessation at 6 to 12 months. Compared with nicotine patch alone, varenicline was associated with higher odds of cessation, whereas short‐acting NRT alone was associated with lower odds. The overall quit rate of 19.8% was substantial and is consistent with annual quit rates observed among patients with COPD in other settings,[21, 22] but it is far lower than quit rates observed after admission for acute myocardial infarction.[23, 24, 25] Although the proportion of patients treated at the time of discharge or within 90 days was low, our findings are in keeping with previous studies, which demonstrated low rates of pharmacologic treatment following hospitalization, averaging 14%.[26] Treatment for tobacco use is likely underutilized in this group of high‐risk smokers. However, a significant proportion of patients who were prescribed medications in the postdischarge period did not have them filled. This likely reflects both the rapid changes in motivation that characterize quit attempts[27] and efforts on the part of primary care physicians to make these medications available to facilitate future quit attempts.
There are several possible explanations for these findings. Pharmaceutical therapies were not provided at random. The provision of pharmacotherapy and the ultimate success of a quit attempt reflect a complex interaction of patient beliefs concerning medications, level of addiction and motivation, physician behavior and knowledge, and organizational factors. Organizational factors such as the structure of electronic discharge orders and the availability of decision support materials may influence a physician's likelihood of prescribing medications and the choice of medication prescribed, and therefore the adequacy of control of withdrawal symptoms. NRT is often underdosed for control of ongoing symptoms,[28] and it needs to be adjusted until relief is obtained, an additional barrier to effectiveness during the transition out of the hospital. Because most smokers with COPD are highly addicted to nicotine,[29] high‐dose NRT, combination therapy, or varenicline would be necessary to adequately control symptoms.[30] Nevertheless, a significant minority of patients received short‐acting NRT alone.
Despite high observed efficacy in recent trials,[31, 32] few subjects in our study received varenicline. This may be related to both secular trends and administrative barriers to the use of varenicline in the VA system. Use of this medication was limited among patients with psychiatric disorders because of safety concerns; these concerns have since been largely disproven, but they may have limited access to the medication.[33, 34, 35] Although we adjusted for a history of mental illness, patients who received varenicline may have had more past quit attempts and less active mental illness, both of which may be associated with improved cessation rates. Although the prevalence of mental illness we observed was high, it is typical of smokers as a population, with studies indicating that nearly one‐third of smokers overall suffer from mental illness.[36]
Although the majority of our patients received a brief, nurse‐based counseling intervention, there is considerable concern about the ability of a single predischarge interaction to produce sustained smoking cessation among highly addicted smokers.[37, 38, 39, 40] The Joint Commission has recently restructured the requirements for smoking cessation treatment for hospitalized patients, and it is now up to hospitals to implement treatment mechanisms that not only meet the national requirements but also provide a meaningful clinical effect. Though the optimum treatment for hospitalized smokers with COPD is unknown, previous positive studies of smoking cessation among hospitalized patients underscore the need for a higher‐intensity counseling intervention that begins during hospitalization and continues after discharge.[13, 41] Cessation counseling services, including tobacco cessation groups and quit lines, are available through the VA; however, use of these services is typically low and requires the patient to enroll independently after discharge, an additional barrier. The lack of association between medications and smoking cessation found in our study could reflect poor effectiveness of medications in the absence of a systematic counseling intervention. Alternatively, it may reflect indication bias: patients who were more highly addicted, and perhaps less motivated to quit, may have received tobacco cessation medications more often while also being less likely to stop tobacco use.
Our study has several limitations. We did not have data on level of addiction or motivation to quit, a potential unmeasured confounder. Although predictive of quit attempts, motivational factors are less predictive of cessation maintenance and may therefore have an unclear effect on our outcome.[42, 43] Our outcome was gathered as part of routine clinical care, which may have introduced bias if patients over‐reported cessation because of social desirability; in healthcare settings, however, this form of assessing smoking status is generally valid.[44] Exposure to counseling or medications obtained outside of the VA system would not have been captured. Given the financial incentive, we believe it is unlikely that many patients admitted to a VA medical center obtained medications elsewhere.[45] The diagnosis of COPD was made administratively. However, all subjects were admitted for an exacerbation, which is associated with more severe COPD by Global Initiative for Obstructive Lung Disease (GOLD) stage.[46] Patients with more severe COPD are often excluded from studies of smoking cessation because of concerns about high dropout and the lower prevalence of smoking among patients with GOLD stage IV disease,[47, 48] making their inclusion a strength of our study. Subjects who died may have quit only in extremis, or their quit attempts may not have been documented; however, our sensitivity analysis limited to survivors did not change the study results. There may have been some misclassification in the use of bupropion, which may also be prescribed as an antidepressant. Finally, although representative of the veterans who seek care within VISN‐20, our patients were primarily white and male, limiting generalizability outside of this group.
Our study had several strengths. We examined a large cohort of patients admitted to a complete care organization, including patients from a diverse group of VA settings comprising academically and nonacademically affiliated centers. We assembled the cohort without selection, including all smokers discharged for COPD. We had access to complete records of medications prescribed and filled within the VA system, enabling us to observe medications dispensed and prescribed at several time points. We also had near‐complete ascertainment of outcomes, using natural language processing with manual confirmation of smoking status.
In summary, we found that provision of medications to treat ongoing tobacco use among patients discharged for COPD was low, and receipt of medications was not associated with a reduction in smoking tobacco at 6 to 12 months postdischarge. However, among those treated, varenicline appears to be superior to the nicotine patch, with short‐acting nicotine replacement potentially less effective, a biologically plausible finding. The motivation to quit smoking changes rapidly over time. Providing these medications in the hospital and in the period after discharge is a potential means to improve quit rates, but medications need to be paired with counseling to be most effective. Collectively, these data suggest that systems‐based interventions are needed to increase the availability of intensive counseling and the use of tailored pharmacotherapy for these patients.
Acknowledgements
The authors acknowledge Mr. Robert Plumley, who performed the data extraction and natural language processing necessary to complete this project.
Disclosures: Dr. Melzer conceived of the research question and performed background reading, analyses, primary drafting, and final revision of the manuscript. Drs. Collins and Feemster participated in finalizing the research question, developing the cohort, performing data collection, and revising the manuscript. Dr. Au provided the database for analysis, helped finalize the research question, and assisted in interpretation of the data and revision of the manuscript. Dr. Au has personally reviewed the data, understands the statistical methods employed, and confirms that the methods are clearly described and are a fair way to report the results. This material is based upon work supported in part by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, which provided access to data, office space, and programming and data management. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs, the United States government, or the National Institutes of Health. Dr. Au is an unpaid research consultant for Analysis Group. None of the other authors have any conflicts of interest to disclose. Dr. Melzer is supported by an institutional F‐32 (HL007287‐36) through the University of Washington Department of Pulmonary and Critical Care. Dr. Feemster is supported by a National Institutes of Health, National Heart, Lung, and Blood Institute, K23 Mentored Career Development Award (HL111116). Partial support of this project was provided by Gilead Sciences with research funding to the Seattle Institute for Biomedical and Clinical Research. Additional support was received through the VA Health Services Research and Development. A portion of this work was presented in abstract form at the American Thoracic Society International Meeting, May 2015, in Denver, Colorado.
1. Patients hospitalized for COPD have a high prevalence of modifiable risk factors for exacerbation (EFRAM study). Eur Respir J. 2000;16(6):1037–1042.
2. Analysis of hospitalizations for COPD exacerbation: opportunities for improving care. COPD. 2010;7(2):85–92.
3. Mortality in COPD: role of comorbidities. Eur Respir J. 2006;28(6):1245–1257.
4. Cardiovascular comorbidity in COPD: systematic literature review. Chest. 2013;144(4):1163–1178.
5. Engaging patients and clinicians in treating tobacco addiction. JAMA Intern Med. 2014;174(8):1299–1300.
6. Smokers who are hospitalized: a window of opportunity for cessation interventions. Prev Med. 1992;21(2):262–269.
7. Interventions for smoking cessation in hospitalised patients. Cochrane Database Syst Rev. 2012;5:CD001837.
8. Specifications Manual for National Hospital Inpatient Quality Measures. Available at: http://www.jointcommission.org/specifications_manual_for_national_hospital_inpatient_quality_measures.aspx. Accessed January 15, 2015.
9. Treating Tobacco Use and Dependence. April 2013. Agency for Healthcare Research and Quality, Rockville, MD. Available at: http://www.ahrq.gov/professionals/clinicians‐providers/guidelines‐recommendations/tobacco/clinicians/update/index.html. Accessed January 15, 2015.
10. Smoking cessation advice rates in US hospitals. Arch Intern Med. 2011;171(18):1682–1684.
11. Validating smoking data from the Veteran's Affairs Health Factors dataset, an electronic data source. Nicotine Tob Res. 2011;13(12):1233–1239.
12. Do u smoke after txt? Results of a randomised trial of smoking cessation using mobile phone text messaging. Tob Control. 2005;14(4):255–261.
13. The effectiveness of smoking cessation groups offered to hospitalised patients with symptoms of exacerbations of chronic obstructive pulmonary disease (COPD). Clin Respir J. 2008;2(3):158–165.
14. Sustained care intervention and postdischarge smoking cessation among hospitalized adults: a randomized clinical trial. JAMA. 2014;312(7):719–728.
15. Bupropion for smokers hospitalized with acute cardiovascular disease. Am J Med. 2006;119(12):1080–1087.
16. Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619.
17. Multiple Imputation for Nonresponse in Surveys. New York, NY: Wiley; 1987.
18. Methods for constructing and assessing propensity scores. Health Serv Res. 2014;49(5):1701–1720.
19. Mortality and mortality‐related factors after hospitalization for acute exacerbation of COPD. Chest. 2003;124(2):459–467.
20. Mortality after hospitalization for COPD. Chest. 2002;121(5):1441–1448.
21. State quitlines and cessation patterns among adults with selected chronic diseases in 15 states, 2005–2008. Prev Chronic Dis. 2012;9(10):120105.
22. The effects of counseling on smoking cessation among patients hospitalized with chronic obstructive pulmonary disease: a randomized clinical trial. Int J Addict. 1991;26(1):107–119.
23. Predictors of smoking cessation after a myocardial infarction: the role of institutional smoking cessation programs in improving success. Arch Intern Med. 2008;168(18):1961–1967.
24. Post‐myocardial infarction smoking cessation counseling: associations with immediate and late mortality in older Medicare patients. Am J Med. 2005;118(3):269–275.
25. Smoking cessation after acute myocardial infarction: effects of a nurse‐managed intervention. Ann Intern Med. 1990;113(2):118–123.
26. Smoking care provision in hospitals: a review of prevalence. Nicotine Tob Res. 2008;10(5):757–774.
27. Intentions to quit smoking change over short periods of time. Addict Behav. 2005;30(4):653–662.
28. Association of amount and duration of NRT use in smokers with cigarette consumption and motivation to stop smoking: a national survey of smokers in England. Addict Behav. 2015;40:33–38.
29. Smoking prevalence, behaviours, and cessation among individuals with COPD or asthma. Respir Med. 2011;105(3):477–484.
30. American College of Chest Physicians. Tobacco Dependence Treatment ToolKit. 3rd ed. Available at: http://tobaccodependence.chestnet.org. Accessed January 29, 2015.
31. Effects of varenicline on smoking cessation in patients with mild to moderate COPD: a randomized controlled trial. Chest. 2011;139(3):591–599.
32. Varenicline versus transdermal nicotine patch for smoking cessation: results from a randomised open‐label trial. Thorax. 2008;63(8):717–724.
33. Psychiatric adverse events in randomized, double‐blind, placebo‐controlled clinical trials of varenicline. Drug Saf. 2010;33(4):289–301.
34. Studies linking smoking‐cessation drug with suicide risk spark concerns. JAMA. 2009;301(10):1007–1008.
35. A randomized, double‐blind, placebo‐controlled study evaluating the safety and efficacy of varenicline for smoking cessation in patients with schizophrenia or schizoaffective disorder. J Clin Psychiatry. 2012;73(5):654–660.
36. Smoking and mental illness: results from population surveys in Australia and the United States. BMC Public Health. 2009;9(1):285.
37. Implementation and effectiveness of a brief smoking‐cessation intervention for hospital patients. Med Care. 2000;38(5):451–459.
38. Clinical trial comparing nicotine replacement therapy (NRT) plus brief counselling, brief counselling alone, and minimal intervention on smoking cessation in hospital inpatients. Thorax. 2003;58(6):484–488.
39. Dissociation between hospital performance of the smoking cessation counseling quality metric and cessation outcomes after myocardial infarction. Arch Intern Med. 2008;168(19):2111–2117.
40. Smoking cessation in hospitalized patients: results of a randomized trial. Arch Intern Med. 1997;157(4):409–415.
41. Intensive smoking cessation counseling versus minimal counseling among hospitalized smokers treated with transdermal nicotine replacement: a randomized trial. Am J Med. 2003;114(7):555–562.
42. Motivational factors predict quit attempts but not maintenance of smoking cessation: findings from the International Tobacco Control Four country project. Nicotine Tob Res. 2010;12(suppl):S4–S11.
43. Predictors of attempts to stop smoking and their success in adult general population samples: a systematic review. Addiction. 2011;106(12):2110–2121.
44. Validity of self‐reported smoking status among participants in a lung cancer screening trial. Cancer Epidemiol Biomarkers Prev. 2006;15(10):1825–1828.
45. VHA enrollees' health care coverage and use of care. Med Care Res Rev. 2003;60(2):253–267.
46. Association between lung function and exacerbation frequency in patients with COPD. Int J Chron Obstruct Pulmon Dis. 2010;5:435–444.
47. Smoking cessation in patients with chronic obstructive pulmonary disease: a double‐blind, placebo‐controlled, randomised trial. Lancet. 2001;357(9268):1571–1575.
48. Nurse‐conducted smoking cessation in patients with COPD using nicotine sublingual tablets and behavioral support. Chest. 2006;130(2):334–342.
Up to one‐third of the 700,000 patients admitted annually for an exacerbation of chronic obstructive pulmonary disease (COPD) continue to smoke tobacco.[1, 2] Smokers with COPD are at high risk for poor health outcomes directly attributable to tobacco‐related conditions, including progression of lung disease and cardiovascular diseases.[3, 4, 5] Treatment for tobacco addiction is the most essential intervention for these patients.
Hospital admission has been suggested as an opportune time for the initiation of smoking cessation.[6] Hospitalized patients are already in a smoke‐free environment, and have access to physicians, nurses, and pharmacists who can prescribe medications for support.[7] Documenting smoking status and offering smoking cessation treatment during and after discharge are quality metrics required by the Joint Commission, and recommended by the National Quality Forum.[8, 9] Hospitals have made significant efforts to comply with these requirements.[10]
Limited data exist regarding the effectiveness and utilization of treatments known to reduce cigarette use among COPD patients in nontrial environments. Prescribing patterns of medications for smoking cessation in the real world following admission for COPD are not well studied. We sought to examine the utilization of inpatient brief tobacco counseling and postdischarge pharmacotherapy following discharge for exacerbation of COPD, as well as to (1) examine the association of postdischarge pharmacotherapy with self‐reported smoking cessation at 6 to 12 months and (2) assess differences in effectiveness between cessation medications prescribed.
METHODS
We conducted a cohort study of current smokers discharged following a COPD exacerbation within the Veterans Affairs (VA) Veterans Integrated Service Network (VISN)‐20. This study was approved by the VA Puget Sound Health Care System Institutional Review Board (#00461).
We utilized clinical information from the VISN‐20 data warehouse that collects data using the VA electronic medical record, including demographics, prescription medications, hospital admissions, hospital and outpatient diagnoses, and dates of death, and is commonly used for research. In addition, we utilized health factors, coded electronic entries describing patient health behaviors that are entered by nursing staff at the time of a patient encounter, and the text of chart notes that were available for electronic query.
Study Cohort
We identified all smokers aged 40 years hospitalized between 2005 and 2012 with either a primary discharge diagnosis of COPD based on International Classification of Diseases, 9th Revision codes (491, 492, 493.2, and 496) or an admission diagnosis from the text of the admit notes indicating an exacerbation of COPD. We limited to patients aged 40 years to improve the specificity of the diagnosis of COPD, and we selected the first hospitalization that met inclusion criteria. We excluded subjects who died within 6 months of discharge (Figure 1).

To establish tobacco status, we built on previously developed and validated methodology,[11] and performed truncated natural language processing using phrases in the medical record that reflected patients' tobacco status, querying all notes from the day of admission up to 6 months prior. If no tobacco status was indicated in the notes, we identified the status encoded by the most recent health factor. We manually examined the results of the natural language processing and the determination of health factors to confirm the tobacco status. Manual review was undertaken by 1 of 2 trained study personnel. In the case of an ambiguous or contradictory status, an additional team member reviewed the information to attempt to make a determination. If no determination could be made, the record was coded to unknown. This method allowed us to identify a baseline status for all but 77 of the 3580 patients admitted for COPD.
Outcome and Exposure
The outcome was tobacco status at 6 to 12 months after discharge. Using the same methods developed for identification of baseline smoking status, we obtained smoking status for each subject up to 12 months postdischarge. If multiple notes and encounters were available indicating smoking status, we chose the latest within 12 months of discharge. Subjects lacking a follow‐up status were presumed to be smokers, a common assumption.[12] The 6 to 12month time horizon was chosen as these are the most common time points used to examine a sustained change in tobacco status,[13, 14, 15] and allowed for adequate time for treatment and clinical follow‐up.
Our primary exposure was any smoking cessation medication or combination dispensed within 90 days of discharge. This time horizon for treatment was chosen due to recent studies indicating this is a meaningful period for postdischarge treatment.[14] We assessed the use of nicotine patch, short‐acting nicotine, varenicline, buproprion, or any combination. Accurate data on the prescription and dispensing of these medications were available from the VA pharmacy record. Secondary exposure was the choice of medication dispensed among treated patients. We assessed additional exposures including receipt of cessation medications within 48 hours of discharge, treatment in the year prior to admission, and predischarge counseling. Predischarge counseling was determined as having occurred if nurses documented that they completed a discharge process focused on smoking cessation. Referral to a quit line is part of this process; however, due to the confidential nature of these interactions, generally low use of this service, and lack of linkage to the VA electronic health record, it was not considered in the analysis.
Confounders
Potential confounders were assessed in the year prior to admission up to discharge from the index hospitalization, with the use of mechanical or noninvasive ventilation assessed during the hospitalization. We adjusted for variables chosen a priori for their known or expected association with smoking cessation including demographics, Charlson Comorbidity Index,[16] markers of COPD severity (need for invasive or noninvasive mechanical ventilation during index hospitalization, use of oral steroids, long‐acting inhaled bronchodilators, and/or canister count of short‐acting bronchodilators in the year prior to admission), history of drug or alcohol abuse, homelessness, depression, psychosis, post‐traumatic stress disorder, lung cancer, coronary artery disease, and under‐ or overweight status. Nurse‐based counseling prior to discharge was included as a variable for adjustment for our primary and secondary predictors to assess the influence of pharmacotherapy specifically. Due to 3.1% missingness in body mass index, multiple imputation with chained equations was used to impute missing values, with 10 imputations performed. The imputation was performed using a linear regression model containing all variables included in the final model, grouped by facility.
Statistical Analysis
All analyses were performed using Stata 13 (StataCorp, College Station, TX) software. 2 tests and t tests were used to assess for unadjusted bivariate associations. Using the pooled imputed datasets, we performed multivariable logistic regression to compare odds ratios for a change in smoking status, adjusting the estimates of coefficients and standard errors by applying combination rules to the 10 completed‐data estimates.[17] We analyzed our primary and secondary predictors, adjusting for the confounders chosen a priori, clustered by facility with robust standard errors. An level of <0.05 was considered significant.
Sensitivity Analysis
We assumed that subjects missing a follow‐up status were ongoing smokers. However, given the high mortality rate observed in our cohort, we were concerned that some subjects lacking a follow‐up status may have died, missing the opportunity to have a quit attempt recorded. Therefore, we performed sensitivity analysis excluding subjects who died during the 6 to 12 months of follow‐up, repeating the imputation and analysis as described above. In addition, due to concern for indication bias in the choice of medication used for our secondary analysis, we performed propensity score matching for treatment with each medication in comparison to nicotine patch, using the teffects command, with 3 nearest neighbor matches. We included additional comorbidities in the propensity score matching.[18]
RESULTS
Among these 1334 subjects at 6 to 12 months of follow‐up, 63.7% reported ongoing smoking, 19.8% of patients reported quitting, and 17.5% of patients had no reported status and were presumed to be smokers. Four hundred fifty (33.7%) patients were dispensed a smoking cessation medication within 90 days of discharge. Patients who were dispensed medications were younger and more likely to be female. Nearly all patients who received medications also received documented predischarge counseling (94.6%), as did the majority of patients who did not receive medications (83.8%) (Table 1).
Variable | No Medication Dispensed, n = 884, No. (%) | Medication Dispensed, n = 450, No. (%) | P Value |
---|---|---|---|
| |||
Not smoking at 612 months | 179 (20.2) | 85 (18.9) | 0.56 |
Brief counseling at discharge | 742 (83.8%) | 424 (94.6%) | <0.001* |
Age | 64.49.13 (4094) | 61.07.97 (4185) | <0.001* |
Male | 852 (96.3) | 423 (94.0) | 0.05* |
Race | 0.12 | ||
White | 744 (84.2) | 377 (83.8) | |
Black | 41 (4.6) | 12 (2.7) | |
Other/unknown | 99 (11.1) | 61 (13.6) | |
BMI | 28.09.5 (12.669.0) | 28.910.8 (14.860.0) | 0.15 |
Homeless | 68 (7.7) | 36 (8.0) | 0.84 |
Psychiatric conditions/substance abuse | |||
History of alcohol abuse | 205 (23.2) | 106 (23.6) | 0.88 |
History of drug abuse | 110 (12.4) | 72 (16.0) | 0.07 |
Depression | 39 (4.4) | 29 (6.4) | 0.11 |
Psychosis | 201 (22.7) | 88 (19.6) | 0.18 |
PTSD | 146 (16.5) | 88 (19.6) | 0.17 |
Comorbidities | |||
Coronary artery disease | 254 (28.7) | 110 (24.4) | 0.10 |
Cerebrovascular accident | 80 (9.0) | 28 (2.2) | 0.86 |
Obstructive sleep apnea | 42 (4.8) | 23 (5.1) | 0.77 |
Lung cancer | 21 (2.4) | 10 (2.2) | 0.86 |
Charlson Comorbidity Index | 2.251.93 (014) | 2.111.76 (010) | 0.49 |
Markers of COPD severity | |||
Mechanical ventilation during admission | 28 (3.2) | 14 (3.1) | 0.96 |
NIPPV during admission | 97 (11.0) | 51 (11.3) | 0.84 |
Oral steroids prescribed in the past year | 334 (37.8) | 154 (34.2) | 0.20 |
Treatment with tiotropium in the past year | 97 (11.0) | 55 (12.2) | 0.50 |
Treatment with LABA in the past year | 264 (29.9) | 155 (34.4) | 0.09 |
Canisters of SABA used in past year | 6.639.8, (084) | 7.469.63 (045) | 0.14 |
Canisters of ipratropium used in past year | 6.458.81 (054) | 6.869.08 (064) | 0.42 |
Died during 612 months of follow‐up | 78 (8.8) | 28 (6.6) | 0.10 |
Of patients dispensed a study medication, 246 (18.4% of patients, 54.7% of all medications dispensed) were dispensed medications within 48 hours of discharge (Table 2). Of the patients dispensed medication, the majority received nicotine patches alone (Table 3), and 18.9% of patients received combination therapy, with the majority receiving nicotine patch and short‐acting nicotine replacement therapy (NRT) or patch and buproprion. A significant number of patients were prescribed medications within 90 days of discharge, but did not have them dispensed within that timeframe (n = 224, 16.8%).
Medication Dispensed | No. (%) | % Quit (Unadjusted) | OR (95% CI) | P Value |
---|---|---|---|---|
| ||||
No medications dispensed | 884 (66.3) | 20.2 | Referent | |
Any medication from | ||||
Discharge to 90 days | 450 (33.7) | 18.9 | 0.88 (0.741.04) | 0.137 |
Within 48 hours of discharge | 246 (18.4) | 18.3 | 0.87 (0.661.14) | 0.317 |
Treated in the year prior to admission | 221 (16.6) | 19.6 | Referent | |
Treated in the year prior to admission + 090 days postdischarge | 152 (11.4) | 18.4 | 0.95 (0.791.13) | 0.534 |
No nurse‐provided counseling prior to discharge | 169 (12.7) | 20.5 | Referent | |
Nurse‐provided counseling prior to discharge | 1,165 (87.3) | 19.5 | 0.95 (0.661.36) | 0.774 |
Medication Dispensed | No. (%) | % Quit (Unadjusted) | OR (95% CI) | P Value |
---|---|---|---|---|
| ||||
Nicotine patch | 242 (53.8) | 18.6 | Referent | |
Monotherapy with | ||||
Varenicline | 36 (8.0) | 30.6 | 2.44 (1.484.05) | 0.001 |
Short‐acting NRT | 34 (7.6) | 11.8 | 0.66 (0.510.85) | 0.001 |
Buproprion | 55 (12.2) | 21.8 | 1.05 (0.671.62) | 0.843 |
Combination therapy | 85 (18.9) | 15.7 | 0.94 (0.711.24) | 0.645 |
Association of Treatment With Study Medications and Quitting Smoking
In adjusted analyses, the odds of quitting smoking at 6 to 12 months were not greater among patients who were dispensed a study medication within 90 days of discharge (odds ratio [OR]: 0.88, 95% confidence interval [CI]: 0.74‐1.04). We found no association between counseling provided at discharge and smoking cessation (OR: 0.95, 95% CI: 0.0.66‐1.), adjusted for the receipt of medications. There was no difference in quit rate between patients dispensed medication within 48 hours of discharge, or between patients treated in the year prior to admission and again postdischarge (Table 2).
We then assessed differences in effectiveness between specific medications among the 450 patients who were dispensed medications. Using nicotine patch alone as the referent group, patients treated with varenicline demonstrated greater odds of smoking cessation (OR: 2.44, 95% CI: 1.48‐4.05). Patients treated with short‐acting NRT alone were less likely to report smoking cessation (OR: 0.66, 95% CI: 0.51‐0.85). Patients treated with buproprion or combination therapy were no more likely to report cessation (Table 3). When sensitivity analysis was performed using propensity score matching with additional variables included, there were no significant differences in the observed associations.
Our overall mortality rate observed at 1 year was 19.5%, nearly identical to previous cohort studies of patients admitted for COPD.[19, 20] Because of the possibility of behavioral differences on the part of patients and physicians regarding subjects with a limited life expectancy, we performed sensitivity analysis limited to the patients who survived to at least 12 months of follow‐up. One hundred six patients (7.9%) died during 6 to 12 months of follow‐up. There was no change in inference for our primary exposure (OR: 0.95, 95% CI: 0.79‐1.14) or any of the secondary exposures examined.
DISCUSSION
In this observational study, postdischarge pharmacotherapy within 90 days of discharge was provided to a minority of high‐risk smokers admitted for COPD, and was not associated with smoking cessation at 6 to 12 months. In comparison to nicotine patch alone, varenicline was associated with a higher odds of cessation, with decreased odds of cessation among patients treated with short‐acting NRT alone. The overall quit rate was significant at 19.8%, and is consistent with annual quit rates observed among patients with COPD in other settings,[21, 22] but is far lower than quit rates observed after admission for acute myocardial infarction.[23, 24, 25] Although the proportion of patients treated at the time of discharge or within 90 days was low, our findings are in keeping with previous studies, which demonstrated low rates of pharmacologic treatment following hospitalization, averaging 14%.[26] Treatment for tobacco use is likely underutilized for this group of high‐risk smokers. However, a significant proportion of patients who were prescribed medications in the postdischarge period did not have medications filled. This likely reflects both the rapid changes in motivation that characterize quit attempts,[27] as well as efforts on the part of primary care physicians to make these medications available to facilitate future quit attempts.
There are several possible explanations for the findings in our study. Pharmaceutical therapies were not provided at random. The provision of pharmacotherapy and the ultimate success of a quit attempt reflects a complex interaction of patient beliefs concerning medications, level of addiction and motivation, physician behavior and knowledge, and organizational factors. Organizational factors such as the structure of electronic discharge orders and the availability of decision support materials may influence a physician's likelihood of prescribing medications, the choice of medication prescribed, and therefore the adequacy of control of withdrawal symptoms. NRT is often under dosed to control ongoing symptoms,[28] and needs to be adjusted until relief is obtained, providing an additional barrier to effectiveness during the transition out of the hospital. Because most smokers with COPD are highly addicted to nicotine,[29] high‐dose NRT, combination therapy, or varenicline would be necessary to adequately control symptoms.[30] However, a significant minority of patients received short‐acting NRT alone.
Despite a high observed efficacy in recent trials,[31, 32] few subjects in our study received varenicline. This may be related to both secular trends and administrative barriers to the use of varenicline in the VA system. Use of this medication was limited among patients with psychiatric disorders due to safety concerns. These concerns have since been largely disproven, but may have limited access to this medication.[33, 34, 35] Although we adjusted for a history of mental illness, patients who received varenicline may have had more past quit attempts and less active mental illness, which may be associated with improved cessation rates. Despite the high prevalence of mental illness we observed, this is typical of the population of smokers, with studies indicating nearly one‐third of smokers overall suffer from mental illness.[36]
Although the majority of our patients received a brief, nurse‐based counseling intervention, there is considerable concern about the overall effectiveness of a single predischarge interaction to produce sustained smoking cessation among highly addicted smokers.[37, 38, 39, 40] The Joint Commission has recently restructured the requirements for smoking cessation treatment for hospitalized patients, and it is now up to hospitals to implement treatment mechanisms that not only meet the national requirements, but also provide a meaningful clinical effect. Though the optimum treatment for hospitalized smokers with COPD is unknown, previous positive studies of smoking cessation among hospitalized patients underscore the need for a higher‐intensity counseling intervention that begins during hospitalization and continues after discharge.[13, 41] Cessation counseling services including tobacco cessation groups and quit lines are available through the VA; however, the use of these services is typically low and requires the patient to enroll independently after discharge, an additional barrier. The lack of association between medications and smoking cessation found in our study could reflect poor effectiveness of medications in the absence of a systematic counseling intervention. Alternatively, the association may be explained that patients who were more highly addicted and perhaps less motivated to quit received tobacco cessation medications more often, but were also less likely to stop tobacco use, a form of indication bias.
Our study has several limitations. We do not have addiction or motivation levels for a cessation attempt, a potential unmeasured confounder. Although predictive of quit attempts, motivation factors are less predictive of cessation maintenance, and may therefore have an unclear effect on our outcome.[42, 43] Our outcome was gathered as part of routine clinical care, which may have introduced bias if patients over‐reported cessation because of social desirability. In healthcare settings, however, this form of assessing smoking status is generally valid.[44] Exposure to counseling or medications obtained outside of the VA system would not have been captured. Given the financial incentive, we believe it is unlikely that many patients admitted to a VA medical center obtained medications elsewhere.[45] The diagnosis of COPD was made administratively. However, all subjects were admitted for an exacerbation, which is associated with more severe COPD by Global Initiative for Obstructive Lung Disease (GOLD) stage.[46] Patients with more severe COPD are often excluded from studies of smoking cessation due to concerns of high dropout and lower prevalence of smoking among patients with GOLD stage IV disease,[47, 48] making this a strength of our study. Subjects who died may have quit only in extremis, or failed to document their quit attempts. However, our sensitivity analysis limited to survivors did not change the study results. There may have been some misclassification in the use of buproprion, which may also be prescribed as an antidepressant. Finally, although representative of the veterans who seek care within the VISN‐20, our patients were primarily white and male, limiting the ability to generalize outside of this group.
Our study had several strengths. We examined a large cohort of patients admitted to a complete care organization, including patients from a diverse group of VA settings comprising academically and nonacademically affiliated centers. We performed an unbiased collection of patients, including all smokers discharged for COPD. We had access to excellent completeness of medications prescribed and filled as collected within the VA system, enabling us to observe medications dispensed and prescribed at several time points. We also had near complete ascertainment of outcomes including by using natural language processing with manual confirmation of smoking status.
In summary, we found that provision of medications to treat ongoing tobacco use among patients discharged for COPD was low, and receipt of medications was not associated with a reduction in smoking tobacco at 6 to 12 months postdischarge. However, among those treated, varenicline appears to be superior to the nicotine patch, with short‐acting nicotine replacement potentially less effective, a biologically plausible finding. The motivation to quit smoking changes rapidly over time. Providing these medications in the hospital and during the time after discharge is a potential means to improve quit rates, but medications need to be paired with counseling to be most effective. Collectively, these data suggest that systems‐based interventions are needed to increase the availability of intense counseling and the use of tailored pharmacotherapy to these patients.
Acknowledgements
The authors acknowledge Mr. Robert Plumley, who performed the data extraction and natural language processing necessary to complete this project.
Disclosures: Dr. Melzer conceived of the research question and performed background reading, analyses, primary drafting, and final revision of the manuscript. Drs. Collins and Feemster participated in finalizing the research question, developing the cohort, performing data collection, and revising the manuscript. Dr. Au provided the database for analysis, helped finalize the research question, and assisted in interpretation of the data and revision of the manuscript. Dr. Au has personally reviewed the data, understands the statistical methods employed, and confirms an understanding of this analysis, that the methods are clearly described, and that they are a fair way to report the results. This material is based upon work supported in part by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, who provided access to data, office space, and programming and data management. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs, the United States government, or the National Institutes of Health. Dr. Au is an unpaid research consultant for Analysis Group. None of the other authors have any conflicts of interest to disclose. Dr. Melzer is supported by an institutional F‐32 (HL007287‐36) through the University of Washington Department of Pulmonary and Critical Care. Dr. Feemster is supported by an National Institutes of Health, National Heart, Lung, and Blood Institute, K23 Mentored Career Development Award (HL111116). Partial support of this project was provided by Gilead Sciences with research funding to the Seattle Institute for Biomedical and Clinical Research. Additional support was received through the VA Health Services Research and Development. A portion of this work was presented in abstract form at the American Thoracic Society International Meeting, May 2015, in Denver, Colorado.
Up to one‐third of the 700,000 patients admitted annually for an exacerbation of chronic obstructive pulmonary disease (COPD) continue to smoke tobacco.[1, 2] Smokers with COPD are at high risk for poor health outcomes directly attributable to tobacco‐related conditions, including progression of lung disease and cardiovascular diseases.[3, 4, 5] Treatment for tobacco addiction is the most essential intervention for these patients.
Hospital admission has been suggested as an opportune time for the initiation of smoking cessation.[6] Hospitalized patients are already in a smoke‐free environment, and have access to physicians, nurses, and pharmacists who can prescribe medications for support.[7] Documenting smoking status and offering smoking cessation treatment during and after discharge are quality metrics required by the Joint Commission, and recommended by the National Quality Forum.[8, 9] Hospitals have made significant efforts to comply with these requirements.[10]
Limited data exist regarding the effectiveness and utilization of treatments known to reduce cigarette use among COPD patients in nontrial environments. Prescribing patterns of medications for smoking cessation in the real world following admission for COPD are not well studied. We sought to examine the utilization of inpatient brief tobacco counseling and postdischarge pharmacotherapy following discharge for exacerbation of COPD, as well as to (1) examine the association of postdischarge pharmacotherapy with self‐reported smoking cessation at 6 to 12 months and (2) assess differences in effectiveness between cessation medications prescribed.
METHODS
We conducted a cohort study of current smokers discharged following a COPD exacerbation within the Veterans Affairs (VA) Veterans Integrated Service Network (VISN)‐20. This study was approved by the VA Puget Sound Health Care System Institutional Review Board (#00461).
We utilized clinical information from the VISN‐20 data warehouse that collects data using the VA electronic medical record, including demographics, prescription medications, hospital admissions, hospital and outpatient diagnoses, and dates of death, and is commonly used for research. In addition, we utilized health factors, coded electronic entries describing patient health behaviors that are entered by nursing staff at the time of a patient encounter, and the text of chart notes that were available for electronic query.
Study Cohort
We identified all smokers aged 40 years hospitalized between 2005 and 2012 with either a primary discharge diagnosis of COPD based on International Classification of Diseases, 9th Revision codes (491, 492, 493.2, and 496) or an admission diagnosis from the text of the admit notes indicating an exacerbation of COPD. We limited to patients aged 40 years to improve the specificity of the diagnosis of COPD, and we selected the first hospitalization that met inclusion criteria. We excluded subjects who died within 6 months of discharge (Figure 1).

To establish tobacco status, we built on previously developed and validated methodology,[11] and performed truncated natural language processing using phrases in the medical record that reflected patients' tobacco status, querying all notes from the day of admission up to 6 months prior. If no tobacco status was indicated in the notes, we identified the status encoded by the most recent health factor. We manually examined the results of the natural language processing and the determination of health factors to confirm the tobacco status. Manual review was undertaken by 1 of 2 trained study personnel. In the case of an ambiguous or contradictory status, an additional team member reviewed the information to attempt to make a determination. If no determination could be made, the record was coded to unknown. This method allowed us to identify a baseline status for all but 77 of the 3580 patients admitted for COPD.
Outcome and Exposure
The outcome was tobacco status at 6 to 12 months after discharge. Using the same methods developed for identification of baseline smoking status, we obtained smoking status for each subject up to 12 months postdischarge. If multiple notes and encounters were available indicating smoking status, we chose the latest within 12 months of discharge. Subjects lacking a follow‐up status were presumed to be smokers, a common assumption.[12] The 6 to 12month time horizon was chosen as these are the most common time points used to examine a sustained change in tobacco status,[13, 14, 15] and allowed for adequate time for treatment and clinical follow‐up.
Our primary exposure was any smoking cessation medication or combination dispensed within 90 days of discharge. This time horizon for treatment was chosen due to recent studies indicating this is a meaningful period for postdischarge treatment.[14] We assessed the use of nicotine patch, short‐acting nicotine, varenicline, buproprion, or any combination. Accurate data on the prescription and dispensing of these medications were available from the VA pharmacy record. Secondary exposure was the choice of medication dispensed among treated patients. We assessed additional exposures including receipt of cessation medications within 48 hours of discharge, treatment in the year prior to admission, and predischarge counseling. Predischarge counseling was determined as having occurred if nurses documented that they completed a discharge process focused on smoking cessation. Referral to a quit line is part of this process; however, due to the confidential nature of these interactions, generally low use of this service, and lack of linkage to the VA electronic health record, it was not considered in the analysis.
Confounders
Potential confounders were assessed in the year prior to admission up to discharge from the index hospitalization, with the use of mechanical or noninvasive ventilation assessed during the hospitalization. We adjusted for variables chosen a priori for their known or expected association with smoking cessation including demographics, Charlson Comorbidity Index,[16] markers of COPD severity (need for invasive or noninvasive mechanical ventilation during index hospitalization, use of oral steroids, long‐acting inhaled bronchodilators, and/or canister count of short‐acting bronchodilators in the year prior to admission), history of drug or alcohol abuse, homelessness, depression, psychosis, post‐traumatic stress disorder, lung cancer, coronary artery disease, and under‐ or overweight status. Nurse‐based counseling prior to discharge was included as a variable for adjustment for our primary and secondary predictors to assess the influence of pharmacotherapy specifically. Due to 3.1% missingness in body mass index, multiple imputation with chained equations was used to impute missing values, with 10 imputations performed. The imputation was performed using a linear regression model containing all variables included in the final model, grouped by facility.
Statistical Analysis
All analyses were performed using Stata 13 (StataCorp, College Station, TX) software. 2 tests and t tests were used to assess for unadjusted bivariate associations. Using the pooled imputed datasets, we performed multivariable logistic regression to compare odds ratios for a change in smoking status, adjusting the estimates of coefficients and standard errors by applying combination rules to the 10 completed‐data estimates.[17] We analyzed our primary and secondary predictors, adjusting for the confounders chosen a priori, clustered by facility with robust standard errors. An level of <0.05 was considered significant.
Sensitivity Analysis
We assumed that subjects missing a follow‐up status were ongoing smokers. However, given the high mortality rate observed in our cohort, we were concerned that some subjects lacking a follow‐up status may have died, missing the opportunity to have a quit attempt recorded. Therefore, we performed sensitivity analysis excluding subjects who died during the 6 to 12 months of follow‐up, repeating the imputation and analysis as described above. In addition, due to concern for indication bias in the choice of medication used for our secondary analysis, we performed propensity score matching for treatment with each medication in comparison to nicotine patch, using the teffects command, with 3 nearest neighbor matches. We included additional comorbidities in the propensity score matching.[18]
RESULTS
Among these 1334 subjects at 6 to 12 months of follow‐up, 63.7% reported ongoing smoking, 19.8% of patients reported quitting, and 17.5% of patients had no reported status and were presumed to be smokers. Four hundred fifty (33.7%) patients were dispensed a smoking cessation medication within 90 days of discharge. Patients who were dispensed medications were younger and more likely to be female. Nearly all patients who received medications also received documented predischarge counseling (94.6%), as did the majority of patients who did not receive medications (83.8%) (Table 1).
Variable | No Medication Dispensed, n = 884, No. (%) | Medication Dispensed, n = 450, No. (%) | P Value |
---|---|---|---|
| |||
Not smoking at 612 months | 179 (20.2) | 85 (18.9) | 0.56 |
Brief counseling at discharge | 742 (83.8%) | 424 (94.6%) | <0.001* |
Age | 64.49.13 (4094) | 61.07.97 (4185) | <0.001* |
Male | 852 (96.3) | 423 (94.0) | 0.05* |
Race | 0.12 | ||
White | 744 (84.2) | 377 (83.8) | |
Black | 41 (4.6) | 12 (2.7) | |
Other/unknown | 99 (11.1) | 61 (13.6) | |
BMI | 28.09.5 (12.669.0) | 28.910.8 (14.860.0) | 0.15 |
Homeless | 68 (7.7) | 36 (8.0) | 0.84 |
Psychiatric conditions/substance abuse | |||
History of alcohol abuse | 205 (23.2) | 106 (23.6) | 0.88 |
History of drug abuse | 110 (12.4) | 72 (16.0) | 0.07 |
Depression | 39 (4.4) | 29 (6.4) | 0.11 |
Psychosis | 201 (22.7) | 88 (19.6) | 0.18 |
PTSD | 146 (16.5) | 88 (19.6) | 0.17 |
Comorbidities | |||
Coronary artery disease | 254 (28.7) | 110 (24.4) | 0.10 |
Cerebrovascular accident | 80 (9.0) | 28 (2.2) | 0.86 |
Obstructive sleep apnea | 42 (4.8) | 23 (5.1) | 0.77 |
Lung cancer | 21 (2.4) | 10 (2.2) | 0.86 |
Charlson Comorbidity Index | 2.251.93 (014) | 2.111.76 (010) | 0.49 |
Markers of COPD severity | |||
Mechanical ventilation during admission | 28 (3.2) | 14 (3.1) | 0.96 |
NIPPV during admission | 97 (11.0) | 51 (11.3) | 0.84 |
Oral steroids prescribed in the past year | 334 (37.8) | 154 (34.2) | 0.20 |
Treatment with tiotropium in the past year | 97 (11.0) | 55 (12.2) | 0.50 |
Treatment with LABA in the past year | 264 (29.9) | 155 (34.4) | 0.09 |
Canisters of SABA used in past year | 6.639.8, (084) | 7.469.63 (045) | 0.14 |
Canisters of ipratropium used in past year | 6.458.81 (054) | 6.869.08 (064) | 0.42 |
Died during 612 months of follow‐up | 78 (8.8) | 28 (6.6) | 0.10 |
Of patients dispensed a study medication, 246 (18.4% of patients, 54.7% of all medications dispensed) were dispensed medications within 48 hours of discharge (Table 2). Of the patients dispensed medication, the majority received nicotine patches alone (Table 3), and 18.9% of patients received combination therapy, with the majority receiving nicotine patch and short‐acting nicotine replacement therapy (NRT) or patch and buproprion. A significant number of patients were prescribed medications within 90 days of discharge, but did not have them dispensed within that timeframe (n = 224, 16.8%).
Medication Dispensed | No. (%) | % Quit (Unadjusted) | OR (95% CI) | P Value |
---|---|---|---|---|
| ||||
No medications dispensed | 884 (66.3) | 20.2 | Referent | |
Any medication from | ||||
Discharge to 90 days | 450 (33.7) | 18.9 | 0.88 (0.741.04) | 0.137 |
Within 48 hours of discharge | 246 (18.4) | 18.3 | 0.87 (0.661.14) | 0.317 |
Treated in the year prior to admission | 221 (16.6) | 19.6 | Referent | |
Treated in the year prior to admission + 090 days postdischarge | 152 (11.4) | 18.4 | 0.95 (0.791.13) | 0.534 |
No nurse‐provided counseling prior to discharge | 169 (12.7) | 20.5 | Referent | |
Nurse‐provided counseling prior to discharge | 1,165 (87.3) | 19.5 | 0.95 (0.661.36) | 0.774 |
Medication Dispensed | No. (%) | % Quit (Unadjusted) | OR (95% CI) | P Value |
---|---|---|---|---|
| ||||
Nicotine patch | 242 (53.8) | 18.6 | Referent | |
Monotherapy with | ||||
Varenicline | 36 (8.0) | 30.6 | 2.44 (1.484.05) | 0.001 |
Short‐acting NRT | 34 (7.6) | 11.8 | 0.66 (0.510.85) | 0.001 |
Buproprion | 55 (12.2) | 21.8 | 1.05 (0.671.62) | 0.843 |
Combination therapy | 85 (18.9) | 15.7 | 0.94 (0.711.24) | 0.645 |
Association of Treatment With Study Medications and Quitting Smoking
In adjusted analyses, the odds of quitting smoking at 6 to 12 months were not greater among patients who were dispensed a study medication within 90 days of discharge (odds ratio [OR]: 0.88, 95% confidence interval [CI]: 0.74‐1.04). We found no association between counseling provided at discharge and smoking cessation (OR: 0.95, 95% CI: 0.0.66‐1.), adjusted for the receipt of medications. There was no difference in quit rate between patients dispensed medication within 48 hours of discharge, or between patients treated in the year prior to admission and again postdischarge (Table 2).
We then assessed differences in effectiveness between specific medications among the 450 patients who were dispensed medications. Using nicotine patch alone as the referent group, patients treated with varenicline demonstrated greater odds of smoking cessation (OR: 2.44, 95% CI: 1.48‐4.05). Patients treated with short‐acting NRT alone were less likely to report smoking cessation (OR: 0.66, 95% CI: 0.51‐0.85). Patients treated with buproprion or combination therapy were no more likely to report cessation (Table 3). When sensitivity analysis was performed using propensity score matching with additional variables included, there were no significant differences in the observed associations.
Our overall mortality rate observed at 1 year was 19.5%, nearly identical to previous cohort studies of patients admitted for COPD.[19, 20] Because of the possibility of behavioral differences on the part of patients and physicians regarding subjects with a limited life expectancy, we performed sensitivity analysis limited to the patients who survived to at least 12 months of follow‐up. One hundred six patients (7.9%) died during 6 to 12 months of follow‐up. There was no change in inference for our primary exposure (OR: 0.95, 95% CI: 0.79‐1.14) or any of the secondary exposures examined.
DISCUSSION
In this observational study, postdischarge pharmacotherapy within 90 days of discharge was provided to a minority of high‐risk smokers admitted for COPD and was not associated with smoking cessation at 6 to 12 months. Compared with nicotine patch alone, varenicline was associated with higher odds of cessation, whereas short‐acting NRT alone was associated with lower odds of cessation. The overall quit rate of 19.8% is consistent with annual quit rates observed among patients with COPD in other settings,[21, 22] but far lower than quit rates observed after admission for acute myocardial infarction.[23, 24, 25] Although the proportion of patients treated at the time of discharge or within 90 days was low, our findings are in keeping with previous studies, which demonstrated low rates of pharmacologic treatment following hospitalization, averaging 14%.[26] Treatment for tobacco use is likely underutilized in this group of high‐risk smokers. However, a substantial proportion of patients who were prescribed medications in the postdischarge period did not have the medications filled. This likely reflects both the rapid changes in motivation that characterize quit attempts[27] and efforts by primary care physicians to make these medications available to facilitate future quit attempts.
There are several possible explanations for our findings. Pharmaceutical therapies were not provided at random. The provision of pharmacotherapy and the ultimate success of a quit attempt reflect a complex interaction of patient beliefs concerning medications, level of addiction and motivation, physician behavior and knowledge, and organizational factors. Organizational factors such as the structure of electronic discharge orders and the availability of decision support materials may influence a physician's likelihood of prescribing medications and the choice of medication prescribed, and therefore the adequacy of control of withdrawal symptoms. NRT is often underdosed for ongoing symptoms[28] and needs to be adjusted until relief is obtained, an additional barrier to effectiveness during the transition out of the hospital. Because most smokers with COPD are highly addicted to nicotine,[29] high‐dose NRT, combination therapy, or varenicline would be necessary to adequately control symptoms.[30] However, a substantial minority of patients received short‐acting NRT alone.
Despite high observed efficacy in recent trials,[31, 32] few subjects in our study received varenicline. This may reflect both secular trends and administrative barriers to the use of varenicline in the VA system: its use was limited among patients with psychiatric disorders because of safety concerns that have since been largely disproven but may have restricted access.[33, 34, 35] Although we adjusted for a history of mental illness, patients who received varenicline may have had more past quit attempts and less active mental illness, both of which may be associated with improved cessation rates. The high prevalence of mental illness we observed is typical of smokers overall; studies indicate that nearly one‐third of smokers suffer from mental illness.[36]
Although the majority of our patients received a brief, nurse‐based counseling intervention, there is considerable concern about the ability of a single predischarge interaction to produce sustained smoking cessation among highly addicted smokers.[37, 38, 39, 40] The Joint Commission has recently restructured the requirements for smoking cessation treatment for hospitalized patients, and it is now up to hospitals to implement treatment mechanisms that not only meet the national requirements but also provide a meaningful clinical effect. Though the optimum treatment for hospitalized smokers with COPD is unknown, previous positive studies of smoking cessation among hospitalized patients underscore the need for a higher‐intensity counseling intervention that begins during hospitalization and continues after discharge.[13, 41] Cessation counseling services, including tobacco cessation groups and quit lines, are available through the VA; however, use of these services is typically low, and they require the patient to enroll independently after discharge, an additional barrier. The lack of association between medications and smoking cessation found in our study could reflect poor effectiveness of medications in the absence of a systematic counseling intervention. Alternatively, it may reflect indication bias: patients who were more highly addicted, and perhaps less motivated to quit, may have received tobacco cessation medications more often while also being less likely to stop tobacco use.
Our study has several limitations. We did not have data on level of addiction or motivation to quit, a potential unmeasured confounder. Although predictive of quit attempts, motivational factors are less predictive of cessation maintenance and may therefore have an unclear effect on our outcome.[42, 43] Our outcome was gathered as part of routine clinical care, which may have introduced bias if patients over‐reported cessation because of social desirability; in healthcare settings, however, this form of assessing smoking status is generally valid.[44] Exposure to counseling or medications obtained outside of the VA system would not have been captured, but given the financial incentive, we believe it is unlikely that many patients admitted to a VA medical center obtained medications elsewhere.[45] The diagnosis of COPD was made administratively; however, all subjects were admitted for an exacerbation, which is associated with more severe COPD by Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage.[46] Patients with more severe COPD are often excluded from studies of smoking cessation because of concerns about high dropout and the lower prevalence of smoking among patients with GOLD stage IV disease,[47, 48] making their inclusion a strength of our study. Subjects who died may have quit only in extremis, or their quit attempts may not have been documented; however, our sensitivity analysis limited to survivors did not change the study results. There may have been some misclassification in the use of bupropion, which may also be prescribed as an antidepressant. Finally, although representative of the veterans who seek care within VISN‐20, our patients were primarily white and male, limiting generalizability beyond this group.
Our study also had several strengths. We examined a large cohort of patients admitted to an integrated care organization, including patients from a diverse group of VA settings comprising academically and nonacademically affiliated centers. We assembled the cohort without selection, including all smokers discharged for COPD. We had access to near‐complete records of medications prescribed and filled within the VA system, enabling us to observe medications dispensed and prescribed at several time points. We also had near‐complete ascertainment of outcomes, using natural language processing with manual confirmation of smoking status.
In summary, we found that provision of medications to treat ongoing tobacco use among patients discharged for COPD was low, and receipt of medications was not associated with a reduction in smoking tobacco at 6 to 12 months postdischarge. However, among those treated, varenicline appears to be superior to the nicotine patch, with short‐acting nicotine replacement potentially less effective, a biologically plausible finding. The motivation to quit smoking changes rapidly over time. Providing these medications in the hospital and during the time after discharge is a potential means to improve quit rates, but medications need to be paired with counseling to be most effective. Collectively, these data suggest that systems‐based interventions are needed to increase the availability of intense counseling and the use of tailored pharmacotherapy to these patients.
Acknowledgements
The authors acknowledge Mr. Robert Plumley, who performed the data extraction and natural language processing necessary to complete this project.
Disclosures: Dr. Melzer conceived of the research question and performed background reading, analyses, primary drafting, and final revision of the manuscript. Drs. Collins and Feemster participated in finalizing the research question, developing the cohort, performing data collection, and revising the manuscript. Dr. Au provided the database for analysis, helped finalize the research question, and assisted in interpretation of the data and revision of the manuscript. Dr. Au has personally reviewed the data, understands the statistical methods employed, and confirms an understanding of this analysis, that the methods are clearly described, and that they are a fair way to report the results. This material is based upon work supported in part by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, which provided access to data, office space, and programming and data management. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs, the United States government, or the National Institutes of Health. Dr. Au is an unpaid research consultant for Analysis Group. None of the other authors have any conflicts of interest to disclose. Dr. Melzer is supported by an institutional F‐32 (HL007287‐36) through the University of Washington Department of Pulmonary and Critical Care. Dr. Feemster is supported by a National Institutes of Health, National Heart, Lung, and Blood Institute, K23 Mentored Career Development Award (HL111116). Partial support of this project was provided by Gilead Sciences with research funding to the Seattle Institute for Biomedical and Clinical Research. Additional support was received through the VA Health Services Research and Development. A portion of this work was presented in abstract form at the American Thoracic Society International Meeting, May 2015, in Denver, Colorado.
1. Patients hospitalized for COPD have a high prevalence of modifiable risk factors for exacerbation (EFRAM study). Eur Respir J. 2000;16(6):1037–1042.
2. Analysis of hospitalizations for COPD exacerbation: opportunities for improving care. COPD. 2010;7(2):85–92.
3. Mortality in COPD: role of comorbidities. Eur Respir J. 2006;28(6):1245–1257.
4. Cardiovascular comorbidity in COPD: systematic literature review. Chest. 2013;144(4):1163–1178.
5. Engaging patients and clinicians in treating tobacco addiction. JAMA Intern Med. 2014;174(8):1299–1300.
6. Smokers who are hospitalized: a window of opportunity for cessation interventions. Prev Med. 1992;21(2):262–269.
7. Interventions for smoking cessation in hospitalised patients. Cochrane Database Syst Rev. 2012;5:CD001837.
8. Specifications Manual for National Hospital Inpatient Quality Measures. Available at: http://www.jointcommission.org/specifications_manual_for_national_hospital_inpatient_quality_measures.aspx. Accessed January 15, 2015.
9. Treating Tobacco Use and Dependence. April 2013. Agency for Healthcare Research and Quality, Rockville, MD. Available at: http://www.ahrq.gov/professionals/clinicians‐providers/guidelines‐recommendations/tobacco/clinicians/update/index.html. Accessed January 15, 2015.
10. Smoking cessation advice rates in US hospitals. Arch Intern Med. 2011;171(18):1682–1684.
11. Validating smoking data from the Veteran's Affairs Health Factors dataset, an electronic data source. Nicotine Tob Res. 2011;13(12):1233–1239.
12. Do u smoke after txt? Results of a randomised trial of smoking cessation using mobile phone text messaging. Tob Control. 2005;14(4):255–261.
13. The effectiveness of smoking cessation groups offered to hospitalised patients with symptoms of exacerbations of chronic obstructive pulmonary disease (COPD). Clin Respir J. 2008;2(3):158–165.
14. Sustained care intervention and postdischarge smoking cessation among hospitalized adults: a randomized clinical trial. JAMA. 2014;312(7):719–728.
15. Bupropion for smokers hospitalized with acute cardiovascular disease. Am J Med. 2006;119(12):1080–1087.
16. Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619.
17. Multiple Imputation for Nonresponse in Surveys. New York, NY: Wiley; 1987.
18. Methods for constructing and assessing propensity scores. Health Serv Res. 2014;49(5):1701–1720.
19. Mortality and mortality‐related factors after hospitalization for acute exacerbation of COPD. Chest. 2003;124(2):459–467.
20. Mortality after hospitalization for COPD. Chest. 2002;121(5):1441–1448.
21. State quitlines and cessation patterns among adults with selected chronic diseases in 15 states, 2005–2008. Prev Chronic Dis. 2012;9(10):120105.
22. The effects of counseling on smoking cessation among patients hospitalized with chronic obstructive pulmonary disease: a randomized clinical trial. Int J Addict. 1991;26(1):107–119.
23. Predictors of smoking cessation after a myocardial infarction: the role of institutional smoking cessation programs in improving success. Arch Intern Med. 2008;168(18):1961–1967.
24. Post‐myocardial infarction smoking cessation counseling: associations with immediate and late mortality in older Medicare patients. Am J Med. 2005;118(3):269–275.
25. Smoking cessation after acute myocardial infarction: effects of a nurse‐managed intervention. Ann Intern Med. 1990;113(2):118–123.
26. Smoking care provision in hospitals: a review of prevalence. Nicotine Tob Res. 2008;10(5):757–774.
27. Intentions to quit smoking change over short periods of time. Addict Behav. 2005;30(4):653–662.
28. Association of amount and duration of NRT use in smokers with cigarette consumption and motivation to stop smoking: a national survey of smokers in England. Addict Behav. 2015;40:33–38.
29. Smoking prevalence, behaviours, and cessation among individuals with COPD or asthma. Respir Med. 2011;105(3):477–484.
30. American College of Chest Physicians. Tobacco Dependence Treatment ToolKit. 3rd ed. Available at: http://tobaccodependence.chestnet.org. Accessed January 29, 2015.
31. Effects of varenicline on smoking cessation in patients with mild to moderate COPD: a randomized controlled trial. Chest. 2011;139(3):591–599.
32. Varenicline versus transdermal nicotine patch for smoking cessation: results from a randomised open‐label trial. Thorax. 2008;63(8):717–724.
33. Psychiatric adverse events in randomized, double‐blind, placebo‐controlled clinical trials of varenicline. Drug Saf. 2010;33(4):289–301.
34. Studies linking smoking‐cessation drug with suicide risk spark concerns. JAMA. 2009;301(10):1007–1008.
35. A randomized, double‐blind, placebo‐controlled study evaluating the safety and efficacy of varenicline for smoking cessation in patients with schizophrenia or schizoaffective disorder. J Clin Psychiatry. 2012;73(5):654–660.
36. Smoking and mental illness: results from population surveys in Australia and the United States. BMC Public Health. 2009;9(1):285.
37. Implementation and effectiveness of a brief smoking‐cessation intervention for hospital patients. Med Care. 2000;38(5):451–459.
38. Clinical trial comparing nicotine replacement therapy (NRT) plus brief counselling, brief counselling alone, and minimal intervention on smoking cessation in hospital inpatients. Thorax. 2003;58(6):484–488.
39. Dissociation between hospital performance of the smoking cessation counseling quality metric and cessation outcomes after myocardial infarction. Arch Intern Med. 2008;168(19):2111–2117.
40. Smoking cessation in hospitalized patients: results of a randomized trial. Arch Intern Med. 1997;157(4):409–415.
41. Intensive smoking cessation counseling versus minimal counseling among hospitalized smokers treated with transdermal nicotine replacement: a randomized trial. Am J Med. 2003;114(7):555–562.
42. Motivational factors predict quit attempts but not maintenance of smoking cessation: findings from the International Tobacco Control Four Country project. Nicotine Tob Res. 2010;12(suppl):S4–S11.
43. Predictors of attempts to stop smoking and their success in adult general population samples: a systematic review. Addiction. 2011;106(12):2110–2121.
44. Validity of self‐reported smoking status among participants in a lung cancer screening trial. Cancer Epidemiol Biomarkers Prev. 2006;15(10):1825–1828.
45. VHA enrollees' health care coverage and use of care. Med Care Res Rev. 2003;60(2):253–267.
46. Association between lung function and exacerbation frequency in patients with COPD. Int J Chron Obstruct Pulmon Dis. 2010;5:435–444.
47. Smoking cessation in patients with chronic obstructive pulmonary disease: a double‐blind, placebo‐controlled, randomised trial. Lancet. 2001;357(9268):1571–1575.
48. Nurse‐conducted smoking cessation in patients with COPD using nicotine sublingual tablets and behavioral support. Chest. 2006;130(2):334–342.
© 2015 Society of Hospital Medicine
PICC and Venous Catheter Appropriateness
Vascular access devices (VADs), including peripherally inserted central venous catheters (PICCs) and traditional central venous catheters (CVCs), remain a cornerstone for the delivery of necessary therapy. VADs are used routinely to treat inpatients and, increasingly, outpatients. PICCs possess characteristics that are often favorable in a variety of clinical settings when compared to traditional CVCs. However, there is a paucity of evidence regarding the indication, selection, application, duration, and risks associated with these devices. PICCs are often used in situations when peripheral venous catheters (PIVs, including ultrasound‐guided peripheral intravenous catheters and midline catheters [midlines]) would meet patient needs and confer a lower risk of complications. An unmet need therefore exists to define indications and promote utilization that conforms to optimal use. The purpose of this article is to highlight for hospitalists the methodology and key recommendations recently published[1] regarding the appropriateness of PICCs relative to other vascular access devices.
BACKGROUND
Greater utilization of PICCs to meet a variety of clinical needs has recently emerged in hospital‐based medicine.[2, 3] This phenomenon is likely a function of favorable characteristics when comparing PICCs with traditional CVCs. PICCs are often favored because of safety with insertion in the arm, compatibility with inpatient and outpatient therapies, ease of protocolization for insertion by vascular access nursing services, patient tolerability, and cost savings.[4, 5, 6, 7, 8] Yet limitations of PICCs exist and complications including malpositioning, dislodgement, and luminal occlusion[9, 10, 11] affect patient safety and outcomes. Most notably, PICCs are strongly associated with risk for thrombosis and infection, complications that are most frequent in hospitalized and critically ill patients.[12, 13, 14, 15, 16]
Vascular access devices and particularly PICCs pose a substantial risk for thrombosis.[16, 17, 18, 19, 20] PICCs represent the greatest risk factor for upper extremity deep vein thrombosis (DVT), and in one study, PICC‐associated DVT risk was double that with traditional CVCs.[17] Risk factors for the development of PICC‐associated DVT include ipsilateral paresis,[21] infection,[22] PICC diameter,[19, 20] and prolonged surgery (procedure duration >1 hour) with a PICC in place.[23] Recently, PICCs placed in the upper extremity have been described as a possible risk factor for lower extremity venous thrombosis as well.[24, 25]
Infection complicating CVCs is well described,[12, 15] and guidelines for the prevention of catheter‐associated bloodstream infections exist.[26, 27] However, the magnitude of the infection risk associated with PICCs compared with traditional CVCs remains uncertain. Some reports suggest a decreased risk of infection with the use of PICCs[28]; others suggest a similar risk.[29] Existing guidelines, however, do not recommend substituting PICCs for CVCs as a technique to reduce infection, especially in general medical patients.[30]
Given the heterogeneity of patients and clinical situations in which PICCs are used, it is not surprising that variability in the clinical use of PICCs and inappropriate PICC utilization have been described.[31, 32] Simple awareness of the medical devices in place is central to optimizing care; of particular relevance to hospitalists, a recent study found that 1 in 5 physicians were unaware that a CVC was present in their patient.[33] Indeed, emphasis has been placed on optimizing the use of PICC lines nationally through the Choosing Wisely initiative.[34, 35]
A panel of experts was convened at the University of Michigan in an effort to further clarify the appropriate use of VADs. Panelists engaged in a RAND Corporation/University of California Los Angeles (RAND/UCLA) Appropriateness Method review[36] to provide guidance regarding VAD use. The RAND/UCLA methodology is a validated way to assess the appropriateness of medical and surgical resource utilization, and details of this methodology are published elsewhere.[1] In brief, each panelist was provided a series of clinical scenarios associated with the use of central venous catheters, purposefully including areas of consensus, controversy, and ambiguity. Using a standardized method for rating appropriateness, whereby median ratings at opposite ends of a 1 to 9 scale indicated preference for one device over another (for example, 7 to 9 reflected appropriate and 1 to 3 reflected inappropriate), the methodology classified consensus results into three levels of appropriateness: appropriate when the panel median is between 7 and 9 without disagreement, uncertain/neutral when the panel median is between 4 and 6 or disagreement exists regardless of the median, and inappropriate when the panel median is between 1 and 3 without disagreement.
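The classification rule described above can be summarized in a short sketch. This is illustrative only; the formal RAND/UCLA definition of panel disagreement depends on the distribution of ratings and is detailed in the methodology manual,[36] so it is treated here simply as an input flag, and the function name is an assumption for this example.

```python
from statistics import median

def classify_appropriateness(ratings, disagreement):
    """Map a panel's 1-9 ratings to the three appropriateness levels described above.

    `disagreement` is supplied by the caller; the formal RAND/UCLA definition of
    disagreement is not reproduced in this article, so it is assumed to be
    determined elsewhere.
    """
    m = median(ratings)
    if disagreement:
        return "uncertain/neutral"      # disagreement overrides the median
    if 7 <= m <= 9:
        return "appropriate"
    if 1 <= m <= 3:
        return "inappropriate"
    return "uncertain/neutral"          # median between 4 and 6

# Example: 11 panelists cluster at the high end with no disagreement
print(classify_appropriateness([7, 8, 9, 8, 7, 9, 8, 7, 8, 9, 7], disagreement=False))
```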
RESULTS
Comprehensive results regarding appropriateness ratings are reported elsewhere.[1] Results especially key to hospital‐based practitioners are summarized below. Table 1 highlights common scenarios when PICC placement is considered appropriate and inappropriate.
Table 1. Common Scenarios in Which PICC Placement Is Considered Appropriate or Inappropriate

A. Appropriate indications for PICC use
- Delivery of peripherally compatible infusates when the proposed duration is 6 or more days
- Delivery of nonperipherally compatible infusates (eg, irritants/vesicants), regardless of proposed duration of use
- Delivery of cyclical or episodic chemotherapy that can be administered through a peripheral vein in patients with active cancer, provided the proposed duration of such treatment is 3 or more months
- Invasive hemodynamic monitoring or necessary central venous access in a critically ill patient, provided the proposed duration is 15 or more days
- Frequent phlebotomy (every 8 hours) in a hospitalized patient, provided the proposed duration is 6 or more days
- Intermittent infusions or infrequent phlebotomy in patients with poor/difficult peripheral venous access, provided that the proposed duration is 6 or more days
- Infusions or palliative treatment during end‐of‐life care
- Delivery of peripherally compatible infusates for patients residing in skilled nursing facilities or transitioning from hospital to home, provided that the proposed duration is 15 or more days

B. Inappropriate indications for PICC use
- Placement for any indication other than infusion of nonperipherally compatible infusates (eg, irritants/vesicants) when the proposed duration is 5 or fewer days
- Placement in a patient with active cancer for cyclical chemotherapy that can be administered through a peripheral vein, when the proposed duration of treatment is 3 or fewer months and peripheral veins are available
- Placement in a patient with stage 3b or greater chronic kidney disease (estimated glomerular filtration rate ≤44 mL/min) or in a patient currently receiving renal replacement therapy via any modality
- Insertion for infrequent phlebotomy if the proposed duration is 5 or fewer days
- Patient or family request, in a patient who is not actively dying or on hospice, for comfort from daily lab draws
- Medical or nursing provider request in the absence of other appropriate criteria for PICC use
Appropriateness of PICCs in General Hospitalized Medical Patients
The appropriateness of PICCs compared with other VADs among hospitalized medical patients can be broadly characterized by the planned infusate and the anticipated duration of use. PICCs were the preferred VAD when the anticipated duration of infusion was 15 days or more, or for any duration if the infusate was an irritant/vesicant (such as parenteral nutrition or chemotherapy). PICCs were considered appropriate if the proposed duration of use was 6 to 14 days, though a midline or an ultrasound‐guided PIV was preferred for this timeframe. Tunneled catheters were considered appropriate only for the infusion of an irritant/vesicant when the anticipated duration was 15 days or more; similarly, implanted ports were rated as appropriate when an irritant/vesicant infusion was planned for 31 days or more. Both tunneled catheters and ports were rated as appropriate when episodic infusion over several months was necessary. Disagreement existed between the panelists regarding the appropriateness of PICC placement for frequent blood draws (3 or more phlebotomies per day) and for patients with difficult venous access when phlebotomy would be needed for 5 days or fewer; in these cases, an individualized, patient‐centered approach was recommended. PICC placement was considered appropriate in these situations if venous access was required for 6 days or more, but ultrasound‐guided and midline PIVs were again preferred to PICCs when the expected duration of use was <14 days.
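For orientation only, the duration and infusate thresholds described above for general medical patients can be encoded as a minimal sketch. This is not a clinical decision tool; it omits the panel's many caveats (difficult venous access, phlebotomy frequency, critical illness, CKD, cancer), and the function name and return strings are illustrative assumptions.

```python
def preferred_vad_general_medical(duration_days: int, irritant_or_vesicant: bool) -> str:
    """Simplified encoding of the duration/infusate preferences summarized above."""
    if irritant_or_vesicant:
        # Irritants/vesicants warrant central access for any duration; tunneled
        # catheters (>=15 days) and ports (>=31 days) are alternatives.
        return "PICC appropriate for any duration (tunneled catheter >=15 d, port >=31 d)"
    if duration_days <= 5:
        return "peripheral IV (PIV)"
    if duration_days <= 14:
        return "midline or ultrasound-guided PIV preferred; PICC appropriate"
    return "PICC preferred"

# Example: a 10-day course of a peripherally compatible antibiotic
print(preferred_vad_general_medical(10, irritant_or_vesicant=False))
```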
Appropriateness of PICCs in Patients With Chronic Kidney Disease
The appropriateness of PICC use among patients with chronic kidney disease (CKD) takes into consideration disease stage as defined by the Kidney Disease: Improving Global Outcomes workgroup.[37] Although panelist recommendations for patients with stage 1 to 3a CKD (estimated GFR ≥45 mL/min) did not differ from those noted above, for patients with stage 3b or greater CKD, insertion of devices into an arm vein was rated as inappropriate (valuing the preservation of peripheral and central veins for possible hemodialysis and the creation of arteriovenous fistulae or grafts). Among patients with stage 3b or greater CKD, PIV access in the dorsum of the hand was recommended for an expected duration of use of 5 days or fewer. In consultation with a nephrologist, the use of a tunneled small‐bore central catheter (4 French or 5 French) inserted into the jugular vein was rated as appropriate for patients with stage 3b or greater CKD requiring venous access for a longer duration.
Appropriateness of PICC Use in Patients With Cancer
The panelists acknowledged the heterogeneity of thrombosis risk based on cancer type; the recommendations assume cancer in the form of a solid tumor. Vascular access choice among cancer patients is complicated by the cyclic nature of the therapy frequently administered, the diversity of infusates (eg, nonirritant/nonvesicant versus irritant/vesicant), and uncertainty surrounding duration of therapy. To address this, the panelists chose a pragmatic approach considering the infusate (irritant/vesicant or not) and a dichotomized treatment duration (3 months or less versus longer). Among cancer patients requiring nonvesicant/nonirritant chemotherapy for 3 months or less, interval placement of PIVs was rated as appropriate, and disagreement existed among the panelists regarding the appropriateness of PICCs. If more than 3 months of chemotherapy was necessary, then PICCs or tunneled‐cuffed catheters were considered appropriate, and ports were rated as appropriate if the expected use was 6 months or more. Among cancer patients requiring vesicant/irritant chemotherapy, PICCs and tunneled‐cuffed catheters were rated as appropriate for all time intervals, whereas ports were rated as neutral for 3‐ to 6‐month durations of infusion and appropriate for durations greater than 6 months. When acceptable, PICCs were favored over tunneled‐cuffed catheters among cancer patients with coagulopathy (eg, severe thrombocytopenia, elevated international normalized ratios).
Appropriateness of PICCs in Patients With Critical Illness
Among critically ill patients, PIVs and midline catheters were rated as appropriate for infusions of 5 days or fewer and of 6 to 14 days, respectively, whereas PICCs were considered appropriate only when use for 15 days or more was anticipated. Although both CVCs and PICCs were rated as appropriate for invasive cardiovascular monitoring in hemodynamically unstable patients (for durations of 14 days or fewer and 15 days or more, respectively), CVCs were favored over PICCs among patients who are hemodynamically unstable or require vasopressors.
Appropriateness of PICC Use in Special Populations
Patients who require lifelong, often intermittent, intravenous access (eg, those with sickle cell anemia, short‐gut syndrome, or cystic fibrosis) necessitate distinct recommendations for venous access. In this population, recommendations were categorized by frequency of hospitalization. In patients hospitalized infrequently (<5 hospitalizations per year), midlines were preferred to PICCs when the hospitalization was expected to last 5 days or fewer; PICCs were rated as appropriate for a duration of use of 15 days or more. However, in patients who require frequent hospitalization (6 or more hospitalizations annually), tunneled‐cuffed catheters were rated as appropriate and preferred over PICCs when the expected duration of use was 15 days or more per session.
For long‐term residents of skilled nursing facilities, PICCs were rated as appropriate for an expected duration of use of 15 days or more but uncertain for a duration of 6 to 14 days (for which midlines were rated as appropriate). For venous access of 5 days or fewer, PIVs were rated as most appropriate.
How, When, by Whom, and Which PICCs Should Be Inserted
Societal recommendations[26] and guidelines[38] exist for routine placement and positioning of PICCs by dedicated nursing services.[39, 40] Panelists favored consultation with the specialists ordering vascular access devices (eg, infectious disease, nephrology, hematology, oncology) within the first few days of admission for optimal device selection and timing of insertion. For example, PICC placement within 2 to 3 days of hospital admission was rated as appropriate for patients requiring long‐term antimicrobial infusion (in the absence of bacteremia). Preferential PICC placement by interventional radiology was rated as appropriate if portable ultrasound does not identify a suitable target vein, the catheter fails to advance over the guidewire during a bedside attempt, or the patient requires sedation that is not appropriate for bedside placement. Interventional radiology insertion was also preferred in patients with bilateral mastectomy or altered chest anatomy, and in patients with permanent pacemakers or defibrillators if the contralateral arm is not amenable to insertion. PICCs are generally placed at the bedside (with radiographic confirmation of catheter position, or with electrocardiographic guidance when proficiency with this technique exists) or under direct visualization in the interventional radiology suite. As recommended elsewhere,[21, 26, 41] panelists rated placement of the PICC catheter tip in the lower one‐third of the superior vena cava, at the cavoatrial junction, or in the right atrium as appropriate. Nuanced recommendations for PICC adjustment under varying circumstances can be found in the parent document.[1] Single‐lumen devices, which are associated with fewer complications, were rated as the appropriate default in the absence of a documented rationale for a multilumen PICC.[19, 20, 42] The insertion of multilumen PICCs for separating blood draws from infusions or for ensuring that a backup lumen is available was rated as inappropriate. Consistent with recent recommendations,[43, 44] normal saline rather than heparin was rated as appropriate to maintain catheter patency. The advancement of a migrated PICC was rated as inappropriate under all circumstances.
CONCLUSIONS
In‐hospital healthcare providers are routinely confronted with dilemmas surrounding the choice of VAD. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC) initiative is a multidisciplinary effort to clarify decision‐making related to VAD use. The systematic literature review and RAND/UCLA appropriateness method applied by the MAGIC panelists identify areas of broad consensus surrounding the use of PICCs in relation to other VADs and highlight uncertainties in the best practices that guide clinical care. Appropriateness statements facilitate standardization of the use, care, and discontinuation of VADs. These recommendations may be important to healthcare quality officers and payers because they allow measurement of, and adherence to, standardized practice. In an era of electronic medical records and embedded clinical decision support, the recommendations may serve as a just‐in‐time resource for optimal VAD management, outcomes measurement, and patient follow‐up. In addition to directing clinical care, they may serve as a lattice for future randomized clinical trials to further clarify important areas of uncertainty surrounding VAD use.
Disclosures: Drs. Woller and Stevens disclose financial support paid to their institution of employment (Intermountain Medical Center) for conducting clinical research (with no financial support paid to either investigator). Dr. Woller discloses serving as an expert panelist for the Michigan Appropriateness Guide for Intravenous Catheters (MAGIC) initiative. The authors report no other conflicts of interest.
1. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC): results from a multispecialty panel using the RAND/UCLA appropriateness method. Ann Intern Med. 2015;163(6 suppl):S1–S40.
2. Peripherally inserted central venous catheters in the acute care setting: a safe alternative to high‐risk short‐term central venous catheters. Am J Infect Control. 2010;38(2):149–153.
3. Peripherally inserted central catheters may lower the incidence of catheter‐related blood stream infections in patients in surgical intensive care units. Surg Infect (Larchmt). 2011;12(4):279–282.
4. Developing an alternative workflow model for peripherally inserted central catheter placement. J Infus Nurs. 2012;35(1):34–42.
5. Nurse‐led PICC insertion: is it cost effective? Br J Nurs. 2013;22(19):S9–S15.
6. Facility wide benefits of radiology vascular access teams, part 2. Radiol Manage. 2010;32(3):39–43.
7. Facility wide benefits of radiology vascular access teams. Radiol Manage. 2010;32(1):28–32; quiz 3–4.
8. Advantages and disadvantages of peripherally inserted central venous catheters (PICC) compared to other central venous lines: a systematic review of the literature. Acta Oncol. 2013;52(5):886–892.
9. The problem with peripherally inserted central catheters. JAMA. 2012;308(15):1527–1528.
10. Malposition of peripherally inserted central catheter: experience from 3,012 patients with cancer. Exp Ther Med. 2013;6(4):891–893.
11. Complications associated with peripheral or central routes for central venous cannulation. Anaesthesia. 2012;67(1):65–71.
12. Bloodstream infection, venous thrombosis, and peripherally inserted central catheters: reappraising the evidence. Am J Med. 2012;125(8):733–741.
13. A randomised, controlled trial comparing the long‐term effects of peripherally inserted central catheter placement in chemotherapy patients using B‐mode ultrasound with modified Seldinger technique versus blind puncture. Eur J Oncol Nurs. 2014;18(1):94–103.
14. A retrospective study on the long‐term placement of peripherally inserted central catheters and the importance of nursing care and education. Cancer Nurs. 2011;34(1):E25–E30.
15. The risk of bloodstream infection associated with peripherally inserted central catheters compared with central venous catheters in adults: a systematic review and meta‐analysis. Infect Control Hosp Epidemiol. 2013;34(9):908–918.
16. Risk of venous thromboembolism associated with peripherally inserted central catheters: a systematic review and meta‐analysis. Lancet. 2013;382(9889):311–325.
17. Risk factors for catheter‐related thrombosis (CRT) in cancer patients: a patient‐level data (IPD) meta‐analysis of clinical trials and prospective studies. J Thromb Haemost. 2011;9(2):312–319.
18. Upper extremity deep vein thrombosis: a community‐based perspective. Am J Med. 2007;120(8):678–684.
19. Risk of symptomatic DVT associated with peripherally inserted central catheters. Chest. 2010;138(4):803–810.
20. Reduction of peripherally inserted central catheter associated deep venous thrombosis. Chest. 2013;143(3):627–633.
21. Risk factors associated with peripherally inserted central venous catheter‐related large vein thrombosis in neurological intensive care patients. Intensive Care Med. 2012;38(2):272–278.
22. Upper extremity venous thrombosis in patients with cancer with peripherally inserted central venous catheters: a retrospective analysis of risk factors. J Oncol Pract. 2013;9(1):e8–e12.
23. 2008 Standards, Options and Recommendations (SOR) guidelines for the prevention and treatment of thrombosis associated with central venous catheters in patients with cancer: report from the working group. Ann Oncol. 2009;20(9):1459–1471.
24. The association between PICC use and venous thromboembolism in upper and lower extremities. Am J Med. 2015;128(9):986–993.e1.
25. VTE incidence and risk factors in patients with severe sepsis and septic shock. Chest. 2015;148(5):1224–1230.
26. Infusion Nurses Society. Infusion nursing standards of practice. J Infus Nurs. 2011;34(1S).
27. Healthcare Infection Control Practices Advisory Committee (HICPAC). Summary of recommendations: guidelines for the prevention of intravascular catheter‐related infections. Clin Infect Dis. 2011;52:1087–1099.
28. Catheter‐associated bloodstream infection incidence and risk factors in adults with cancer: a prospective cohort study. J Hosp Infect. 2011;78(1):26–30.
29. Risk of catheter‐related bloodstream infection with peripherally inserted central venous catheters used in hospitalized patients. Chest. 2005;128(2):489–495.
30. Guidelines for the prevention of intravascular catheter‐related infections. Clin Infect Dis. 2011;52(9):e162–e193.
31. Temporary central venous catheter utilization patterns in a large tertiary care center: tracking the "idle central venous catheter". Infect Control Hosp Epidemiol. 2012;33(1):50–57.
32. Peripherally inserted central catheters: use at a tertiary care pediatric center. J Vasc Interv Radiol. 2013;24(9):1323–1331.
33. Do clinicians know which of their patients have central venous catheters?: a multicenter observational study. Ann Intern Med. 2014;161(8):562–567.
34. Choosing Wisely. American Society of Nephrology. Don't place peripherally inserted central catheters (PICC) in stage III‐V CKD patients without consulting nephrology. Available at: http://www.choosingwisely.org/clinician‐lists/american‐society‐nephrology‐peripherally‐inserted‐central‐catheters‐in‐stage‐iii‐iv‐ckd‐patients. Accessed November 3, 2015.
35. Society of General Internal Medicine. Don't place, or leave in place, peripherally inserted central catheters for patient or provider convenience. Available at: http://www.choosingwisely.org/clinician‐lists/society‐general‐internal‐medicine‐peripherally‐inserted‐central‐catheters‐for‐patient‐provider‐convenience. Accessed November 3, 2015.
36. The RAND/UCLA Appropriateness Method User's Manual. Santa Monica, CA: RAND; 2001. Available at: http://www.rand.org/pubs/monograph_reports/MR1269.html.
37. National Kidney Foundation/Kidney Disease Outcomes Quality Initiative. KDOQI 2012 clinical practice guidelines for chronic kidney disease. Kidney Int Suppl. 2013;3:1–150.
38. Practice guidelines for central venous access: a report by the American Society of Anesthesiologists Task Force on Central Venous Access. Anesthesiology. 2012;116(3):539–573.
39. Improved care and reduced costs for patients requiring peripherally inserted central catheters: the role of bedside ultrasound and a dedicated team. JPEN J Parenter Enteral Nutr. 2005;29(5):374–379.
40. Analysis of tip malposition and correction in peripherally inserted central catheters placed at bedside by a dedicated nursing team. J Vasc Interv Radiol. 2007;18(4):513–518.
41. Food and Drug Administration Task Force. Precautions necessary with central venous catheters. FDA Drug Bull. 1989:15–16.
42. Insertion of PICCs with minimum number of lumens reduces complications and costs. J Am Coll Radiol. 2013;10(11):864–868.
43. Flushing the central venous catheter: is heparin necessary? J Vasc Access. 2014;15(4):241–248.
44. Heparin versus 0.9% sodium chloride intermittent flushing for prevention of occlusion in central venous catheters in adults. Cochrane Database Syst Rev. 2014;10:CD008462.
Appropriateness of PICCs in Patients With Critical Illness
Among critically ill patients, PIVs and midline catheters were rated as appropriate for infusion of 5 days, and 6 to 14 days, respectively, whereas PICCs were considered appropriate only when use 15 days was anticipated. Although both CVCs and PICCs were rated as appropriate among hemodynamically unstable patients in scenarios where invasive cardiovascular monitoring is necessary for durations of 14 days and 15 days, respectively, CVCs were favored over PICCs among patients who are hemodynamically unstable or requiring vasopressors.
Appropriateness of PICC Use In Special Populations
The existence of patients who require lifelong, often intermittent, intravenous access (eg, sickle cell anemia, short‐gut syndrome, cystic fibrosis) necessitates distinct recommendations for venous access. In this population, recommendations were categorized based on frequency of hospitalization. In patients that were hospitalized infrequently (<5 hospitalizations per year), use of midlines was preferred to PICCs when the hospitalization was expected to last 5 days; PICCs were rated as appropriate for a duration of use 15 days. However, in patients who require frequent hospitalization (6 hospitalizations annually), tunneled‐cuffed catheters were rated as appropriate and preferred over PICCs when the expected duration of use was 15 days per session.
For long‐term residents in skilled nursing facilities, PICCs were rated as appropriate for an expected duration of use 15 days, but uncertain for a duration of 6 to 14 days (when midlines were rated as appropriate). For venous access of 5 days, PIVs were rated as most appropriate.
How, When, by Whom, and Which PICCs Should Be Inserted
Societal recommendations[26] and guidelines[38] for routine placement and positioning of PICCs by dedicated nursing services exist.[39, 40] Panelists favored consultation with the specialists ordering vascular access devices (eg, infectious disease, nephrology, hematology, oncology) within the first few days of admission for optimal device selection and timing of insertion. For example, PICCs were rated as appropriate to be placed within 2 to 3 days of hospital admission for patients requiring longterm antimicrobial infusion (in the absence of bacteremia). Preferential PICC placement by interventional radiology was rated as appropriate if portable ultrasound did not identify a suitable target vein, the catheter fails to advance over the guidewire during a bedside attempt, or the patient requires sedation not appropriate for bedside placement. Interventional radiology insertion was also preferred in patients with bilateral mastectomy, altered chest anatomy, and for patients with permanent pacemakers or defibrillators if the contralateral arm is was not amenable for insertion. PICCs are generally placed at the bedside (with radiographic confirmation of catheter position, or with electrocardiography guidance when proficiency with this technique exists) or under direct visualization in the interventional radiology suite. As recommended elsewhere,[21, 26, 41] panelists rated the placement of the PICC catheter tip in the lower one‐third of the superior vena cava, at the cavoatrial junction, or in the right atrium as being appropriate. Nuanced recommendations surrounding PICC adjustment under varying circumstances can be found in the parent document.[1] Single‐lumen devices, which are associated with fewer complications, were rated as the appropriate default lumen of choice in the absence of a documented rationale for a multilumen PICC as a mechanism to decrease possible complications.[19, 20, 42] The insertion of multilumen PICCs for separating blood draws from infusions or ensuring a backup lumen is available was rated as inappropriate. Consistent with recent recommendations,[43, 44] normal saline rather than heparin was rated as appropriate to maintain catheter patency. The advancement of a migrated PICC was rated as inappropriate under all circumstances.
CONCLUSIONS
In‐hospital healthcare providers are routinely confronted with dilemmas surrounding choice of VAD. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC) initiative is a multidisciplinary effort to clarify decision‐making related to VAD use. The systematic literature review and RAND/UCLA appropriateness method applied by the MAGIC panelists identifies areas of broad consensus surrounding the use of PICCs in relation to other VADs, and highlights uncertainties regarding the best practice to guide clinical care. Appropriateness statements facilitate standardization for the use, care, and discontinuation of VADs. These recommendations may be important to healthcare quality officers and payers as they allow for measurement of, and adherence to, standardized practice. In an era of electronic medical records and embedded clinical decision support, these recommendations may facilitate a just‐in‐time resource for optimal VAD management, outcomes measurement, and patient follow‐up. In addition to directing clinical care, these recommendations may serve as a lattice for the formation of future randomized clinical trials to further clarify important areas of the uncertainty surrounding VAD use.
Disclosures: Drs. Woller and Stevens disclose financial support paid to their institution of employment (Intermountain Medical Center) for conducting clinical research (with no financial support paid to either investigator). Dr. Woller discloses serving as an expert panelist for the Michigan Appropriateness Guide for Intravenous Catheters (MAGIC) initiative. The authors report no other conflicts of interest.
Vascular access devices (VADs), including peripherally inserted central venous catheters (PICCs) and traditional central venous catheters (CVCs), remain a cornerstone for the delivery of necessary therapy. VADs are used routinely to treat inpatients and, increasingly, outpatients. PICCs possess characteristics that are often favorable in a variety of clinical settings when compared to traditional CVCs. However, a paucity of evidence regarding the indication, selection, application, duration, and risks associated with these devices exists. PICCs are often used in situations when peripheral venous catheters (PIVs, including ultrasound‐guided peripheral intravenous catheters and midline catheters [midlines]) would meet patient needs and confer a lower risk of complications. An unmet need to define indications and promote utilization that conforms to optimal use currently exists. The purpose of this article is to highlight for hospitalists the methodology and key recommendations recently published[1] regarding the appropriateness of PICCs relative to other vascular access devices.
BACKGROUND
Greater utilization of PICCs to meet a variety of clinical needs has recently emerged in hospital‐based medicine.[2, 3] This trend likely reflects the favorable characteristics of PICCs compared with traditional CVCs: safety of insertion in the arm, compatibility with inpatient and outpatient therapies, ease of protocolized insertion by vascular access nursing services, patient tolerability, and cost savings.[4, 5, 6, 7, 8] Yet PICCs have limitations, and complications, including malpositioning, dislodgement, and luminal occlusion,[9, 10, 11] affect patient safety and outcomes. Most notably, PICCs are strongly associated with risk for thrombosis and infection, complications that are most frequent in hospitalized and critically ill patients.[12, 13, 14, 15, 16]
Vascular access devices and particularly PICCs pose a substantial risk for thrombosis.[16, 17, 18, 19, 20] PICCs represent the greatest risk factor for upper extremity deep vein thrombosis (DVT), and in one study, PICC‐associated DVT risk was double that with traditional CVCs.[17] Risk factors for the development of PICC‐associated DVT include ipsilateral paresis,[21] infection,[22] PICC diameter,[19, 20] and prolonged surgery (procedure duration >1 hour) with a PICC in place.[23] Recently, PICCs placed in the upper extremity have been described as a possible risk factor for lower extremity venous thrombosis as well.[24, 25]
Infection complicating CVCs is well described,[12, 15] and guidelines for the prevention of catheter‐associated bloodstream infections exist.[26, 27] However, the magnitude of the infection risk associated with PICCs compared with traditional CVCs remains uncertain. Some reports suggest a decreased risk of infection with the utilization of PICCs[28]; others suggest a similar risk.[29] Existing guidelines, however, do not recommend substituting PICCs for CVCs as a technique to reduce infection, especially in general medical patients.[30]
Given the heterogeneity of patients and clinical situations in which PICCs are used, it is not surprising that variability in clinical use and inappropriate utilization of PICCs have been described.[31, 32] Simple awareness of the medical devices in place is central to optimizing care; importantly for hospitalists, a recent study found that 1 in 5 physicians was unaware that a CVC was present in their patient.[33] Indeed, the Choosing Wisely initiative has placed national emphasis on optimizing the use of PICCs.[34, 35]
A panel of experts was convened at the University of Michigan to further clarify the appropriate use of VADs. Panelists applied the RAND Corporation/University of California Los Angeles (RAND/UCLA) Appropriateness Methodology[36] to provide guidance regarding VAD use. The RAND/UCLA methodology is a validated approach to assessing the appropriateness of medical and surgical resource utilization; its details are published elsewhere.[1] In brief, each panelist was provided a series of clinical scenarios involving the use of central venous catheters, purposefully including areas of consensus, controversy, and ambiguity. Scenarios were rated on a standardized 1 to 9 scale, with median ratings at opposite ends of the scale indicating preference of one device over another (for example, a median of 7 to 9 reflected appropriate and a median of 1 to 3 reflected inappropriate). Consensus results were classified into three levels of appropriateness: appropriate when the panel median was between 7 and 9 without disagreement, uncertain/neutral when the panel median was between 4 and 6 or when disagreement existed regardless of the median, and inappropriate when the panel median was between 1 and 3 without disagreement.
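The classification rule just described can be expressed compactly in code. The sketch below is our own illustration (the function name and inputs are hypothetical and are not part of the RAND/UCLA materials):

```python
def classify_appropriateness(panel_median: float, disagreement: bool) -> str:
    """Classify a RAND/UCLA panel rating into one of the three levels described
    above: medians of 7-9 without disagreement are appropriate, medians of 1-3
    without disagreement are inappropriate, and everything else (medians of 4-6,
    or any median with disagreement) is uncertain/neutral."""
    if disagreement:
        return "uncertain/neutral"
    if 7 <= panel_median <= 9:
        return "appropriate"
    if 1 <= panel_median <= 3:
        return "inappropriate"
    return "uncertain/neutral"


# Example: a panel median of 8 with no disagreement is rated "appropriate".
print(classify_appropriateness(8, disagreement=False))
```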
RESULTS
Comprehensive results regarding appropriateness ratings are reported elsewhere.[1] Results of particular relevance to hospital‐based practitioners are summarized below. Table 1 highlights common scenarios in which PICC placement is considered appropriate or inappropriate.
A. Appropriate indications for PICC use
- Delivery of peripherally compatible infusates when the proposed duration is 6 or more days*
- Delivery of nonperipherally compatible infusates (eg, irritants/vesicants), regardless of proposed duration of use
- Delivery of cyclical or episodic chemotherapy that can be administered through a peripheral vein in patients with active cancer, provided the proposed duration of such treatment is 3 or more months
- Invasive hemodynamic monitoring or necessary central venous access in a critically ill patient, provided the proposed duration is 15 or more days
- Frequent phlebotomy (every 8 hours) in a hospitalized patient, provided the proposed duration is 6 or more days
- Intermittent infusions or infrequent phlebotomy in patients with poor/difficult peripheral venous access, provided that the proposed duration is 6 or more days
- Infusions or palliative treatment during end‐of‐life care∥
- Delivery of peripherally compatible infusates for patients residing in skilled nursing facilities or transitioning from hospital to home, provided that the proposed duration is 15 or more days

B. Inappropriate indications for PICC use
- Placement for any indication other than infusion of nonperipherally compatible infusates (eg, irritants/vesicants) when the proposed duration is 5 or fewer days
- Placement in a patient with active cancer for cyclical chemotherapy that can be administered through a peripheral vein, when the proposed duration of treatment is 3 or fewer months and peripheral veins are available
- Placement in a patient with stage 3b or greater chronic kidney disease (estimated glomerular filtration rate <44 mL/min) or in patients currently receiving renal replacement therapy via any modality
- Insertion for infrequent phlebotomy if the proposed duration is 5 or fewer days
- Patient or family request, in a patient who is not actively dying/on hospice, for comfort from daily lab draws
- Medical or nursing provider request in the absence of other appropriate criteria for PICC use
Appropriateness of PICCs in General Hospitalized Medical Patients
The appropriateness of PICCs compared to other VADs among hospitalized medical patients can be broadly characterized by the planned infusate and the anticipated duration of use. PICCs were the preferred VAD when the anticipated duration of infusion was 15 or more days, or for any duration if the infusate was an irritant/vesicant (such as parenteral nutrition or chemotherapy). PICCs were considered appropriate if the proposed duration of use was 6 to 14 days, though a preference for a midline or an ultrasound‐guided PIV was noted for this time frame. Tunneled catheters were considered appropriate only for the infusion of an irritant/vesicant when the anticipated duration was 15 or more days; similarly, implanted ports were rated as appropriate when an irritant/vesicant infusion was planned for 31 or more days. Both tunneled catheters and ports were rated as appropriate when episodic infusion over several months was necessary. Disagreement existed between the panelists regarding the appropriateness of PICC placement for frequent blood draws (3 or more phlebotomies per day) and for patients with difficult venous access when phlebotomy would be needed for 5 or fewer days; in these cases, an individualized, patient‐centered approach was recommended. PICC placement was considered appropriate in these situations if venous access was required for 6 or more days, but ultrasound‐guided PIVs and midlines were again preferred to PICCs when the expected duration of use was 14 days or fewer.
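To make the duration and infusate logic concrete, the following sketch summarizes the preferences described in this paragraph and in Table 1 for general hospitalized medical patients. It is a deliberate simplification for illustration only (the function and category names are ours) and omits the many qualifiers in the full appropriateness criteria:

```python
def preferred_device(duration_days: int, irritant_or_vesicant: bool) -> str:
    """Illustrative summary of the panel's stated device preferences for
    general hospitalized medical patients."""
    if irritant_or_vesicant:
        # Non-peripherally compatible infusates favor a PICC at any duration.
        return "PICC"
    if duration_days <= 5:
        # PICC placement was rated inappropriate for 5 or fewer days.
        return "peripheral IV"
    if duration_days <= 14:
        # PICC appropriate, but a midline or ultrasound-guided PIV was preferred.
        return "midline or ultrasound-guided PIV (PICC acceptable)"
    return "PICC"


print(preferred_device(10, irritant_or_vesicant=False))
```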
Appropriateness of PICCs in Patients With Chronic Kidney Disease
The appropriateness of PICC use among patients with chronic kidney disease (CKD) takes into consideration disease stage as defined by the Kidney Disease: Improving Global Outcomes workgroup.[37] Although panelist recommendations for patients with stage 1 to 3a CKD (estimated GFR of 45 mL/min or greater) did not differ from those noted above, for patients with stage 3b or greater CKD, insertion of devices into an arm vein was rated as inappropriate (valuing the preservation of peripheral and central veins for possible hemodialysis and creation of arteriovenous fistulae or grafts). Among patients with stage 3b or greater CKD, PIV access in the dorsum of the hand was recommended for an expected duration of use of 5 or fewer days. In consultation with a nephrologist, the use of a tunneled small‐bore central catheter (4 French or 5 French) inserted into the jugular vein was rated as appropriate in patients with stage 3b or greater CKD requiring venous access for a longer duration.
Appropriateness of PICC Use in Patients with Cancer
The panelists acknowledged the heterogeneity of thrombosis risk by cancer type; the recommendations assume cancer in the form of a solid tumor. Vascular access choice among cancer patients is complicated by the cyclic nature of the therapy frequently administered, the diversity of infusates (eg, nonirritant/nonvesicant versus irritant/vesicant), and uncertainty surrounding duration of therapy. To address this, the panelists chose a pragmatic approach that considered the infusate (irritant/vesicant or not) and dichotomized treatment duration (3 months or less versus longer). Among cancer patients requiring nonvesicant/nonirritant chemotherapy for 3 months or less, interval placement of PIVs was rated as appropriate, and disagreement existed among the panelists regarding the appropriateness of PICCs. If more than 3 months of chemotherapy was necessary, then PICCs or tunneled‐cuffed catheters were considered appropriate. Ports were rated as appropriate if the expected duration of use was 6 months or more. Among cancer patients requiring vesicant/irritant chemotherapy, PICCs and tunneled‐cuffed catheters were rated as appropriate for all time intervals, whereas ports were rated as neutral for 3‐ to 6‐month durations of infusion and appropriate for durations greater than 6 months. When acceptable, PICCs were favored over tunneled‐cuffed catheters among cancer patients with coagulopathy (eg, severe thrombocytopenia, elevated international normalized ratios).
Appropriateness of PICCs in Patients With Critical Illness
Among critically ill patients, PIVs and midline catheters were rated as appropriate for infusions of 5 or fewer days and of 6 to 14 days, respectively, whereas PICCs were considered appropriate only when use for 15 or more days was anticipated. Although both CVCs and PICCs were rated as appropriate among hemodynamically unstable patients requiring invasive cardiovascular monitoring (for durations of 14 or fewer days and 15 or more days, respectively), CVCs were favored over PICCs among patients who are hemodynamically unstable or requiring vasopressors.
Appropriateness of PICC Use In Special Populations
Patients who require lifelong, often intermittent, intravenous access (eg, for sickle cell anemia, short‐gut syndrome, cystic fibrosis) necessitate distinct recommendations for venous access. In this population, recommendations were categorized by frequency of hospitalization. In patients hospitalized infrequently (<5 hospitalizations per year), use of midlines was preferred to PICCs when the hospitalization was expected to last 5 or fewer days; PICCs were rated as appropriate for a duration of use of 15 or more days. However, in patients who require frequent hospitalization (6 or more hospitalizations annually), tunneled‐cuffed catheters were rated as appropriate and preferred over PICCs when the expected duration of use was 15 or more days per session.
For long‐term residents of skilled nursing facilities, PICCs were rated as appropriate for an expected duration of use of 15 or more days but uncertain for a duration of 6 to 14 days (for which midlines were rated as appropriate). For venous access of 5 or fewer days, PIVs were rated as most appropriate.
How, When, by Whom, and Which PICCs Should Be Inserted
Societal recommendations[26] and guidelines[38] exist for the routine placement and positioning of PICCs by dedicated nursing services.[39, 40] Panelists favored consultation with the specialists ordering vascular access (eg, infectious disease, nephrology, hematology, oncology) within the first few days of admission to optimize device selection and timing of insertion. For example, PICCs were rated as appropriate for placement within 2 to 3 days of hospital admission for patients requiring long‐term antimicrobial infusion (in the absence of bacteremia). Preferential PICC placement by interventional radiology was rated as appropriate if portable ultrasound does not identify a suitable target vein, the catheter fails to advance over the guidewire during a bedside attempt, or the patient requires sedation not appropriate for bedside placement. Interventional radiology insertion was also preferred in patients with bilateral mastectomy or altered chest anatomy, and in patients with permanent pacemakers or defibrillators if the contralateral arm is not amenable to insertion. PICCs are generally placed at the bedside (with radiographic confirmation of catheter position, or with electrocardiographic guidance when proficiency with this technique exists) or under direct visualization in the interventional radiology suite. As recommended elsewhere,[21, 26, 41] panelists rated placement of the PICC catheter tip in the lower one‐third of the superior vena cava, at the cavoatrial junction, or in the right atrium as appropriate. Nuanced recommendations surrounding PICC adjustment under varying circumstances can be found in the parent document.[1] Single‐lumen devices, which are associated with fewer complications, were rated as the appropriate default in the absence of a documented rationale for a multilumen PICC.[19, 20, 42] Insertion of multilumen PICCs to separate blood draws from infusions or to ensure that a backup lumen is available was rated as inappropriate. Consistent with recent recommendations,[43, 44] normal saline rather than heparin was rated as appropriate to maintain catheter patency. Advancement of a migrated PICC was rated as inappropriate under all circumstances.
CONCLUSIONS
In‐hospital healthcare providers routinely confront dilemmas surrounding choice of VAD. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC) initiative is a multidisciplinary effort to clarify decision making related to VAD use. The systematic literature review and RAND/UCLA appropriateness method applied by the MAGIC panelists identify areas of broad consensus surrounding the use of PICCs relative to other VADs and highlight uncertainties regarding best practice. Appropriateness statements facilitate standardization of the use, care, and discontinuation of VADs. These recommendations may be important to healthcare quality officers and payers because they allow measurement of, and adherence to, standardized practice. In an era of electronic medical records and embedded clinical decision support, these recommendations may provide a just‐in‐time resource for optimal VAD management, outcomes measurement, and patient follow‐up. In addition to directing clinical care, these recommendations may serve as a framework for future randomized clinical trials to further clarify important areas of uncertainty surrounding VAD use.
Disclosures: Drs. Woller and Stevens disclose financial support paid to their institution of employment (Intermountain Medical Center) for conducting clinical research (with no financial support paid to either investigator). Dr. Woller discloses serving as an expert panelist for the Michigan Appropriateness Guide for Intravenous Catheters (MAGIC) initiative. The authors report no other conflicts of interest.
- The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC): results from a multispecialty panel using the RAND/UCLA appropriateness method. Ann Intern Med. 2015;163(6 suppl):S1–S40.
- Peripherally inserted central venous catheters in the acute care setting: a safe alternative to high‐risk short‐term central venous catheters. Am J Infect Control. 2010;38(2):149–153.
- Peripherally inserted central catheters may lower the incidence of catheter‐related blood stream infections in patients in surgical intensive care units. Surg Infect (Larchmt). 2011;12(4):279–282.
- Developing an alternative workflow model for peripherally inserted central catheter placement. J Infus Nurs. 2012;35(1):34–42.
- Nurse‐led PICC insertion: is it cost effective? Br J Nurs. 2013;22(19):S9–S15.
- Facility wide benefits of radiology vascular access teams, part 2. Radiol Manage. 2010;32(3):39–43.
- Facility wide benefits of radiology vascular access teams. Radiol Manage. 2010;32(1):28–32; quiz 3–4.
- Advantages and disadvantages of peripherally inserted central venous catheters (PICC) compared to other central venous lines: a systematic review of the literature. Acta Oncol. 2013;52(5):886–892.
- The problem with peripherally inserted central catheters. JAMA. 2012;308(15):1527–1528.
- Malposition of peripherally inserted central catheter: experience from 3,012 patients with cancer. Exp Ther Med. 2013;6(4):891–893.
- Complications associated with peripheral or central routes for central venous cannulation. Anaesthesia. 2012;67(1):65–71.
- Bloodstream infection, venous thrombosis, and peripherally inserted central catheters: reappraising the evidence. Am J Med. 2012;125(8):733–741.
- A randomised, controlled trial comparing the long‐term effects of peripherally inserted central catheter placement in chemotherapy patients using B‐mode ultrasound with modified Seldinger technique versus blind puncture. Eur J Oncol Nurs. 2014;18(1):94–103.
- A retrospective study on the long‐term placement of peripherally inserted central catheters and the importance of nursing care and education. Cancer Nurs. 2011;34(1):E25–E30.
- The risk of bloodstream infection associated with peripherally inserted central catheters compared with central venous catheters in adults: a systematic review and meta‐analysis. Infect Control Hosp Epidemiol. 2013;34(9):908–918.
- Risk of venous thromboembolism associated with peripherally inserted central catheters: a systematic review and meta‐analysis. Lancet. 2013;382(9889):311–325.
- Risk factors for catheter‐related thrombosis (CRT) in cancer patients: a patient‐level data (IPD) meta‐analysis of clinical trials and prospective studies. J Thromb Haemost. 2011;9(2):312–319.
- Upper extremity deep vein thrombosis: a community‐based perspective. Am J Med. 2007;120(8):678–684.
- Risk of symptomatic DVT associated with peripherally inserted central catheters. Chest. 2010;138(4):803–810.
- Reduction of peripherally inserted central catheter associated deep venous thrombosis. Chest. 2013;143(3):627–633.
- Risk factors associated with peripherally inserted central venous catheter‐related large vein thrombosis in neurological intensive care patients. Intensive Care Med. 2012;38(2):272–278.
- Upper extremity venous thrombosis in patients with cancer with peripherally inserted central venous catheters: a retrospective analysis of risk factors. J Oncol Pract. 2013;9(1):e8–e12.
- 2008 Standards, Options and Recommendations (SOR) guidelines for the prevention and treatment of thrombosis associated with central venous catheters in patients with cancer: report from the working group. Ann Oncol. 2009;20(9):1459–1471.
- The association between PICC use and venous thromboembolism in upper and lower extremities. Am J Med. 2015;128(9):986–993.e1.
- VTE incidence and risk factors in patients with severe sepsis and septic shock. Chest. 2015;148(5):1224–1230.
- Infusion Nurses Society. Infusion nursing standards of practice. J Infus Nurs. 2011;34(1S).
- Healthcare Infection Control Practices Advisory Committee (HICPAC) (Appendix 1). Summary of recommendations: Guidelines for the Prevention of Intravascular Catheter‐related Infections. Clin Infect Dis. 2011;52:1087–1099.
- Catheter‐associated bloodstream infection incidence and risk factors in adults with cancer: a prospective cohort study. J Hosp Infect. 2011;78(1):26–30.
- Risk of catheter‐related bloodstream infection with peripherally inserted central venous catheters used in hospitalized patients. Chest. 2005;128(2):489–495.
- Guidelines for the prevention of intravascular catheter‐related infections. Clin Infect Dis. 2011;52(9):e162–e193.
- Temporary central venous catheter utilization patterns in a large tertiary care center: tracking the "idle central venous catheter". Infect Control Hosp Epidemiol. 2012;33(1):50–57.
- Peripherally inserted central catheters: use at a tertiary care pediatric center. J Vasc Interv Radiol. 2013;24(9):1323–1331.
- Do clinicians know which of their patients have central venous catheters?: a multicenter observational study. Ann Intern Med. 2014;161(8):562–567.
- Choosing Wisely. American Society of Nephrology. Don't place peripherally inserted central catheters (PICC) in stage III‐V CKD patients without consulting nephrology. Available at: http://www.choosingwisely.org/clinician‐lists/american‐society‐nephrology‐peripherally‐inserted‐central‐catheters‐in‐stage‐iii‐iv‐ckd‐patients. Accessed November 3, 2015.
- Society of General Internal Medicine. Don't place, or leave in place, peripherally inserted central catheters for patient or provider convenience. Available at: http://www.choosingwisely.org/clinician‐lists/society‐general‐internal‐medicine‐peripherally‐inserted‐central‐catheters‐for‐patient‐provider‐convenience. Accessed November 3, 2015.
- The RAND/UCLA appropriateness method user's manual. Santa Monica, CA: RAND; 2001. Available at: http://www.rand.org/pubs/monograph_reports/MR1269.html.
- National Kidney Foundation/Kidney Disease Outcomes Quality Initiative. KDOQI 2012 clinical practice guidelines for chronic kidney disease. Kidney Inter. 2013;(suppl 3):1–150.
- Practice guidelines for central venous access: a report by the American Society of Anesthesiologists Task Force on Central Venous Access. Anesthesiology. 2012;116(3):539–573.
- Improved care and reduced costs for patients requiring peripherally inserted central catheters: the role of bedside ultrasound and a dedicated team. JPEN J Parenter Enteral Nutr. 2005;29(5):374–379.
- Analysis of tip malposition and correction in peripherally inserted central catheters placed at bedside by a dedicated nursing team. J Vasc Interv Radiol. 2007;18(4):513–518.
- Food and Drug Administration Task Force. Precautions necessary with central venous catheters. FDA Drug Bull. 1989:15–16.
- Insertion of PICCs with minimum number of lumens reduces complications and costs. J Am Coll Radiol. 2013;10(11):864–868.
- Flushing the central venous catheter: is heparin necessary? J Vasc Access. 2014;15(4):241–248.
- Heparin versus 0.9% sodium chloride intermittent flushing for prevention of occlusion in central venous catheters in adults. Cochrane Database Syst Rev. 2014;10:CD008462.
Warfarin‐Associated Adverse Events
Warfarin is 1 of the most common causes of adverse drug events, with hospitalized patients at particular risk compared to outpatients.[1] Despite the availability of new oral anticoagulants (NOACs), physicians commonly prescribe warfarin to hospitalized patients,[2] likely in part because NOACs are more difficult to reverse than warfarin. Furthermore, uptake of NOACs is likely to be slow in resource‐poor countries because of the lower cost of warfarin.[3] However, warfarin's narrow therapeutic index, frequent drug‐drug interactions, and interpatient variability in metabolism make management challenging.[4] Thus, warfarin remains a significant cause of adverse events in hospitalized patients, occurring in approximately 3% to 8% of exposed patients, depending on the underlying condition.[2, 5]
An elevated international normalized ratio (INR) is a strong predictor of drug‐associated adverse events (patient harm). In a study employing 21 different electronic triggers to identify potential adverse events, an elevated INR had the highest yield for events associated with harm (96% of INRs >5.0 associated with harm).[6] Although pharmacist‐managed inpatient anticoagulation services have been shown to improve warfarin management,[7, 8] there are evidence gaps regarding the causes of warfarin‐related adverse events and practice changes that could decrease their frequency. Although overanticoagulation is a well‐known risk factor for warfarin‐related adverse events,[9, 10] there are few evidence‐based warfarin monitoring and dosing recommendations for hospitalized patients.[10] For example, the 2012 American College of Chest Physicians Antithrombotic Guidelines[11] provide a weak recommendation on initial dosing of warfarin, but no recommendations on how frequently to monitor the INR, or appropriate dosing responses to INR levels. Although many hospitals employ protocols that suggest daily INR monitoring until stable, there are no evidence‐based guidelines to support this practice.[12] Conversely, there are reports of flags to order an INR level that are not activated unless greater than 2[13] or 3 days[14] pass since the prior INR. Protocols from some major academic medical centers suggest that after a therapeutic INR is reached, INR levels can be measured intermittently, as infrequently as twice a week.[15, 16]
The 2015 Joint Commission anticoagulant‐focused National Patient Safety Goal[17] (initially issued in 2008) mandates the assessment of baseline coagulation status before starting warfarin and warfarin dosing based on a current INR; however, "current" is not defined. Neither the extent to which the mandate for assessing baseline coagulation status is adhered to nor the relationship between this process of care and patient outcomes is known. The importance of adverse drug events associated with anticoagulants, including warfarin, was also recently highlighted in the 2014 federal National Action Plan for Adverse Drug Event Prevention, in which the prevention of adverse drug events associated with anticoagulants was 1 of the 3 areas selected for special national attention and action.[18]
The Medicare Patient Safety Monitoring System (MPSMS) is a national chart abstraction‐based system that includes 21 in‐hospital adverse event measures, including warfarin‐associated adverse drug events.[2] Because of the importance of warfarin‐associated bleeding in hospitalized patients, we analyzed MPSMS data to determine what factors related to INR monitoring practices place patients at risk for these events. We were particularly interested in determining if we could detect potentially modifiable predictors of overanticoagulation and warfarin‐associated adverse events.
METHODS
Study Sample
We combined 2009 to 2013 MPSMS all payer data from the Centers for Medicare & Medicaid Services Hospital Inpatient Quality Reporting program for 4 common medical conditions: (1) acute myocardial infarction, (2) heart failure, (3) pneumonia, and (4) major surgery (as defined by the national Surgical Care Improvement Project).[19] To increase the sample size for cardiac patients, we combined myocardial infarction patients and heart failure patients into 1 group: acute cardiovascular disease. Patients under 18 years of age are excluded from the MPSMS sample, and we excluded patients whose INR never exceeded 1.5 after the initiation of warfarin therapy.
Patient Characteristics
Patient characteristics included demographics (age, sex, race [white, black, and other race]) and comorbidities. Comorbidities abstracted from medical records included: histories at the time of hospital admission of heart failure, obesity, coronary artery disease, renal disease, cerebrovascular disease, chronic obstructive pulmonary disease, cancer, diabetes, and smoking. The use of anticoagulants other than warfarin was also captured.
INRs
The INR measurement period for each patient started on the initial date of warfarin administration and ended on the date the maximum INR occurred. If a patient had more than 1 INR value on any day, the highest INR value was selected. A day without an INR measurement was defined as a calendar day with no INR value documented within the INR measurement period, starting on the third day of warfarin and ending on the day of the maximum INR level.
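As a minimal sketch of how this count might be computed from a patient's per‐day INR record, assuming a simple date‐keyed mapping (the data structure and function name are ours, not the MPSMS abstraction tooling):

```python
from datetime import date, timedelta


def days_without_inr(inr_by_day: dict, warfarin_start: date, max_inr_day: date) -> int:
    """Count calendar days with no documented INR, starting on the third day of
    warfarin and ending on the day of the maximum INR, per the definition above.

    `inr_by_day` maps each calendar date to the highest INR recorded that day.
    """
    day = warfarin_start + timedelta(days=2)  # third day of warfarin
    missing = 0
    while day <= max_inr_day:
        if day not in inr_by_day:
            missing += 1
        day += timedelta(days=1)
    return missing


inrs = {date(2013, 5, 1): 1.1, date(2013, 5, 3): 1.8, date(2013, 5, 5): 2.6}
print(days_without_inr(inrs, warfarin_start=date(2013, 5, 1), max_inr_day=date(2013, 5, 5)))  # 1 (May 4 had no INR)
```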
Outcomes
The study was performed to assess the association between the number of days on which a patient did not have an INR measured while receiving warfarin and the occurrence of (1) an INR of 6.0 or greater[20, 21] (intermediate outcome) and (2) a warfarin‐associated adverse event. A description of the MPSMS measure of warfarin‐associated adverse events has been previously published.[2] Warfarin‐associated adverse events must have occurred within 48 hours of predefined triggers: an INR of 4.0 or greater, cessation of warfarin therapy, administration of vitamin K or fresh frozen plasma, or transfusion of packed red blood cells other than in the setting of a surgical procedure. Warfarin‐associated adverse events were divided into minor and major events for this analysis. Minor events were defined as bleeding, a drop in hematocrit of 3 or more points (occurring more than 48 hours after admission and not associated with surgery), or development of a hematoma. Major events were death, intracranial bleeding, or cardiac arrest. A patient who had both a major and a minor event was considered as having had a major event.
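The event hierarchy described above can be sketched as a simple classification; the event labels below are illustrative stand‐ins for the chart‐abstracted findings, not the actual MPSMS fields:

```python
def classify_warfarin_event(events: set) -> str:
    """Classify a warfarin-associated adverse event per the definitions above.
    A patient with both a major and a minor event is counted as major."""
    major = {"death", "intracranial bleeding", "cardiac arrest"}
    minor = {"bleeding", "hematocrit drop of 3 or more points", "hematoma"}
    if events & major:
        return "major"
    if events & minor:
        return "minor"
    return "none"


print(classify_warfarin_event({"bleeding", "cardiac arrest"}))  # "major"
```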
To assess the relationship between a rapidly rising INR and a subsequent INR of 5.0 or greater or 6.0 or greater, we determined the increase in INR between the measurement performed 2 days prior to the maximum INR and that performed 1 day prior to the maximum INR. This analysis was performed only on patients whose INR was between 2.0 and 3.5 on the day prior to the maximum INR. In doing so, we sought to determine whether the INR rise could predict the occurrence of a subsequent severely elevated INR in patients whose INR was within or near the therapeutic range.
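A minimal sketch of this calculation, assuming the two INR values are available as plain numbers (the function name and exclusion handling are ours):

```python
def one_day_inr_rise(inr_two_days_prior: float, inr_one_day_prior: float):
    """Return the one-day INR increase used in the analysis above, or None if
    the patient would be excluded (day-prior INR outside the 2.0-3.5 window)."""
    if not (2.0 <= inr_one_day_prior <= 3.5):
        return None  # excluded from this analysis
    return round(inr_one_day_prior - inr_two_days_prior, 2)


# A rise of 0.9 or more preceded most severely elevated INRs (see Results).
print(one_day_inr_rise(1.8, 2.8))  # 1.0
```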
Statistical Analysis
We conducted bivariate analyses to quantify the associations between lapses in measurement of the INR and subsequent warfarin‐associated adverse events, using the Mantel‐Haenszel χ2 test for categorical variables. We fitted a generalized linear model with a logit link function to estimate the association between days on which an INR was not measured and the occurrence of the composite adverse event measure or of an INR of 6.0 or greater, adjusting for baseline patient characteristics, the number of days on warfarin, and receipt of heparin and low‐molecular‐weight heparin (LMWH). To account for potential imbalances in baseline patient characteristics and warfarin use prior to admission, we conducted a second analysis using the stabilized inverse probability weights approach. Specifically, we weighted each patient by the patient's inverse propensity scores of having only 1 day, at least 1 day, and at least 2 days without an INR measurement while receiving warfarin.[22, 23, 24, 25] To obtain the propensity scores, we fitted 3 logistic models with all variables included in the primary models above (except receipt of LMWH, heparin, and the number of days on warfarin) as predictors, but with 3 different outcomes: 1 day without an INR measurement, 1 or more days without an INR measurement, and 2 or more days without an INR measurement. Analyses were conducted using SAS version 9.2 (SAS Institute Inc., Cary, NC). All statistical testing was 2‐sided, at a significance level of 0.05. The institutional review board at Solutions IRB (Little Rock, AR) determined that the requirement for informed consent could be waived based on the nature of the study.
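The stabilized weighting step can be illustrated with simulated data. The sketch below uses Python with statsmodels rather than the SAS used in the study, and the variable names and simulated covariates are our own; it shows the general approach, not the authors' actual analysis code:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 2))                      # stand-ins for baseline covariates
true_ps = 1 / (1 + np.exp(-0.3 * x[:, 0]))       # true propensity (unknown in practice)
exposed = rng.binomial(1, true_ps)               # e.g., >=1 day without an INR measurement
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.8 * exposed + 0.2 * x[:, 1]))))

# 1. Propensity model: probability of the exposure given baseline covariates.
ps = sm.Logit(exposed, sm.add_constant(x)).fit(disp=False).predict(sm.add_constant(x))

# 2. Stabilized inverse probability weights: marginal P(exposure) / P(exposure | X).
p_marginal = exposed.mean()
weights = np.where(exposed == 1, p_marginal / ps, (1 - p_marginal) / (1 - ps))

# 3. Weighted logit-link GLM of the outcome on the exposure. A full analysis
#    would also use robust (sandwich) standard errors for the weighted fit.
ipw_model = sm.GLM(outcome, sm.add_constant(exposed.astype(float)),
                   family=sm.families.Binomial(), freq_weights=weights)
print(ipw_model.fit().params)
```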
RESULTS
There were 130,828 patients included in the 2009 to 2013 MPSMS sample, of whom 19,445 (14.9%) received warfarin during their hospital stay and had at least 1 INR measurement. Among these patients, 5228 (26.9%) had no INR level above 1.5 and were excluded from further analysis, leaving 14,217 included patients. Of these patients, 1055 (7.4%) developed a warfarin‐associated adverse event. Table 1 demonstrates the baseline demographics and comorbidities of the included patients.
Characteristics | Acute Cardiovascular Disease, No. (%), N = 6,394 | Pneumonia, No. (%), N = 3,668 | Major Surgery, No. (%), N = 4,155 | All, No. (%), N = 14,217 |
---|---|---|---|---|
Age, mean [SD] | 75.3 [12.4] | 74.5 [13.3] | 69.4 [11.8] | 73.4 [12.7] |
Sex, female | 3,175 (49.7) | 1,741 (47.5) | 2,639 (63.5) | 7,555 (53.1) |
Race | ||||
White | 5,388 (84.3) | 3,268 (89.1) | 3,760 (90.5) | 12,416 (87.3) |
Other | 1,006 (15.7) | 400 (10.9) | 395 (9.5) | 1,801 (12.7) |
Comorbidities | ||||
Cancer | 1,186 (18.6) | 939 (25.6) | 708 (17.0) | 2,833 (19.9) |
Diabetes | 3,043 (47.6) | 1,536 (41.9) | 1,080 (26.0) | 5,659 (39.8) |
Obesity | 1,938 (30.3) | 896 (24.4) | 1,260 (30.3) | 4,094 (28.8) |
Cerebrovascular disease | 1,664 (26.0) | 910 (24.8) | 498 (12.0) | 3,072 (21.6) |
Heart failure/pulmonary edema | 5,882 (92.0) | 2,052 (55.9) | 607 (14.6) | 8,541 (60.1) |
Chronic obstructive pulmonary disease | 2,636 (41.2) | 1,929 (52.6) | 672 (16.2) | 5,237 (36.8) |
Smoking | 895 (14.0) | 662 (18.1) | 623 (15.0) | 2,180 (15.3) |
Corticosteroids | 490 (7.7) | 568 (15.5) | 147 (3.5) | 1,205 (8.5) |
Coronary artery disease | 4,628 (72.4) | 1,875 (51.1) | 1,228 (29.6) | 7,731 (54.4) |
Renal disease | 3,000 (46.9) | 1,320 (36.0) | 565 (13.6) | 4,885 (34.4) |
Warfarin prior to arrival | 5,074 (79.4) | 3,020 (82.3) | 898 (21.6) | 8,992 (63.3) |
Heparin given during hospitalization | 850 (13.3) | 282 (7.7) | 314 (7.6) | 1,446 (10.7) |
LMWH given during hospitalization | 1,591 (24.9) | 1,070 (29.2) | 1,431 (34.4) | 4,092 (28.8) |
Warfarin was started on hospital day 1 for 6825 (48.0%) of 14,217 patients. Among these patients, 6539 (95.8%) had an INR measured within 1 calendar day. We were unable to determine how many patients who started warfarin later in their hospital stay had a baseline INR, as we did not capture INRs performed prior to the day that warfarin was initiated.
Supporting Table 1 in the online version of this article demonstrates the association between an INR of 6.0 or greater and the occurrence of warfarin‐associated adverse events. A maximum INR of 6.0 or greater occurred in 469 (3.3%) of the patients included in the study; among those patients, 133 (28.4%) experienced a warfarin‐associated adverse event, compared to 922 (6.7%) adverse events among the 13,748 patients who did not develop an INR of 6.0 or greater (P < 0.001).
Among 8529 patients who received warfarin for at least 3 days, 1549 (18.2%) did not have an INR measured at least once on each day of warfarin receipt, beginning on the third day of warfarin. Table 2 demonstrates that patients who had 2 or more days on which the INR was not measured had higher rates of an INR of 6.0 or greater than patients for whom the INR was measured daily. A similar association was seen for warfarin‐associated adverse events (Table 2).
 | No. of Patients, No. (%), N = 8,529 | Patients With INR on All Days, No. (%), N = 6,980 | Patients With 1 Day Without an INR, No. (%), N = 968 | Patients With 2 or More Days Without an INR, No. (%), N = 581 | P Value |
---|---|---|---|---|---|
Maximum INR | | | | | <0.01* |
1.51–5.99 | 8,183 | 6,748 (96.7) | 911 (94.1) | 524 (90.2) | |
≥6.0 | 346 | 232 (3.3) | 57 (5.9) | 57 (9.8) | |
Warfarin‐associated adverse events | | | | | <0.01* |
No adverse events | 7,689 (90.2) | 6,331 (90.7) | 872 (90.1) | 486 (83.6) | |
Minor adverse events | 792 (9.3) | 617 (8.8) | 86 (8.9) | 89 (15.3) | |
Major adverse events | 48 (0.6) | 32 (0.5) | 10 (1.0) | 6 (1.0) | |
Figure 1A demonstrates the association between the number of days without an INR measurement and the subsequent development of an INR of 6.0 or greater or a warfarin‐associated adverse event, adjusted for baseline patient characteristics, receipt of heparin and LMWH, and number of days on warfarin. Patients with 1 or more days without an INR measurement had higher risk‐adjusted odds ratios (ORs) of a subsequent INR of 6.0 or greater, although the difference was not statistically significant for surgical patients. The results of the analysis based on inverse propensity scoring are shown in Figure 1B. Cardiac and surgical patients with 2 or more days without an INR measurement were at higher risk of a warfarin‐associated adverse event, whereas cardiac and pneumonia patients with 1 or more days without an INR measurement were at higher risk of developing an INR of 6.0 or greater.

Supporting Table 2 in the online version of this article demonstrates the relationship between patient characteristics and the occurrence of an INR of 6.0 or greater or a warfarin‐related adverse event. The only characteristic associated with either outcome across all 3 patient conditions was renal disease, which was positively associated with a warfarin‐associated adverse event. Warfarin use prior to arrival was associated with lower risks of both an INR of 6.0 or greater and a warfarin‐associated adverse event, except among surgical patients. Supporting Table 3 in the online version of this article demonstrates the differences in patient characteristics between patients who had daily INR measurement and those who had at least 1 day without an INR measurement.
Figure 2 illustrates the relationship of the maximum INR to the prior 1‐day change in INR in 4963 patients whose INR on the day prior to the maximum INR was 2.0 to 3.5. When the 1‐day increase in INR was <0.9, the risk of the next day's INR being 6.0 or greater was 0.7%; when the increase was 0.9 or more, the risk was 5.2%. The risk of developing an INR of 5.0 or greater was 1.9% if the preceding day's INR increase was <0.9 and 15.3% if the prior day's rise was 0.9 or more. Overall, 51% of INRs of 6.0 or greater and 55% of INRs of 5.0 or greater were immediately preceded by an INR increase of 0.9 or more. The positive likelihood ratio (LR) for a rise in INR of 0.9 or more predicting an INR of 6.0 or greater was 4.2, and the positive LR for predicting an INR of 5.0 or greater was 4.9.
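Readers who wish to reproduce the likelihood‐ratio arithmetic can work from the reported figures. In the sketch below, the specificity is implied by the reported sensitivity and positive LR rather than reported directly:

```python
# Assumes the reported sensitivity (51% of INRs >= 6.0 were preceded by a rise
# of >= 0.9) and the reported positive likelihood ratio of 4.2.
sensitivity = 0.51
positive_lr = 4.2

# LR+ = sensitivity / (1 - specificity), so the false-positive rate is implied.
false_positive_rate = sensitivity / positive_lr
implied_specificity = 1 - false_positive_rate
print(f"implied specificity ~= {implied_specificity:.2f}")  # about 0.88
```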

There was no decline in the frequency of warfarin use among the patients in the MPSMS sample during the study period (16.7% in 2009 and 17.3% in 2013).
DISCUSSION
We studied warfarin‐associated adverse events in a nationally representative sample of patients who received warfarin while in an acute care hospital for a primary diagnosis of cardiac disease, pneumonia, or major surgery. Our analysis yielded several findings. First, warfarin is still commonly prescribed to hospitalized patients and remains a frequent cause of adverse events: 7.4% of the 2009 to 2013 MPSMS population who received warfarin and had at least 1 INR >1.5 developed a warfarin‐associated adverse event.
Over 95% of patients who received warfarin on the day of hospital admission had an INR performed within 1 day. This is similar to the results from a 2006 single‐center study in which 95% of patients had an INR measured prior to their first dose of warfarin.[10] Since 2008, The Joint Commission's National Patient Safety Goal has required the assessment of coagulation status before starting warfarin.[17] The high level of adherence to this standard suggests that further attention to this process of care is unlikely to significantly improve patient safety.
We also found that the lack of daily INR measurements was associated with an increased risk of an INR ≥6.0 and warfarin‐associated adverse events in some patient populations. There is limited evidence addressing the appropriate frequency of INR measurement in hospitalized patients receiving warfarin. The Joint Commission National Patient Safety Goal requires use of a current INR to adjust this therapy, but provides no specifics.[17] Although some experts believe that INRs should be monitored daily in hospitalized patients, this does not appear to be uniformly accepted. In some reports, a reminder was activated only after 2[13] or 3[14] consecutive days had passed without an INR. Protocols from some major teaching hospitals specify intermittent monitoring once the INR is therapeutic.[15, 16] Because our results suggest that lapses in INR measurement lead to overanticoagulation and warfarin‐related adverse events, it may be appropriate to measure INRs daily in most hospitalized patients receiving warfarin. This would be consistent with the many known causes of INR instability in patients admitted to the hospital, including drug‐drug interactions, hepatic dysfunction, and changes in volume of distribution, such that truly stable hospitalized patients are likely rare. Indeed, hospital admission is a well‐known predictor of instability of warfarin effect.[9] Although our results suggest that daily INR measurement is associated with a lower rate of overanticoagulation, future studies might better define lower‐risk patients for whom daily INR measurement would not be necessary.
A prior INR increase of ≥0.9 in 1 day was associated with an increased risk of subsequent overanticoagulation. Although a rapidly rising INR is known to predict overanticoagulation,[10, 14] we could find no evidence as to what specific rate of rise confers this risk. Our results suggest that use of a warfarin dosing protocol that considers both the absolute value of the INR and its rate of rise could reduce warfarin‐related adverse events.
There are important limitations to our study. We did not abstract warfarin dosages, which precluded study of the appropriateness of both initial warfarin dosing and adjustment of the warfarin dose based on INR results. MPSMS does not reliably capture antiplatelet agents or other agents, such as antibiotics, that have drug‐drug interactions with warfarin, so this factor could theoretically have confounded our results. Antibiotic use seems unlikely to be a major confounder, however, because patients with acute cardiovascular disease demonstrated a relationship between INR measurement and an INR ≥6.0 similar to that seen in pneumonia and surgical patients, despite the latter patients likely having greater antibiotic exposure. Furthermore, MPSMS does not capture indices of severity of illness, so other unmeasured confounders could have influenced our results. Although we have data for patients admitted to the hospital for only 4 conditions, these conditions represent approximately 22% of hospital admissions in the United States.[2] Strengths of our study include the nationally representative, randomly selected cases and the use of data obtained from chart abstraction rather than administrative data. Through the use of centralized data abstraction, we also avoided the potential bias introduced when hospitals self‐report adverse events.
In summary, in a national sample of patients admitted to the hospital for 4 common conditions, warfarin‐associated adverse events were detected in 7.4% of patients who received warfarin. Lack of daily INR measurement was associated with an increased risk of overanticoagulation and warfarin‐associated adverse events in certain patient populations. A 1‐day increase in the INR of ≥0.9 predicted subsequent overanticoagulation. These results provide actionable opportunities to improve safety in some hospitalized patients receiving warfarin.
Acknowledgements
The authors express their appreciation to Dan Budnitz, MD, MPH, for his advice regarding study design and his review and comments on a draft of this manuscript.
Disclosures: This work was supported by contract HHSA290201200003C from the Agency for Healthcare Research and Quality, United States Department of Health and Human Services, Rockville, Maryland. Qualidigm was the contractor. The authors assume full responsibility for the accuracy and completeness of the ideas. Dr. Metersky has worked on various quality improvement and patient safety projects with Qualidigm, Centers for Medicare & Medicaid Services, and the Agency for Healthcare Research and Quality. His employer has received remuneration for this work. Dr. Krumholz works under contract with the Centers for Medicare & Medicaid Services to develop and maintain performance measures. Dr. Krumholz is the chair of a cardiac scientific advisory board for UnitedHealth and the recipient of a research grant from Medtronic, Inc. through Yale University. The other authors report no conflicts of interest.
1. Delivery of optimized inpatient anticoagulation therapy: consensus statement from the anticoagulation forum. Ann Pharmacother. 2013;47:714–724.
2. National trends in patient safety for four common conditions, 2005–2011. N Engl J Med. 2014;370:341–351.
3. Update on antithrombotic therapy: new anticoagulants. Circulation. 2010;121:1523–1532.
4. The pharmacogenetics of coumarin therapy. Pharmacogenomics. 2005;6:503–513.
5. Adverse drug events among hospitalized Medicare patients: epidemiology and national estimates from a new approach to surveillance. Jt Comm J Qual Patient Saf. 2010;36:12–21.
6. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care. 2006;15:184–190.
7. Inpatient warfarin management: pharmacist management using a detailed dosing protocol. J Thromb Thrombolysis. 2012;33:178–184.
8. Efficacy and safety of a pharmacist‐managed inpatient anticoagulation service for warfarin initiation and titration. J Clin Pharm Ther. 2011;36:585–591.
9. Bleeding complications of oral anticoagulant treatment: an inception‐cohort, prospective collaborative study (ISCOAT). Italian Study on Complications of Oral Anticoagulant Therapy. Lancet. 1996;348:423–428.
10. Oral anticoagulation in the hospital: analysis of patients at risk. J Thromb Thrombolysis. 2011;31:22–26.
11. Evidence‐based management of anticoagulant therapy: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence‐Based Clinical Practice Guidelines. Chest. 2012;141:e152S–e184S.
12. Agency for Healthcare Research and Quality. National Guideline Clearinghouse. Available at: http://www.guideline.gov. Accessed April 30, 2015.
13. Reduction in anticoagulation‐related adverse drug events using a trigger‐based methodology. Jt Comm J Qual Patient Saf. 2005;31:313–318.
14. Use of specific indicators to detect warfarin‐related adverse events. Am J Health Syst Pharm. 2005;62:1683–1688.
15. University of Wisconsin Health. Warfarin management – adult – inpatient clinical practice guideline. Available at: http://www.uwhealth.org/files/uwhealth/docs/pdf3/Inpatient_Warfarin_Guideline.pdf. Accessed April 30, 2015.
16. Anticoagulation guidelines – LSU Health Shreveport. Available at: http://myhsc.lsuhscshreveport.edu/pharmacy/PT%20Policies/Anticoagulation_Safety.pdf. Accessed November 29, 2015.
17. The Joint Commission. National patient safety goals effective January 1, 2015. Available at: http://www.jointcommission.org/assets/1/6/2015_NPSG_HAP.pdf. Accessed November 29, 2015.
18. U.S. Department of Health and Human Services, Office of Disease Prevention and Health Promotion. Available at: http://health.gov/hcq/pdfs/ade-action-plan-508c.pdf. Accessed November 29, 2015.
19. The Joint Commission. Surgical Care Improvement Project. Available at: http://www.jointcommission.org/surgical_care_improvement_project. Accessed May 5, 2015.
20. Optimization of inpatient warfarin therapy: impact of daily consultation by a pharmacist‐managed anticoagulation service. Ann Pharmacother. 2000;34:567–572.
21. Effects of requiring a baseline International Normalized Ratio for inpatients treated with warfarin. Am J Health Syst Pharm. 2010;67:17–22.
22. Weighting regressions by propensity scores. Eval Rev. 2008;32:392–409.
23. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivar Behav Res. 2011;46:399–424.
24. Propensity score methods for bias reduction in the comparison of a treatment to a non‐randomized control group. Stat Med. 1998;17:2265–2281.
25. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41–55.
© 2015 Society of Hospital Medicine
Functional dependence linked to risk of complications after spine surgery
SAN DIEGO – Functional dependence following elective cervical spine procedures was associated with a significantly increased risk of almost all 30-day complications analyzed, including mortality, a large retrospective analysis of national data demonstrated.
The findings suggest that physicians should “include the patient’s level of functional independence, in addition to more traditional medical comorbidities, in the risk-benefit analysis of surgical decision making,” Dr. Alpesh A. Patel said in an interview in advance of the annual meeting of the Cervical Spine Research Society. “Those individuals with dependence need to be counseled appropriately about their increased risk of complications including mortality.”
Dr. Patel, professor and director of orthopedic spine surgery at Northwestern University Feinberg School of Medicine, Chicago, and his associates retrospectively reviewed the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) data files from 2006 to 2013 and limited their analysis to patients undergoing elective anterior cervical fusions, posterior cervical fusions, cervical laminectomy, cervical laminotomy, cervical discectomy, or corpectomy. They assigned each patient to one of three groups based on preoperative functional status: independent, comprising those not requiring assistance or any equipment for activities of daily living (ADLs); partially dependent, comprising those using equipment such as prosthetics or other devices and requiring some assistance from another person for ADLs; and totally dependent, comprising those requiring total assistance for all ADLs. The researchers used univariate analysis to compare patient demographics, comorbidities, and 30-day postoperative complications among the three groups, followed by multivariate logistic regression to assess the independent association of functional dependence with 30-day complications while controlling for differences in procedures and comorbidities.
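For readers curious about what an adjusted analysis of this kind looks like in practice, the following is a minimal, illustrative sketch in Python using simulated data and hypothetical column names (mortality_30d, functional_status, age, diabetes); it is not the authors' code or the NSQIP data. It fits a multivariable logistic regression and exponentiates the coefficients to obtain adjusted odds ratios like those quoted below.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated, NSQIP-style extract: one row per patient (hypothetical columns).
    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "mortality_30d": rng.integers(0, 2, n),   # 30-day outcome, 0/1
        "functional_status": rng.choice(
            ["independent", "partially_dependent", "totally_dependent"], n),
        "age": rng.integers(40, 90, n),
        "diabetes": rng.integers(0, 2, n),        # stand-in comorbidity
    })

    # Multivariable logistic regression: outcome on functional status,
    # adjusting for age and one comorbidity (a real analysis would include
    # the full set of procedure and comorbidity covariates).
    model = smf.logit(
        "mortality_30d ~ C(functional_status, Treatment(reference='independent'))"
        " + age + diabetes",
        data=df,
    ).fit(disp=False)

    # Exponentiated coefficients are the adjusted odds ratios (with 95% CIs).
    ci = np.exp(model.conf_int())
    ci.columns = ["ci_lower", "ci_upper"]
    print(pd.concat([np.exp(model.params).rename("odds_ratio"), ci], axis=1))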
Dr. Patel reported findings from 24,357 patients: 23,620 (97.0%) functionally independent, 664 (2.7%) partially dependent, and 73 (0.3%) totally dependent. Dependent patients were significantly older and had higher rates of all comorbidities (P less than .001), with the exception of obesity (P = .214). In addition, 30-day complication rates were higher for all complications (P less than .001) other than neurological (P = .060) and surgical site complications (P = .668). When the researchers controlled for type of procedure and for disparities in patient preoperative variables, multivariate analyses demonstrated that functional dependence was independently associated with sepsis (odds ratio 6.40; P less than .001), pulmonary (OR 4.13; P less than .001), venous thromboembolism (OR 4.27; P less than .001), renal (OR 3.32; P less than .001), and cardiac complications (OR 4.68; P = .001), along with mortality (OR 8.31; P less than .001).
“The very strong association between functional dependence and mortality was quite surprising,” Dr. Patel said. “It was, to the contrary, also surprising to see that, despite wide variance in medical comorbidities and functional status, surgical complications such as infection and neurological injury were similar in all groups.” He characterized the study as “the first large-scale assessment of functional status as a predictor of patient outcomes after cervical spine surgery. It fits in line with other studies utilizing large databases. Big data analysis of outcomes can be used to identify risk factors for complications including death after surgery. Identifying these factors is important if we are going to improve the care we provide. Accurately quantifying the impact of these risk factors is also critical when we risk stratify and compare hospitals and physicians.”
He acknowledged certain limitations of the study, including the fact that it is a retrospective study “with a heterogeneous population of patients, surgeons, hospitals, and procedures. This adds uncertainty to the analysis at the level of the individual patient but does provide generalizability to a broader patient population.”
Dr. Patel reported having no conflicts of interest.
AT CSRS 2015
Key clinical point: Preoperative functional status is predictive of morbidity and mortality following elective cervical spine surgery.
Major finding: Patients who were dependent from a functional standpoint were significantly older and had higher rates of all comorbidities, compared with their counterparts who were partially dependent or functionally independent (P less than .001).
Data source: A retrospective analysis of 24,357 patient files from the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP).
Disclosures: Dr. Patel reported having no conflicts of interest.
Half of hospitals penalized in 2015 by CMS quality program will pay again in 2016
More than half of hospitals penalized under the Hospital-Acquired Condition Reduction Program in fiscal year 2015 will be penalized again in FY 2016, the Centers for Medicare & Medicaid Services reported.
The program, instituted as part of the Affordable Care Act, penalizes the lowest quartile of qualifying non-Maryland hospitals with the worst risk-adjusted HAC quality measures by reducing payments related to those discharges by 1%. Maryland hospitals are currently excluded from the program because of insufficient data.
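As a rough illustration of the quartile-and-penalty arithmetic described above (using made-up hospital scores and payments, not CMS's actual scoring methodology), the short sketch below flags the worst-performing quartile and applies the 1% payment reduction:

    import pandas as pd

    # Hypothetical total HAC scores (higher = worse) and annual Medicare payments.
    hospitals = pd.DataFrame({
        "hospital_id": ["A", "B", "C", "D", "E", "F", "G", "H"],
        "total_hac_score": [3.2, 7.9, 5.1, 9.4, 2.8, 6.7, 8.8, 4.0],
        "medicare_payment": [50e6, 42e6, 61e6, 38e6, 55e6, 47e6, 40e6, 52e6],
    })

    # Hospitals in the worst-performing quartile (scores at or above the 75th
    # percentile) have payments for those discharges reduced by 1%.
    cutoff = hospitals["total_hac_score"].quantile(0.75)
    penalized = hospitals["total_hac_score"] >= cutoff
    hospitals["adjusted_payment"] = hospitals["medicare_payment"].where(
        ~penalized, hospitals["medicare_payment"] * 0.99
    )

    print(hospitals[["hospital_id", "total_hac_score", "adjusted_payment"]])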
The CMS reported that in FY 2016, 758 out of the 3,308 hospitals subject to the program will face the payment reduction, up from 724 in FY 2015. “Out of the 758 hospitals in the worst performing quartile in FY 2016, approximately 53.7 percent were also in the worst performing quartile in FY 2015,” the agency said in a fact sheet.*
In general, the average performance improved on two of the three measures in both years of the program: the mean Patient Safety Indicator 90 Composite Index Value (tracking pressure ulcer, iatrogenic pneumothorax, central venous catheter-related bloodstream infections, postoperative hip fractures, perioperative pulmonary embolism or deep vein thrombosis, postoperative sepsis, postoperative wound dehiscence, and accidental puncture or laceration) and the mean Central Line-Associated Blood Stream Infection Standardized Infection Ratio (SIR), the agency noted. The mean Catheter-Associated Urinary Tract Infection SIR increased slightly. A fourth measure, the mean Surgical Site Infection SIR, was added as a measure for fiscal 2016.
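The infection measures above are standardized infection ratios, that is, observed infections divided by the number predicted from a risk-adjusted national baseline. A minimal illustration with invented counts:

    # Standardized infection ratio (SIR) = observed infections / predicted infections,
    # where the prediction comes from a risk-adjusted national baseline model.
    observed_clabsi = 12      # hypothetical observed central-line infections
    predicted_clabsi = 15.4   # hypothetical predicted count for the same line-days
    sir = observed_clabsi / predicted_clabsi
    print(f"CLABSI SIR = {sir:.2f}")  # values below 1.0 mean fewer infections than predicted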
*CORRECTION, 1/7/2016: An earlier version of this article did not clearly state the percentage of hospitals that were also in the worst performing quartile in FY 2015.
Combination offers ‘important new option’ for CLL, team says
ORLANDO, FL—Idelalisib, the first-in-class PI3Kδ inhibitor, combined with bendamustine and rituximab (BR) for relapsed/refractory chronic lymphocytic leukemia (CLL) offers “an important new option over the standard of care,” according to Andrew Zelenetz, MD, a member of the international research team that conducted the phase 3 study of this combination.
Patients who received idelalisib plus BR experienced a much longer progression-free survival (PFS) than those who received BR alone: 23.1 months versus 11.1 months, respectively.
“And the benefit was seen across risk groups,” Dr Zelenetz said.
He pointed out that the trial was stopped early, in October, because of the “overwhelming benefit” of idelalisib compared with the conventional therapy arm.
Dr Zelenetz, of Memorial Sloan Kettering Cancer Center in New York, New York, presented the findings at the 2015 ASH Annual Meeting as LBA-5.
Idelalisib had already been approved by the US Food and Drug Administration for the treatment of relapsed/refractory CLL.
“Many people refer to this [idelalisib] as a B-cell receptor drug,” Dr Zelenetz said, “but it is more than that. It is involved in signaling of very key pathways in cell survival and migration.”
The investigators hoped that by combining idelalisib with BR, they would be able to improve PFS and maintain tolerable toxicity. So they conducted Study 115 to find out.
Study 115 design and population
Study 115 was a double-blind, placebo-controlled phase 3 study.
The idelalisib arm consisted of 207 patients randomized to receive bendamustine at 70 mg/m2 on days 1 and 2 every 4 weeks for 6 cycles, rituximab at 375 mg/m2 during cycle 1 and 500 mg/m2 during cycles 2 through 6, and idelalisib at 150 mg twice daily until progression.
The BR arm consisted of 209 patients randomized to the same BR regimen plus placebo twice daily until progression.
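For readers less familiar with body-surface-area dosing, the sketch below converts the per-square-meter doses above into absolute per-dose amounts for a hypothetical patient using the Mosteller formula; it is illustrative arithmetic only, not a dosing tool, and the patient measurements are invented.

    import math

    def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
        """Body surface area in m^2 by the Mosteller formula."""
        return math.sqrt(height_cm * weight_kg / 3600.0)

    # Hypothetical patient.
    bsa = mosteller_bsa(height_cm=172, weight_kg=80)   # about 1.95 m^2

    # Per-dose amounts implied by the regimen described above (illustration only).
    bendamustine_mg = 70 * bsa       # days 1 and 2 of each 4-week cycle
    rituximab_c1_mg = 375 * bsa      # cycle 1
    rituximab_c2_6_mg = 500 * bsa    # cycles 2 through 6
    idelalisib_daily_mg = 2 * 150    # fixed dose, 150 mg twice daily

    print(f"BSA {bsa:.2f} m^2; bendamustine {bendamustine_mg:.0f} mg/dose; "
          f"rituximab {rituximab_c1_mg:.0f} mg (cycle 1), {rituximab_c2_6_mg:.0f} mg "
          f"(cycles 2-6); idelalisib {idelalisib_daily_mg} mg/day")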
Investigators stratified patients according to 17p deletion and/or TP53 mutation, IGHV mutation status, and refractory versus relapsed disease.
The primary endpoint was PFS and the secondary endpoints were overall response rate (ORR), nodal response, overall survival (OS), and complete response (CR) rate.
Patients had to have disease progression within less than 36 months from their last therapy, measurable disease, and no history of CLL transformation. They could not have progressed in less than 6 months from their last bendamustine treatment and they could not have had any prior inhibitors of BTK, PI3Kδ, or SYK.
Patient disposition and demographics
One hundred fifteen patients (56%) in the idelalisib arm are still on study, and 52% are on treatment. In the BR arm, 63 patients (30%) are still on study, and 29% are on treatment.
Patient characteristics were well balanced between the arms. Most patients (76%) were male, 58% were younger than 65 years and 42% were 65 or older. About half were Rai stage III/IV and the median number of prior regimens was 2 (range, 1–13).
The most common prior regimens in both arms were fludarabine/cyclophosphamide/rituximab, fludarabine/cyclophosphamide, and chlorambucil. Fifteen percent of patients in the idelalisib arm and 8% in the BR arm had prior BR.
A third of patients in each arm had either 17p deletion or TP53 mutation, and two-thirds had neither. Most patients did not have IGHV mutation—84% in the idelalisib group and 83% in the BR group.
Thirty-one percent of the idelalisib-treated patients and 29% of the placebo patients had refractory disease, and 69% and 71%, respectively, had relapsed disease.
Efficacy
The difference in median PFS, as assessed by an independent review committee, “was highly statistically significant,” Dr Zelenetz said: 23.1 months with idelalisib versus 11.1 months with BR (P<0.0001).
In addition, all subgroups analyzed favored idelalisib—refractory or relapsed disease, mutation status, cytogenetics, gender, age, and race.
Patients with neither deletion 17p nor TP53 mutation had a hazard ratio of 0.22 favoring the idelalisib group, and patients with either of those abnormalities had a hazard ratio of 0.50 favoring idelalisib.
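To make the survival comparison concrete, the following is a minimal sketch of how a Kaplan-Meier median PFS and a log-rank p-value of the kind quoted above are typically computed, here with the Python lifelines library on simulated data shaped only loosely to echo the reported medians; it is not the trial's analysis.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)

    # Simulated PFS times (months); purely illustrative, not the trial's data.
    idel_pfs = rng.exponential(scale=23.1 / np.log(2), size=207)
    br_pfs = rng.exponential(scale=11.1 / np.log(2), size=209)

    # Administrative censoring at 30 months of follow-up.
    idel_event = idel_pfs < 30
    br_event = br_pfs < 30
    idel_time = np.minimum(idel_pfs, 30)
    br_time = np.minimum(br_pfs, 30)

    kmf = KaplanMeierFitter()
    kmf.fit(idel_time, event_observed=idel_event, label="idelalisib + BR")
    print("Median PFS, idelalisib + BR:", round(kmf.median_survival_time_, 1))

    kmf.fit(br_time, event_observed=br_event, label="BR + placebo")
    print("Median PFS, BR + placebo:", round(kmf.median_survival_time_, 1))

    # Log-rank test comparing the two arms.
    result = logrank_test(idel_time, br_time,
                          event_observed_A=idel_event, event_observed_B=br_event)
    print("Log-rank p-value:", result.p_value)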
ORR was 68% and 45% for idelalisib and placebo, respectively, with 5% in the idelalisib arm and none in the placebo arm achieving a CR.
Dr Zelenetz pointed out that the CR rate was low largely due to missing confirmatory biopsies.
Ninety-six percent of patients in the idelalisib arm experienced 50% or more reduction in lymph nodes, compared with 61% in the placebo arm.
Patients in the idelalisib arm also experienced a significant improvement in OS (P=0.008 stratified; P=0.023 unstratified). Median OS has not been reached in either arm.
There was no difference in survival benefit in patients with refractory disease.
Safety
All patients in the idelalisib arm and 97% in the BR arm experienced an adverse event (AE), with 93% and 76% grade 3 or higher in the idelalisib and BR arms, respectively.
Serious AEs occurred in 66% of idelalisib-treated patients and 44% of placebo patients.
Fifty-four patients (26%) in the idelalisib arm discontinued the study drug due to AEs, and 22 (11%) required a study drug dose reduction, compared with 28 patients (13%) who discontinued and 13 who required dose reductions in the placebo arm.
The most frequent AE occurring in more than 10% of patients was neutropenia. Grade 3 or higher neutropenia occurred in 60% of idelalisib patients and 46% of placebo patients.
Most AEs were higher in the idelalisib arm compared with the BR arm, including grade 3 or higher events, such as febrile neutropenia (20%, 6%), anemia (15%, 12%), thrombocytopenia (13%, 12%), pneumonia (11%, 6%), ALT increase (11%, <1%), pyrexia (7%, 3%), diarrhea (7%, 2%), and rash (3%, 0%), among others.
Serious AEs occurring in more than 2% of patients were also higher in the idelalisib arm than the BR arm, and included febrile neutropenia (18%, 5%), pneumonia (14%, 6%), pyrexia (12%, 6%), neutropenia (4%, 1%), sepsis (4%, 1%), anemia (2%, 2%), lower respiratory tract infection (2%, 2%), diarrhea (4%, <1%), and neutropenic sepsis (1%, 3%).
The remainder of the serious AEs—urinary tract infection, bronchitis, septic shock, and squamous cell carcinoma—occurred in 2% or fewer patients in either arm.
Dr Zelenetz pointed out that the safety profile is consistent with previously reported studies.
Gilead Sciences developed idelalisib and funded Study 115.
*Data in the abstract differ slightly from data presented at the meeting.
Anti-PD-1, IMiD combo immunotherapy active in heavily pretreated myeloma
ORLANDO – Partnering the PD-1 antibody pembrolizumab with pomalidomide and dexamethasone induced responses in 60% of 27 patients with heavily pretreated relapsed and/or refractory multiple myeloma in a phase II trial.
This included 1 stringent complete response, 4 very good partial responses (VGPR), and 11 partial responses (PR). Eight patients had stable disease and 3 progressed.
Further, the overall response rate was 55% (2 VGPR, 9 PR) in patients double-refractory to immunomodulatory drugs (IMiDs) and proteasome inhibitors (PIs) and 50% (1 VGPR and 5 PR) in those with high-risk cytogenetics.
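The headline 60% figure is simple response arithmetic based on the counts above (the subgroup denominators were not stated, so only the overall rate is computed here):

    # Response arithmetic for the overall cohort (counts from the paragraph above).
    responders = 1 + 4 + 11      # stringent CR + VGPR + PR
    evaluable = 27
    print(f"ORR = {responders}/{evaluable} = {responders / evaluable:.0%}")  # ~59%, reported as 60%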
“The regimen shows promising anti-myeloma activity in heavily pretreated patients” and had a “predictable and manageable side-effect profile,” Dr. Ashraf Badros of the University of Maryland Marlene and Stewart Greenebaum Cancer Center, Baltimore, said at the annual meeting of the American Society of Hematology.
The investigators hypothesized that blocking signaling of the programmed cell death protein 1 (PD-1) and its ligand PD-L1 with pembrolizumab (Keytruda) would activate multiple myeloma-specific cytotoxic T cells that could be further enhanced by the immunomodulator pomalidomide (Pomalyst).
The primary goal of the ongoing study is to establish the safety of the combination therapy.
In all, 33 patients received 28-day cycles of pembrolizumab 200 mg intravenously every 2 weeks plus pomalidomide 4 mg daily for 21 days and dexamethasone 40 mg weekly (20 mg for patients older than 70 years). After 24 months, responders will continue pomalidomide and dexamethasone alone until progression.
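A minimal sketch of that cycle structure, written out as a small helper function (illustrative only; the specific dosing days shown are assumptions, since the report specifies only frequencies):

    def cycle_plan(age_years: int) -> dict:
        """28-day cycle plan as described above; day assignments are assumed, not from the study."""
        dex_weekly_mg = 20 if age_years > 70 else 40   # age-adjusted dexamethasone
        return {
            "pembrolizumab": "200 mg IV every 2 weeks (assumed days 1 and 15)",
            "pomalidomide": "4 mg orally daily, days 1-21",
            "dexamethasone": f"{dex_weekly_mg} mg weekly (assumed days 1, 8, 15, 22)",
            "cycle_length_days": 28,
        }

    for age in (65, 75):
        print(age, cycle_plan(age))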
At enrollment, patients had to have relapsed and/or refractory disease after at least two lines of prior therapy including an IMiD and a PI, an ECOG performance status of less than 2, and adequate organ function.
Key exclusion criteria are an active autoimmune disease requiring systemic treatment or a history of severe autoimmune disease such as interstitial lung disease or active, non-infectious pneumonitis.
Patients received a median of three prior lines of therapy, 67% had prior autologous transplant, 89% were refractory to lenalidomide, and 70% were double-refractory to both IMiDs and PIs. The median age was 65 years (range 42-81 years), 73% were male, and 42% had high-risk deletion 17p and/or a translocation of chromosomes 14 and 16 [t(14;16)].
The median number of cycles was six, and median follow-up was short at 7.4 months.
The most common adverse events of all grades reported in 10% of patients were fatigue and hypoglycemia, mostly grades 1 and 2.
The most serious adverse events in the study were pneumonia and infection, including one death due to sepsis, Dr. Badros said. Two other patients died as a result of disease progression and one because of a cardiac event.
“We reported a lot, actually, of immune-related adverse events,” he said.
The investigators noted pneumonitis in 10% of patients, one case of which was grade 3, as well as hyperthyroidism and autoimmune hepatitis. Pembrolizumab was not stopped, and the pneumonitis was treated with steroids until symptoms resolved. Patients resumed the assigned doses, though one patient withdrew consent.
The pneumonitis did not appear to be associated with prior therapy and it “responded extremely quickly and well to the steroids, indicating it might be a cytokine release issue,” Dr. Badros said.
Five patients required pomalidomide dose reductions: two because of neutropenia and one each because of rash, palpitations, and fatigue. After the septic death, antibiotic prophylaxis was started in all patients, he said.
A total of 22 patients remain on study, with 7 patients discontinuing because of disease progression.
Given the short follow-up, it is “too early to talk about progression-free and overall-survival, but the signal we are getting is quite impressive,” Dr. Badros said.
The investigators also tried to look at PD-L1 expression in bone marrow samples collected at screening and on day 1 of cycle 3. No PD-L1 expression was found on plasma cells in the first patient, about 40% expression in the second, and 100% expression in the third, which is consistent with the heterogeneity of PD-L1 expression reported previously in the literature, Dr. Badros said. PD-L1 expression on bone marrow biopsies was very hard to standardize and they are exploring various methods to assess the impact of fixation and decalcification on level of expression, he added.
Dr. Badros disclosed off-label use of pembrolizumab and no relevant conflicts of interest.
AT ASH 2015
Key clinical point: Pembrolizumab in combination with pomalidomide and dexamethasone has shown strong clinical activity in heavily pretreated relapsed or refractory multiple myeloma.
Major finding: The overall response rate was 60% in 27 evaluable patients.
Data source: Ongoing phase II study of 33 patients with relapsed/refractory multiple myeloma.
Disclosures: Dr. Badros disclosed off-label use of pembrolizumab and no relevant conflicts of interest.
Study characterizes injury risk in cervical myelopathy patients
SAN DIEGO – Compared with age-matched controls, patients with cervical spondylotic myelopathy had a significantly increased incidence of falls, hip fractures, and other injuries, preliminary results from a study of Medicare data suggest.
“Cervical myelopathy is the most common cause of spinal cord dysfunction in patients over age 55,” Dr. Daniel J. Blizzard said at the annual meeting of the Cervical Spine Research Society. “In general, it’s cord compression secondary to their ossification of posterior longitudinal ligament, congenital stenosis, and/or degenerative changes to vertebral bodies, discs, and facet joints. These create an upper motor neuron lesion, which causes gait disturbances, imbalance, loss of manual dexterity and coordination, and sensory changes and weakness.”
Dr. Blizzard, an orthopedic surgery resident at Duke University, Durham, N.C., noted that myelopathy gait is the most common presenting symptom in cervical spondylotic myelopathy (CSM), affecting almost 30% of patients. “It’s present in three-quarters of CSM patients undergoing decompression,” he said. “Cord compression can lead to impaired proprioception, spasticity, and stiffness. We know that this gait dysfunction is multifactorial. Imbalance and unsteadiness lead to compensatory broad-based arrhythmic shuffling and clumsy-appearing gait to maintain balance.”
An estimated one-third of people over age 65 fall at least once per year, and this may lead to significant morbidity, including institutionalization, loss of independence, and mortality, Dr. Blizzard continued. “We know that gait dysfunction is a significant risk factor for falls,” he said. “This can be CSM, lower extremity osteoarthritis, deconditioning, or poor vision. The primary cause of a gait disturbance may not be accurately identified, especially if a more obvious cause is already known.”
The researchers set out to determine the fall and injury risk of patients with CSM, “with the goal of guiding attention to what we thought might be a potentially underestimated disease with regard to morbidity, and to provide data to consider when determining the type and timing of CSM treatment,” Dr. Blizzard said. They used the PearlDiver database to search the Medicare sample during 2005-2012, and used ICD-9 codes to identify patients with CSM. They also identified a subpopulation of CSM patients who underwent decompression, “not for the purpose of comparing the effect of decompression, but to identify a population with more severe disease,” he explained. They included a control population with no CSM, vestibular disease, or Parkinson’s disease.
Dr. Blizzard reported preliminary results from a total of 601,390 patients with CSM, 77,346 patients with CSM plus decompression, and 49,550,651 controls. They looked at the incidence of falls, head injuries, skull fractures, subdural hematomas, and other orthopedic injuries, including fractures of the hip, femur, leg, ankle, and pelvis, as well as lower extremity sprains. The researchers found that, when compared with controls, patients with CSM had a statistically significant increased incidence of all injuries, including hip fracture (risk ratio, 2.62), head injury (RR, 7.34), and fall (RR, 8.08). The incidence of hip fracture, head injury, and fall was also increased among the subset of CSM patients who had undergone decompression (RR of 2.25, 8.34, and 9.62, respectively).
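For context, each risk ratio here follows the standard epidemiologic definition rather than any calculation specific to this analysis: it compares the cumulative incidence of a given injury in the CSM cohort with that in the control cohort,

\[ \mathrm{RR} = \frac{I_{\text{CSM}}}{I_{\text{control}}} \]

so the reported RR of 8.08 for falls, for example, corresponds to falls being coded roughly eight times as often among CSM patients as among controls over the study period.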
Dr. Blizzard acknowledged certain limitations of the study, including its retrospective design. “Statistical and clinical significance are two very different things,” he emphasized. “When we get numbers this big, everything will become statistically significant, but whether things are clinically significant is up to interpretation. The presence of disease and complications is contingent upon proper coding and recognition by providers. We have no measures of severity, extent, or chronicity of disease.”
Despite such limitations, he concluded that the findings suggest that the impact of CSM on morbidity “is probably underestimated by many. Symptoms of CSM can be insidious or masked. Patients can often attribute these to normal effects of aging, and often primary care physicians will not recognize these initial symptoms, especially if there is another confounding presenting complaint.”
Conservative interventions for CSM patients, he said, include gait training/physical therapy, assistive aids, hip pads, exercise programs with balance training, and an assessment of hazards in the home environment. From a surgical standpoint, the findings raise the possibility that surgeons may want to “be more aggressive” in their decision to operate on patients with CSM. “This dataset is in no way able to address this question, but I think it provides interesting information regarding the true morbidity of the disease,” Dr. Blizzard said. “There is clear risk and morbidity with cervical compression. Studies show improvement in patients regardless of age, severity, and chronicity.”
Dr. Blizzard reported having no financial disclosures.
AT CSRS 2015
Key clinical point: Medicare patients with cervical spondylotic myelopathy face an increased risk of falls and fractures.
Major finding: Compared with controls, patients with CSM had a statistically significant increased incidence of all injuries, including hip fracture (risk ratio, 2.62), head injury (RR, 7.34), and fall (RR, 8.08).
Data source: A retrospective analysis of Medicare patients during 2005-2012, including 601,390 patients with CSM, 77,346 patients with CSM plus decompression, and 49,550,651 controls.
Disclosures: Dr. Blizzard reported having no financial disclosures.
Group recommends adding rituximab to ALL therapy
ORLANDO, FL—Investigators from the Group for Research on Adult Acute Lymphoblastic Leukemia (GRAALL) recommend integrating rituximab into the treatment of adult patients with acute lymphoblastic leukemia (ALL) based on results of the GRAALL-R 2005 study.
Patients who received rituximab as part of their therapy had an event-free survival (EFS) rate at 2 years of 65%, compared with 52% for patients who did not receive rituximab. After censoring for stem cell transplant in first complete remission, the benefit was even greater.
Sébastien Maury, MD, PhD, of Hôpital Henri Mondor in Créteil, France, presented the results during the plenary session of the 2015 ASH Annual Meeting as abstract 1.
Dr Maury said GRAALL-R 2005 is the first phase 3, randomized study to evaluate the role of rituximab in the treatment of B-cell precursor (BCP) ALL.
Only one previous study, he said, had suggested a potential benefit of adding rituximab, and that was in comparison with historical controls treated with chemotherapy alone.
He explained that, because the CD20 antigen is expressed at diagnosis in 30% to 40% of patients with BCP-ALL, investigators undertook to evaluate whether adding the anti-CD20 monoclonal antibody rituximab to the ALL treatment regimen could be beneficial for newly diagnosed Ph-negative BCP-ALL patients.
Study design & population
Investigators randomized 105 patients to receive the pediatric-inspired GRAALL protocol plus rituximab and 104 patients to the same regimen without rituximab.
Patients had to have 20% or more CD20-positive leukemic blasts.
Patients in the rituximab arm received 375 mg/m² during induction on days 1 and 7, during salvage reinduction (if needed) on days 1 and 7, during consolidation blocks (6 infusions), during late intensification on days 1 and 7, and during the first year of maintenance (6 infusions), for a total of 16 to 18 infusions (16 if salvage reinduction was not needed, 18 if it was).
“In this trial, allogeneic transplantation was offered in first remission to high-risk patients who were those patients with at least one of these baseline or response-related criteria,” Dr Maury said.
Investigators defined high-risk at baseline as having a white blood cell count of 30 x 10⁹/L or higher, CNS involvement, CD10-negative disease, or unfavorable cytogenetics.
And response-related criteria for high-risk disease included poor peripheral blast clearance after the 1-week steroid pre-phase, poor bone marrow blast clearance after the first week of chemotherapy, or no hematologic complete response after the first induction course.
Patient characteristics were well balanced between the arms, with a median age for the entire group of 40.2 years. Rituximab-treated patients had 61% CD20-positive blasts, and the no-rituximab arm had 69%.
More patients in the rituximab arm had a better ECOG performance status, although the difference was not significant. Thirteen percent were assessed as being grade 2 or higher in the rituximab arm, compared with 18% in the no-rituximab arm (P=0.06).
“The proportion of high-risk patients was comparable in both arms,” Dr Maury said, “representing around two-thirds of the study population.”
In the rituximab arm, 70% were considered high-risk, compared with 64% in the no-rituximab arm (P=0.46).
“However, despite this,” he said, “a significantly higher proportion of patients received allo transplant at first remission in the rituximab arm, 34% versus 20%. And since this was not explained by a different proportion of high-risk patients, this was probably due to differences in donor availability.”
Dr Maury noted that compliance with treatment was “quite good.”
Efficacy
The median follow-up was 30 months, and the primary endpoint was EFS.
The EFS rate for rituximab-treated patients at 2 years was 65%, compared with 52% for the non-rituximab patients (hazard ratio=0.66, P=0.038).
EFS was also significantly better with rituximab when patients were censored at allogeneic transplant (hazard ratio, 0.59; P=0.021).
However, there were no significant differences in early complete response rates, minimal residual disease (MRD) after induction, or MRD after consolidation.
“[O]nly 40% of patients could be centrally analyzed [for MRD],” Dr Maury explained, “which may be the reason why we could not detect any impact of rituximab on MRD.”
The cumulative incidence of relapse at 2 years was 18% in the rituximab arm and 32% in the no-rituximab arm (hazard ratio=0.52, P=0.017). And after censoring for stem cell transplant in first complete remission, the hazard ratio was 0.49 in favor of rituximab (P=0.018).
Overall survival (OS) was not significantly different between the arms. Rituximab-treated patients had an OS rate of 71%, compared with 64% in the no-rituximab arm (P=0.095).
“However, this difference became significant when censoring patients at time of allo-transplant,” Dr Maury said.
There was a 12% cumulative incidence of death in first complete remission at 2 years in each arm.
Investigators performed multivariate analysis and found that treatment with rituximab (P=0.020), age (P=0.022), white blood cell count of 30 x 10⁹/L or higher (P=0.005), and CNS involvement all significantly impacted EFS.
When they introduced stem cell transplant in first remission as a covariable, the same factors remained significant. Allogeneic stem cell transplant in first remission did not make a significant difference on EFS (P=0.62).
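For readers less familiar with the metric, the hazard ratios quoted in these efficacy results follow the usual survival-analysis definition rather than anything specific to GRAALL-R 2005: the ratio of the instantaneous event rates (hazards) in the two arms,

\[ \mathrm{HR}(t) = \frac{h_{\text{rituximab}}(t)}{h_{\text{no rituximab}}(t)} \]

so values below 1 favor rituximab; the EFS hazard ratio of 0.66, for example, corresponds to roughly a one-third lower instantaneous event rate in the rituximab arm.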
Safety
One hundred twenty-four patients reported 246 severe adverse events, the most frequent of which was infection—71 in the rituximab arm and 55 in the no-rituximab arm, a difference that was not significant (P=0.16).
Severe allergic events were significantly different between the arms, with 2 severe allergic events reported in the rituximab arm and 14 in the no-rituximab arm (P=0.002). Of these 16 events, all but one were due to asparaginase.
“We believe that this may reflect the protective effect of rituximab that might inhibit B-cell production of antibodies against asparaginase,” Dr Maury said, although the investigators did not actually measure the antibodies.
Severe lab abnormalities, neurologic and pulmonary events, coagulopathy, and cardiologic and gastrointestinal events were not significantly different between the arms.
Dr Maury emphasized that the addition of rituximab to standard intensive chemotherapy is well tolerated, significantly improves EFS, and prolongs OS in patients not receiving allogeneic transplant in first remission.
While the optimal dose schedule of rituximab still remains to be determined, the GRAALL investigators believe that “the addition of rituximab should be the new standard of care for these patients,” Dr Maury declared.