Inpatient Communication Barriers and Drivers When Caring for Limited English Proficiency Children


Immigrant children make up the fastest growing segment of the population in the United States.1 While most immigrant children are fluent in English, approximately 40% live with a parent who has limited English proficiency (LEP; ie, speaks English less than “very well”).2,3 In pediatrics, LEP status has been associated with longer hospitalizations,4 higher hospitalization costs,5 increased risk for serious adverse medical events,4,6 and more frequent emergency department reutilization.7 In the inpatient setting, multiple aspects of care present a variety of communication challenges,8 which are amplified by shift work and workflow complexity that result in patients and families interacting with numerous providers over the course of an inpatient stay.

Increasing access to trained professional interpreters when caring for LEP patients improves communication, patient satisfaction, and adherence, and reduces mortality.9-12 However, even when access to interpreter services is established, effective use is not guaranteed.13 Up to 57% of pediatricians report relying on family members to communicate with LEP patients and their caregivers;9 23% of pediatric residents categorized LEP encounters as frustrating, while 78% perceived care of LEP patients to be “misdirected” (eg, delay in diagnosis or discharge) because of associated language barriers.14

Understanding experiences of frontline inpatient medical providers and interpreters is crucial in identifying challenges and ways to optimize communication for hospitalized LEP patients and families. However, there is a paucity of literature exploring the perspectives of medical providers and interpreters as it relates to communication with hospitalized LEP children and families. In this study, we sought to identify barriers and drivers of effective communication with pediatric patients and families with LEP in the inpatient setting from the perspective of frontline medical providers and interpreters.

METHODS

Study Design

This qualitative study used Group Level Assessment (GLA), a structured participatory methodology that allows diverse groups of stakeholders to generate and evaluate data in interactive sessions.15-18 GLA structure promotes active participation, group problem-solving, and development of actionable plans, distinguishing it from focus groups and in-depth semistructured interviews.15,19 This study received a human subject research exemption from the institutional review board.

Study Setting

Cincinnati Children’s Hospital Medical Center (CCHMC) is a large quaternary care center with approximately 200 patient encounters each day that require the use of interpreter services. Interpreters (in-person, video, and phone) are utilized during admission, formal family-centered rounds, hospital discharge, and other encounters with physicians, nurses, and other healthcare professionals. In-person interpreters are available in-house for Spanish and Arabic, with 18 additional languages available through regional vendors. Despite available resources, there is no standard way in which medical providers and interpreters work with one another.

 

 

Study Participants and Recruitment

Medical providers who care for hospitalized general pediatric patients were eligible to participate, including attending physicians, resident physicians, bedside nurses, and inpatient ancillary staff (eg, respiratory therapists, physical therapists). Interpreters employed by CCHMC with experience in the inpatient setting were also eligible. Individuals were recruited based on published recommendations to optimize discussion and group-thinking.15 Each participant was asked to take part in one GLA session. Participants were assigned to specific sessions based on roles (ie, physicians, nurses, and interpreters) to maximize engagement and minimize the impact of hierarchy.

Study Procedure

GLA involves a seven-step structured process (Appendix 1): climate setting, generating, appreciating, reflecting, understanding, selecting, and action.15,18 Qualitative data were generated individually and anonymously by participants on flip charts in response to prompts such as: “I worry that LEP families___,” “The biggest challenge when using interpreter services is___,” and “I find___ works well in providing care for LEP families.” Prompts were developed by study investigators, modified based on input from nursing and interpreter services leadership, and finalized by GLA facilitators. Fifty-one unique prompts were utilized (Appendix 2); the number of prompts used (ranging from 15 to 32 prompts) per session was based on published recommendations.15 During sessions, study investigators took detailed notes, including verbatim transcription of participant quotes. Upon conclusion of the session, each participant completed a demographic survey, including years of experience, languages spoken and perceived fluency,20 and ethnicity.

Data Analysis

Within each session, under the guidance of trained and experienced GLA facilitators (WB, HV), participants distilled and summarized qualitative data into themes, discussed and prioritized themes, and generated action items. Following completion of all sessions, the analyzed data were compiled by the research team to determine similarities and differences across groups based on participant roles, consolidate themes into barriers and drivers of communication with LEP families, and determine any overlap of priorities for action. Findings were shared back with each group to ensure accuracy and relevance.

RESULTS

Participants

A total of 64 individuals participated (Table 1): hospital medicine physicians and residents (56%), inpatient nurses and ancillary staff (16%), and interpreters (28%). While 81% of physicians spoke multiple languages, only 25% reported speaking them well; two physicians were certified to communicate medical information without an interpreter present.

Themes Resulting from GLA Sessions

A total of four barriers (Table 2) and four drivers (Table 3) of effective communication with pediatric LEP patients and their families in the inpatient setting were identified by participants. Participants across all groups, despite enthusiasm around improving communication, were concerned about the quality of care LEP families received, noting that the system is “designed to deliver less-good care” and that “we really haven’t figured out how to care for [LEP patients and families] in a [high-]quality and reliable way.” Variation in theme discussion was noted between groups based on participant role: physicians voiced concern about rapport with LEP families, nurses emphasized actionable tasks, and interpreters focused on heightened challenges in times of stress.

 

 

Barrier 1: Difficulties Accessing Interpreter Services

Medical providers (physicians and nurses) identified the “opaque process to access [interpreter] services” as one of their biggest challenges when communicating with LEP families. In particular, the process of scheduling interpreters was described as a “black box,” with physicians and nurses expressing difficulty determining if and when in-person interpreters were scheduled and uncertainty about when to use modalities other than in-person interpretation. Participants across groups highlighted the lack of systems knowledge from medical providers and limitations within the system that make predictable, timely, and reliable access to interpreters challenging, especially for uncommon languages. Medical providers desired more in-person interpreters who can “stay as long as clinically indicated,” citing frustration associated with using phone- and video-interpretation (eg, challenges locating technology, unfamiliarity with use, unreliable functionality of equipment). Interpreters voiced wanting to take time to finish each encounter fully without “being in a hurry because the next appointment is coming soon” or “rushing… in [to the next] session sweating.”

Barrier 2: Uncertainty in Communication with LEP Families

Participants across all groups described three areas of uncertainty as detailed in Table 2: (1) what to share and how to prioritize information during encounters with LEP patients and families, (2) what is communicated during interpretation, and (3) what LEP patients and families understand.

Barrier 3: Unclear and Inconsistent Expectations and Roles of Team Members

Given the complexity involved in communication between medical providers, interpreters, and families, participants across all groups reported feeling ill-prepared when navigating hospital encounters with LEP patients and families. Interpreters reported having little to no clinical context, medical providers reported having no knowledge of the assigned interpreter’s style, and both interpreters and medical providers reported that families have little idea of what to expect or how to engage. All groups voiced frustration about the lack of clarity regarding specific roles and scope of practice for each team member during an encounter, where multiple people end up “talking [or] using the interpreter at once.” Interpreters shared their expectations of medical providers to set the pace and lead conversations with LEP families. On the other hand, medical providers expressed a desire for interpreters to provide cultural context to the team without prompting and to interrupt during encounters when necessary to voice concerns or redirect conversations.

Barrier 4: Unmet Family Engagement Expectations

Participants across all groups articulated challenges with establishing rapport with LEP patients and families, sharing concerns that “inadequate communication” due to “cultural or language barriers” ultimately impacts quality of care. Participants reported decreased bidirectional engagement with and from LEP families. Medical providers not only noted difficulty in connecting with LEP families “on a more personal level” and providing frequent medical updates, but also felt that LEP families do not ask questions even when uncertain. Interpreters expressed concerns about medical providers “not [having] enough patience to answer families’ questions” while LEP families “shy away from asking questions.”

Driver 1: Utilizing a Team-Based Approach between Medical Providers and Interpreters

 

 

Participants from all groups emphasized that a mutual understanding of roles and shared expectations regarding communication and interpretation style, clinical context, and time constraints would establish a foundation for respect between medical providers and interpreters. They reported that a team-based approach to LEP patient and family encounters was crucial to achieving effective communication.

Driver 2: Understanding the Role of Cultural Context in Providing Culturally Effective Care

Participants across all groups highlighted three different aspects of cultural context that drive effective communication: (1) medical providers’ perception of the family’s culture; (2) LEP families’ knowledge about the culture and healthcare system in the US; and (3) medical providers’ insight into their own preconceived ideas about LEP families.

Driver 3: Practicing Empathy for Patients and Families

All participants reported that respect for diversity and consideration of the backgrounds and perspectives of LEP patients and families are necessary. Furthermore, both medical providers and interpreters articulated a need to remain patient and mindful when interacting with LEP families despite challenges, especially since, as noted by interpreters, encounters may “take longer, but it’s for a reason.”

Driver 4: Using Effective Family-Centered Communication Strategies

Participants identified the use of effective family-centered communication principles as a driver of optimal communication. Many of the principles identified by medical providers and interpreters are generally applicable to all hospitalized patients and families regardless of English proficiency: optimizing verbal communication (eg, using shorter sentences, pausing to allow for interpretation), optimizing nonverbal communication (eg, setting, position, and body language), and assessment of family understanding and engagement (eg, use of teach-back).

DISCUSSION

Frontline medical providers and interpreters identified barriers and drivers that impact communication with LEP patients and families during hospitalization. To our knowledge, this is the first study that uses a participatory method to explore the perspectives of medical providers and interpreters who care for LEP children and families in the inpatient setting. Despite existing difficulties and concerns regarding language barriers and their impact on quality of care for hospitalized LEP patients and families, participants were enthusiastic about how identified barriers and drivers may inform future improvement efforts. Notable action steps for future improvement discussed by our participants included: increased use and functionality of technology for timely and predictable access to interpreters, deliberate training for providers focused on delivery of culturally effective care, consistent use of family-centered communication strategies including teach-back, and implementing interdisciplinary expectation setting through “presessions” before encounters with LEP families.

Participants elaborated on several barriers previously described in the literature, including time constraints and technical problems.14,21,22 Such barriers may serve as deterrents to consistent and appropriate use of interpreters in healthcare settings.9 A heavy reliance on off-site interpreters (including phone- or video-interpreters) and lack of knowledge regarding resource availability likely amplified frustration for medical providers. Communication with LEP families can be daunting, especially when medical providers do not care for LEP families or work with interpreters on a regular basis.14 Standardizing the education of medical providers regarding available resources, as well as the logistics, process, and parameters for scheduling interpreters and using technology, was an action step identified by our GLA participants. Targeted education about the logistics of accessing interpreter services, along with standardized ways to make technology use easier (eg, one-touch dialing in hospital rooms), has been associated with increased interpreter use and decreased interpreter-related delays in care.23

Our frontline medical providers expressed added concern about not spending as much time with LEP families. In fact, LEP families in the literature have perceived medical providers to spend less time with their children compared to their English-proficient counterparts.24 Language and cultural barriers, both perceived and real, may limit medical provider rapport with LEP patients and families14 and likely contribute to medical providers relying on their preconceived assumptions instead.25 Cultural competency education for medical providers, as highlighted by our GLA participants as an action item, can be used to provide more comprehensive and effective care.26,27

In addition to enhancing cultural humility through education, our participants emphasized the use of family-centered communication strategies as a driver of optimal family engagement and understanding. Actively inviting questions from families and utilizing teach-back, an established evidence-based strategy28-30 discussed by our participants, can be particularly powerful in assessing family understanding and engagement. While information should be presented in plain language for families in all encounters,31 these evidence-based practices are of particular importance when communicating with LEP families. They promote effective communication, empower families to share concerns in a structured manner, and allow medical providers to address matters in real-time with interpreters present.

Finally, our participants highlighted the need for partnerships between providers and interpreter services, noting unclear roles and expectations among interpreters and medical providers as a major barrier. Specifically, physicians noted confusion regarding the scope of an interpreter’s practice. Participants from GLA sessions discussed the importance of a team-based approach and suggested implementing a “presession” prior to encounters with LEP patients and families. Presessions—a concept well accepted among interpreters and recommended by consensus-based practice guidelines—enable medical providers and interpreters to establish shared expectations about scope of practice, communication, interpretation style, time constraints, and medical context prior to patient encounters.32,33

There are several limitations to our study. First, individuals who chose to participate were likely highly motivated by their clinical experiences with LEP patients and invested in improving communication with LEP families. Second, the study is limited in generalizability, as it was conducted at a single academic institution in a Midwestern city. Despite regional variations in available resources as well as patient and workforce demographics, our findings regarding major themes are in agreement with previously published literature and further add to our understanding of ways to improve communication with this vulnerable population across the care spectrum. Lastly, the participatory nature of GLA logistically limited our ability to elicit the perspectives of LEP families; the need for multiple interpreters to simultaneously interact with LEP individuals would not only have hindered active LEP family participation but may also have biased the data generated by patients and families, as the services interpreters provide during the inpatient stay were the focus of our study. Engaging LEP families in their preferred language using participatory methods should be considered for future studies.

In conclusion, frontline providers of medical and language services identified barriers and drivers impacting the effective use of interpreter services when communicating with LEP families during hospitalization. Our enhanced understanding of barriers and drivers, as well as identified actionable interventions, will inform future improvement of communication and interactions with LEP families, contributing to effective and efficient family-centered care. A framework for the development and implementation of organizational strategies aimed at improving communication with LEP families must include a thorough assessment of impact, feasibility, stakeholder involvement, and sustainability of specific interventions. While there is no simple formula to improve language services, health systems should establish and adopt language access policies, standardize communication practices, and develop processes to optimize the use of language services in the hospital. Furthermore, engagement with LEP families to better understand their perceptions and experiences with the healthcare system is crucial to improve communication between medical providers and LEP families in the inpatient setting and should be the subject of future studies.

Disclosures

The authors have no conflicts of interest to disclose.

Funding

No external funding was secured for this study. Dr. Joanna Thomson is supported by the Agency for Healthcare Research and Quality (Grant #K08 HS025138). Dr. Raglin Bignall was supported through a Ruth L. Kirschstein National Research Service Award (T32HP10027) when the study was conducted. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding organizations. The funding organizations had no role in the design, preparation, review, or approval of this paper.

 

References

1. The American Academy of Pediatrics Council on Community Pediatrics. Providing care for immigrant, migrant, and border children. Pediatrics. 2013;131(6):e2028-e2034.
2. Meneses C, Chilton L, Duffee J, et al. Council on Community Pediatrics Immigrant Health Tool Kit. The American Academy of Pediatrics. https://www.aap.org/en-us/Documents/cocp_toolkit_full.pdf. Accessed May 13, 2019.
3. Office for Civil Rights. Guidance to Federal Financial Assistance Recipients Regarding Title VI and the Prohibition Against National Origin Discrimination Affecting Limited English Proficient Persons. https://www.hhs.gov/civil-rights/for-individuals/special-topics/limited-english-proficiency/guidance-federal-financial-assistance-recipients-title-vi/index.html. Accessed May 13, 2019.
4. Lion KC, Rafton SA, Shafii J, et al. Association between language, serious adverse events, and length of stay among hospitalized children. Hosp Pediatr. 2013;3(3):219-225. https://doi.org/10.1542/hpeds.2012-0091.
5. Lion KC, Wright DR, Desai AD, Mangione-Smith R. Costs of care for hospitalized children associated with preferred language and insurance type. Hosp Pediatr. 2017;7(2):70-78. https://doi.org/10.1542/hpeds.2016-0051.
6. Cohen AL, Rivara F, Marcuse EK, McPhillips H, Davis R. Are language barriers associated with serious medical events in hospitalized pediatric patients? Pediatrics. 2005;116(3):575-579. https://doi.org/10.1542/peds.2005-0521.
7. Samuels-Kalow ME, Stack AM, Amico K, Porter SC. Parental language and return visits to the emergency department after discharge. Pediatr Emerg Care. 2017;33(6):402-404. https://doi.org/10.1097/PEC.0000000000000592.
8. Unaka NI, Statile AM, Choe A, Shonna Yin H. Addressing health literacy in the inpatient setting. Curr Treat Options Pediatr. 2018;4(2):283-299. https://doi.org/10.1007/s40746-018-0122-3.
9. DeCamp LR, Kuo DZ, Flores G, O’Connor K, Minkovitz CS. Changes in language services use by US pediatricians. Pediatrics. 2013;132(2):e396-e406. https://doi.org/10.1542/peds.2012-2909.
10. Flores G. The impact of medical interpreter services on the quality of health care: A systematic review. Med Care Res Rev. 2005;62(3):255-299. https://doi.org/10.1177/1077558705275416.
11. Flores G, Abreu M, Barone CP, Bachur R, Lin H. Errors of medical interpretation and their potential clinical consequences: A comparison of professional versus ad hoc versus no interpreters. Ann Emerg Med. 2012;60(5):545-553. https://doi.org/10.1016/j.annemergmed.2012.01.025.
12. Anand KJ, Sepanski RJ, Giles K, Shah SH, Juarez PD. Pediatric intensive care unit mortality among Latino children before and after a multilevel health care delivery intervention. JAMA Pediatr. 2015;169(4):383-390. https://doi.org/10.1001/jamapediatrics.2014.3789.
13. The Joint Commission. Advancing Effective Communication, Cultural Competence, and Patient- and Family-Centered Care: A Roadmap for Hospitals. Oakbrook Terrace, IL: The Joint Commission; 2010.
14. Hernandez RG, Cowden JD, Moon M, et al. Predictors of resident satisfaction in caring for limited English proficient families: a multisite study. Acad Pediatr. 2014;14(2):173-180. https://doi.org/10.1016/j.acap.2013.12.002.
15. Vaughn LM, Lohmueller M. Calling all stakeholders: group-level assessment (GLA)-a qualitative and participatory method for large groups. Eval Rev. 2014;38(4):336-355. https://doi.org/10.1177/0193841X14544903.
16. Vaughn LM, Jacquez F, Zhao J, Lang M. Partnering with students to explore the health needs of an ethnically diverse, low-resource school: an innovative large group assessment approach. Fam Commun Health. 2011;34(1):72-84. https://doi.org/10.1097/FCH.0b013e3181fded12.
17. Gosdin CH, Vaughn L. Perceptions of physician bedside handoff with nurse and family involvement. Hosp Pediatr. 2012;2(1):34-38. https://doi.org/10.1542/hpeds.2011-0008-2.
18. Graham KE, Schellinger AR, Vaughn LM. Developing strategies for positive change: transitioning foster youth to adulthood. Child Youth Serv Rev. 2015;54:71-79. https://doi.org/10.1016/j.childyouth.2015.04.014.
19. Vaughn LM. Group Level Assessment: A Large Group Method for Identifying Primary Issues and Needs Within a Community. London: SAGE Publications; 2014. http://methods.sagepub.com/case/group-level-assessment-large-group-primary-issues-needs-community. Accessed July 26, 2017.
20. Association of American Medical Colleges Electronic Residency Application Service. ERAS 2018 MyERAS Application Worksheet: Language Fluency. Washington, DC: Association of American Medical Colleges; 2018:5.
21. Brisset C, Leanza Y, Laforest K. Working with interpreters in health care: A systematic review and meta-ethnography of qualitative studies. Patient Educ Couns. 2013;91(2):131-140. https://doi.org/10.1016/j.pec.2012.11.008.
22. Wiking E, Saleh-Stattin N, Johansson SE, Sundquist J. A description of some aspects of the triangular meeting between immigrant patients, their interpreters and GPs in primary health care in Stockholm, Sweden. Fam Pract. 2009;26(5):377-383. https://doi.org/10.1093/fampra/cmp052.
23. Lion KC, Ebel BE, Rafton S, et al. Evaluation of a quality improvement intervention to increase use of telephonic interpretation. Pediatrics. 2015;135(3):e709-e716. https://doi.org/10.1542/peds.2014-2024.
24. Zurca AD, Fisher KR, Flor RJ, et al. Communication with limited English-proficient families in the PICU. Hosp Pediatr. 2017;7(1):9-15. https://doi.org/10.1542/hpeds.2016-0071.
25. Kodjo C. Cultural competence in clinician communication. Pediatr Rev. 2009;30(2):57-64. https://doi.org/10.1542/pir.30-2-57.
26. Britton CV, American Academy of Pediatrics Committee on Pediatric Workforce. Ensuring culturally effective pediatric care: implications for education and health policy. Pediatrics. 2004;114(6):1677-1685. https://doi.org/10.1542/peds.2004-2091.
27. The American Academy of Pediatrics. Culturally Effective Care Toolkit: Providing Culturally Effective Pediatric Care; 2018. https://www.aap.org/en-us/professional-resources/practice-transformation/managing-patients/Pages/effective-care.aspx. Accessed May 13, 2019.
28. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812. https://doi.org/10.1056/NEJMsa1405556.
29. Jager AJ, Wynia MK. Who gets a teach-back? Patient-reported incidence of experiencing a teach-back. J Health Commun. 2012;17 Supplement 3:294-302. https://doi.org/10.1080/10810730.2012.712624.
30. Kornburger C, Gibson C, Sadowski S, Maletta K, Klingbeil C. Using “teach-back” to promote a safe transition from hospital to home: an evidence-based approach to improving the discharge process. J Pediatr Nurs. 2013;28(3):282-291. https://doi.org/10.1016/j.pedn.2012.10.007.
31. Abrams MA, Klass P, Dreyer BP. Health literacy and children: recommendations for action. Pediatrics. 2009;124 Supplement 3:S327-S331. https://doi.org/10.1542/peds.2009-1162I.
32. Betancourt JR, Renfrew MR, Green AR, Lopez L, Wasserman M. Improving Patient Safety Systems for Patients with Limited English Proficiency: a Guide for Hospitals. Agency for Healthcare Research and Quality; 2012.
33. The National Council on Interpreting in Health Care. Best Practices for Communicating Through an Interpreter. https://refugeehealthta.org/access-to-care/language-access/best-practices-communicating-through-an-interpreter/. Accessed May 19, 2019.

Journal of Hospital Medicine. 2019;14(10):607-613. Published online first July 24, 2019.


Barrier 2: Uncertainty in Communication with LEP Families

Participants across all groups described three areas of uncertainty as detailed in Table 2: (1) what to share and how to prioritize information during encounters with LEP patients and families, (2) what is communicated during interpretation, and (3) what LEP patients and families understand.

Barrier 3: Unclear and Inconsistent Expectations and Roles of Team Members

Given the complexity involved in communication between medical providers, interpreters, and families, participants across all groups reported feeling ill-prepared when navigating hospital encounters with LEP patients and families. Interpreters reported having little to no clinical context, medical providers reported having no knowledge of the assigned interpreter’s style, and both interpreters and medical providers reported that families have little idea of what to expect or how to engage. All groups voiced frustration about the lack of clarity regarding specific roles and scope of practice for each team member during an encounter, where multiple people end up “talking [or] using the interpreter at once.” Interpreters shared their expectations of medical providers to set the pace and lead conversations with LEP families. On the other hand, medical providers expressed a desire for interpreters to provide cultural context to the team without prompting and to interrupt during encounters when necessary to voice concerns or redirect conversations.

Barrier 4: Unmet Family Engagement Expectations

Participants across all groups articulated challenges with establishing rapport with LEP patients and families, sharing concerns that “inadequate communication” due to “cultural or language barriers” ultimately impacts quality of care. Participants reported decreased bidirectional engagement with and from LEP families. Medical providers not only noted difficulty in connecting with LEP families “on a more personal level” and providing frequent medical updates, but also felt that LEP families do not ask questions even when uncertain. Interpreters expressed concerns about medical providers “not [having] enough patience to answer families’ questions” while LEP families “shy away from asking questions.”

Driver 1: Utilizing a Team-Based Approach between Medical Providers and Interpreters

 

 

Participants from all groups emphasized that a mutual understanding of roles and shared expectations regarding communication and interpretation style, clinical context, and time constraints would establish a foundation for respect between medical providers and interpreters. They reported that a team-based approach to LEP patient and family encounters were crucial to achieving effective communication.

Driver 2: Understanding the Role of Cultural Context in Providing Culturally Effective Care.

Participants across all groups highlighted three different aspects of cultural context that drive effective communication: (1) medical providers’ perception of the family’s culture; (2) LEP families’ knowledge about the culture and healthcare system in the US, and (3) medical providers insight into their own preconceived ideas about LEP families.

Driver 3: Practicing Empathy for Patients and Families

All participants reported that respect for diversity and consideration of the backgrounds and perspectives of LEP patients and families are necessary. Furthermore, both medical providers and interpreters articulated a need to remain patient and mindful when interacting with LEP families despite challenges, especially since, as noted by interpreters, encounters may “take longer, but it’s for a reason.”

Driver 4: Using Effective Family-Centered Communication Strategies

Participants identified the use of effective family-centered communication principles as a driver to optimal communication. Many of the principles identified by medical providers and interpreters are generally applicable to all hospitalized patients and families regardless of English proficiency: optimizing verbal communication (eg, using shorter sentences, pausing to allow for interpretation), optimizing nonverbal communication (eg, setting, position, and body language), and assessment of family understanding and engagement (eg, use of teach back).

DISCUSSION

Frontline medical providers and interpreters identified barriers and drivers that impact communication with LEP patients and families during hospitalization. To our knowledge, this is the first study that uses a participatory method to explore the perspectives of medical providers and interpreters who care for LEP children and families in the inpatient setting. Despite existing difficulties and concerns regarding language barriers and its impact on quality of care for hospitalized LEP patients and families, participants were enthusiastic about how identified barriers and drivers may inform future improvement efforts. Notable action steps for future improvement discussed by our participants included: increased use and functionality of technology for timely and predictable access to interpreters, deliberate training for providers focused on delivery of culturally-effective care, consistent use of family-centered communication strategies including teach-back, and implementing interdisciplinary expectation setting through “presessions” before encounters with LEP families.

Participants elaborated on several barriers previously described in the literature including time constraints and technical problems.14,21,22 Such barriers may serve as deterrents to consistent and appropriate use of interpreters in healthcare settings.9 A heavy reliance on off-site interpreters (including phone- or video-interpreters) and lack of knowledge regarding resource availability likely amplified frustration for medical providers. Communication with LEP families can be daunting, especially when medical providers do not care for LEP families or work with interpreters on a regular basis.14 Standardizing the education of medical providers regarding available resources, as well as the logistics, process, and parameters for scheduling interpreters and using technology, was an action step identified by our GLA participants. Targeted education about the logistics of accessing interpreter services and having standardized ways to make technology use easier (ie, one-touch dialing in hospital rooms) has been associated with increased interpreter use and decreased interpreter-related delays in care.23

Our frontline medical providers expressed added concern about not spending as much time with LEP families. In fact, LEP families in the literature have perceived medical providers to spend less time with their children compared to their English-proficient counterparts.24 Language and cultural barriers, both perceived and real, may limit medical provider rapport with LEP patients and families14 and likely contribute to medical providers relying on their preconceived assumptions instead.25 Cultural competency education for medical providers, as highlighted by our GLA participants as an action item, can be used to provide more comprehensive and effective care.26,27

In addition to enhancing cultural humility through education, our participants emphasized the use of family-centered communication strategies as a driver of optimal family engagement and understanding. Actively inviting questions from families and utilizing teach-back, an established evidence-based strategy28-30 discussed by our participants, can be particularly powerful in assessing family understanding and engagement. While information should be presented in plain language for families in all encounters,31 these evidence-based practices are of particular importance when communicating with LEP families. They promote effective communication, empower families to share concerns in a structured manner, and allow medical providers to address matters in real-time with interpreters present.

Finally, our participants highlighted the need for partnerships between providers and interpreter services, noting unclear roles and expectations among interpreters and medical providers as a major barrier. Specifically, physicians noted confusion regarding the scope of an interpreter’s practice. Participants from GLA sessions discussed the importance of a team-based approach and suggested implementing a “presession” prior to encounters with LEP patients and families. Presessions—a concept well accepted among interpreters and recommended by consensus-based practice guidelines—enable medical providers and interpreters to establish shared expectations about scope of practice, communication, interpretation style, time constraints, and medical context prior to patient encounters.32,33

There are several limitations to our study. First, individuals who chose to participate were likely highly motivated by their clinical experiences with LEP patients and invested in improving communication with LEP families. Second, the study is limited in generalizability, as it was conducted at a single academic institution in a Midwestern city. Despite regional variations in available resources as well as patient and workforce demographics, our findings regarding major themes are in agreement with previously published literature and further add to our understanding of ways to improve communication with this vulnerable population across the care spectrum. Lastly, we were logistically limited in our ability to elicit the perspectives of LEP families due to the participatory nature of GLA; the need for multiple interpreters to simultaneously interact with LEP individuals would have not only hindered active LEP family participation but may have also biased the data generated by patients and families, as the services interpreters provide during their inpatient stay was the focus of our study. Engaging LEP families in their preferred language using participatory methods should be considered for future studies.

In conclusion, frontline providers of medical and language services identified barriers and drivers impacting the effective use of interpreter services when communicating with LEP families during hospitalization. Our enhanced understanding of barriers and drivers, as well as identified actionable interventions, will inform future improvement of communication and interactions with LEP families that contributes to effective and efficient family centered care. A framework for the development and implementation of organizational strategies aimed at improving communication with LEP families must include a thorough assessment of impact, feasibility, stakeholder involvement, and sustainability of specific interventions. While there is no simple formula to improve language services, health systems should establish and adopt language access policies, standardize communication practices, and develop processes to optimize the use of language services in the hospital. Furthermore, engagement with LEP families to better understand their perceptions and experiences with the healthcare system is crucial to improve communication between medical providers and LEP families in the inpatient setting and should be the subject of future studies.

Disclosures

The authors have no conflicts of interest to disclose.

Funding

No external funding was secured for this study. Dr. Joanna Thomson is supported by the Agency for Healthcare Research and Quality (Grant #K08 HS025138). Dr. Raglin Bignall was supported through a Ruth L. Kirschstein National Research Service Award (T32HP10027) when the study was conducted. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding organizations. The funding organizations had no role in the design, preparation, review, or approval of this paper.

 

Immigrant children make up the fastest growing segment of the population in the United States.1 While most immigrant children are fluent in English, approximately 40% live with a parent who has limited English proficiency (LEP; ie, speaks English less than “very well”).2,3 In pediatrics, LEP status has been associated with longer hospitalizations,4 higher hospitalization costs,5 increased risk for serious adverse medical events,4,6 and more frequent emergency department reutilization.7 In the inpatient setting, multiple aspects of care present a variety of communication challenges,8 which are amplified by shift work and workflow complexity that result in patients and families interacting with numerous providers over the course of an inpatient stay.

Increasing access to trained professional interpreters when caring for LEP patients improves communication, patient satisfaction, adherence, and mortality.9-12 However, even when access to interpreter services is established, effective use is not guaranteed.13 Up to 57% of pediatricians report relying on family members to communicate with LEP patients and their caregivers;9 23% of pediatric residents categorized LEP encounters as frustrating while 78% perceived care of LEP patients to be “misdirected” (eg, delay in diagnosis or discharge) because of associated language barriers.14

Understanding experiences of frontline inpatient medical providers and interpreters is crucial in identifying challenges and ways to optimize communication for hospitalized LEP patients and families. However, there is a paucity of literature exploring the perspectives of medical providers and interpreters as it relates to communication with hospitalized LEP children and families. In this study, we sought to identify barriers and drivers of effective communication with pediatric patients and families with LEP in the inpatient setting from the perspective of frontline medical providers and interpreters.

METHODS

Study Design

This qualitative study used Group Level Assessment (GLA), a structured participatory methodology that allows diverse groups of stakeholders to generate and evaluate data in interactive sessions.15-18 GLA structure promotes active participation, group problem-solving, and development of actionable plans, distinguishing it from focus groups and in-depth semistructured interviews.15,19 This study received a human subject research exemption by the institutional review board.

Study Setting

Cincinnati Children’s Hospital Medical Center (CCHMC) is a large quaternary care center with approximately 200 patient encounters each day that require the use of interpreter services. Interpreters (in-person, video, and phone) are utilized during admission, formal family-centered rounds, hospital discharge, and other encounters with physicians, nurses, and other healthcare professionals. In-person interpreters are available in-house for Spanish and Arabic, with 18 additional languages available through regional vendors. Despite these available resources, there is no standard way in which medical providers and interpreters work with one another.

Study Participants and Recruitment

Medical providers who care for hospitalized general pediatric patients were eligible to participate, including attending physicians, resident physicians, bedside nurses, and inpatient ancillary staff (eg, respiratory therapists, physical therapists). Interpreters employed by CCHMC with experience in the inpatient setting were also eligible. Individuals were recruited based on published recommendations to optimize discussion and group-thinking.15 Each participant was asked to take part in one GLA session. Participants were assigned to specific sessions based on roles (ie, physicians, nurses, and interpreters) to maximize engagement and minimize the impact of hierarchy.

Study Procedure

GLA involves a seven-step structured process (Appendix 1): climate setting, generating, appreciating, reflecting, understanding, selecting, and action.15,18 Qualitative data were generated individually and anonymously by participants on flip charts in response to prompts such as: “I worry that LEP families___,” “The biggest challenge when using interpreter services is___,” and “I find___ works well in providing care for LEP families.” Prompts were developed by study investigators, modified based on input from nursing and interpreter services leadership, and finalized by GLA facilitators. Fifty-one unique prompts were utilized (Appendix 2); the number of prompts used (ranging from 15 to 32 prompts) per session was based on published recommendations.15 During sessions, study investigators took detailed notes, including verbatim transcription of participant quotes. Upon conclusion of the session, each participant completed a demographic survey, including years of experience, languages spoken and perceived fluency,20 and ethnicity.

Data Analysis

Within each session, under the guidance of trained and experienced GLA facilitators (WB, HV), participants distilled and summarized qualitative data into themes, discussed and prioritized themes, and generated action items. Following completion of all sessions, the analyzed data were compiled by the research team to determine similarities and differences across groups based on participant roles, consolidate themes into barriers and drivers of communication with LEP families, and determine any overlap of priorities for action. Findings were shared back with each group to ensure accuracy and relevance.
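The cross-session consolidation step described above can be sketched programmatically. This is a purely illustrative sketch, not the authors' actual analysis code, and the theme labels and group assignments below are hypothetical placeholders:

```python
# Illustrative sketch of the cross-session consolidation step:
# themes generated in role-specific GLA sessions are pooled, and themes
# endorsed by multiple role groups are flagged as shared barriers/drivers.
# Theme labels here are hypothetical, not the study's data.
from collections import defaultdict

session_themes = {
    "physicians": ["access to interpreters", "rapport with families"],
    "nurses": ["access to interpreters", "unclear team roles"],
    "interpreters": ["unclear team roles", "access to interpreters"],
}

# Map each theme to the set of role groups that raised it.
theme_to_groups = defaultdict(set)
for group, themes in session_themes.items():
    for theme in themes:
        theme_to_groups[theme].add(group)

# Themes raised by two or more role groups are candidates for
# consolidated, cross-cutting barriers or drivers.
shared = {t: sorted(g) for t, g in theme_to_groups.items() if len(g) >= 2}
print(shared)
```

In this toy example, "access to interpreters" surfaces in all three sessions and "unclear team roles" in two, so both would be carried forward as cross-group themes, while the single-group theme would prompt a closer look at role-specific variation.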

RESULTS

Participants

A total of 64 individuals participated (Table 1): hospital medicine physicians and residents (56%), inpatient nurses and ancillary staff (16%), and interpreters (28%). While 81% of physicians spoke multiple languages, only 25% reported speaking them well; two physicians were certified to communicate medical information without an interpreter present.
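The rounded percentages above imply approximate headcounts per role group. The counts below are a back-calculated assumption for illustration (the exact figures are in Table 1), not data taken from the article:

```python
# Sanity check of the participant distribution: counts implied by the
# rounded percentages (56%, 16%, 28%) of N = 64. These counts are an
# assumption reconstructed from the rounding, not reported data.
N_TOTAL = 64
implied_counts = {
    "physicians_and_residents": 36,  # 36/64 = 56.25% -> 56%
    "nurses_and_ancillary": 10,      # 10/64 = 15.6%  -> 16%
    "interpreters": 18,              # 18/64 = 28.1%  -> 28%
}

# The implied counts must sum to the reported total.
assert sum(implied_counts.values()) == N_TOTAL

for role, n in implied_counts.items():
    print(f"{role}: {n} ({round(100 * n / N_TOTAL)}%)")
```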

Themes Resulting from GLA Sessions

A total of four barriers (Table 2) and four drivers (Table 3) of effective communication with pediatric LEP patients and their families in the inpatient setting were identified by participants. Participants across all groups, despite enthusiasm around improving communication, were concerned about the quality of care that LEP families received, noting that the system is “designed to deliver less-good care” and that “we really haven’t figured out how to care for [LEP patients and families] in a [high-]quality and reliable way.” Variation in theme discussion was noted between groups based on participant role: physicians voiced concern about rapport with LEP families, nurses emphasized actionable tasks, and interpreters focused on heightened challenges in times of stress.

Barrier 1: Difficulties Accessing Interpreter Services

Medical providers (physicians and nurses) identified the “opaque process to access [interpreter] services” as one of their biggest challenges when communicating with LEP families. In particular, the process of scheduling interpreters was described as a “black box,” with physicians and nurses expressing difficulty determining if and when in-person interpreters were scheduled and uncertainty about when to use modalities other than in-person interpretation. Participants across groups highlighted the lack of systems knowledge from medical providers and limitations within the system that make predictable, timely, and reliable access to interpreters challenging, especially for uncommon languages. Medical providers desired more in-person interpreters who can “stay as long as clinically indicated,” citing frustration associated with using phone- and video-interpretation (eg, challenges locating technology, unfamiliarity with use, unreliable functionality of equipment). Interpreters voiced wanting to take time to finish each encounter fully without “being in a hurry because the next appointment is coming soon” or “rushing… in [to the next] session sweating.”

Barrier 2: Uncertainty in Communication with LEP Families

Participants across all groups described three areas of uncertainty as detailed in Table 2: (1) what to share and how to prioritize information during encounters with LEP patients and families, (2) what is communicated during interpretation, and (3) what LEP patients and families understand.

Barrier 3: Unclear and Inconsistent Expectations and Roles of Team Members

Given the complexity involved in communication between medical providers, interpreters, and families, participants across all groups reported feeling ill-prepared when navigating hospital encounters with LEP patients and families. Interpreters reported having little to no clinical context, medical providers reported having no knowledge of the assigned interpreter’s style, and both interpreters and medical providers reported that families have little idea of what to expect or how to engage. All groups voiced frustration about the lack of clarity regarding specific roles and scope of practice for each team member during an encounter, where multiple people end up “talking [or] using the interpreter at once.” Interpreters shared their expectations of medical providers to set the pace and lead conversations with LEP families. On the other hand, medical providers expressed a desire for interpreters to provide cultural context to the team without prompting and to interrupt during encounters when necessary to voice concerns or redirect conversations.

Barrier 4: Unmet Family Engagement Expectations

Participants across all groups articulated challenges with establishing rapport with LEP patients and families, sharing concerns that “inadequate communication” due to “cultural or language barriers” ultimately impacts quality of care. Participants reported decreased bidirectional engagement with and from LEP families. Medical providers not only noted difficulty in connecting with LEP families “on a more personal level” and providing frequent medical updates, but also felt that LEP families do not ask questions even when uncertain. Interpreters expressed concerns about medical providers “not [having] enough patience to answer families’ questions” while LEP families “shy away from asking questions.”

Driver 1: Utilizing a Team-Based Approach between Medical Providers and Interpreters

Participants from all groups emphasized that a mutual understanding of roles and shared expectations regarding communication and interpretation style, clinical context, and time constraints would establish a foundation for respect between medical providers and interpreters. They reported that a team-based approach to LEP patient and family encounters was crucial to achieving effective communication.

Driver 2: Understanding the Role of Cultural Context in Providing Culturally Effective Care

Participants across all groups highlighted three different aspects of cultural context that drive effective communication: (1) medical providers’ perception of the family’s culture; (2) LEP families’ knowledge about the culture and healthcare system in the US; and (3) medical providers’ insight into their own preconceived ideas about LEP families.

Driver 3: Practicing Empathy for Patients and Families

All participants reported that respect for diversity and consideration of the backgrounds and perspectives of LEP patients and families are necessary. Furthermore, both medical providers and interpreters articulated a need to remain patient and mindful when interacting with LEP families despite challenges, especially since, as noted by interpreters, encounters may “take longer, but it’s for a reason.”

Driver 4: Using Effective Family-Centered Communication Strategies

Participants identified the use of effective family-centered communication principles as a driver of optimal communication. Many of the principles identified by medical providers and interpreters are generally applicable to all hospitalized patients and families regardless of English proficiency: optimizing verbal communication (eg, using shorter sentences, pausing to allow for interpretation), optimizing nonverbal communication (eg, setting, position, and body language), and assessing family understanding and engagement (eg, use of teach-back).

DISCUSSION

Frontline medical providers and interpreters identified barriers and drivers that impact communication with LEP patients and families during hospitalization. To our knowledge, this is the first study that uses a participatory method to explore the perspectives of medical providers and interpreters who care for LEP children and families in the inpatient setting. Despite existing difficulties and concerns regarding language barriers and their impact on quality of care for hospitalized LEP patients and families, participants were enthusiastic about how identified barriers and drivers may inform future improvement efforts. Notable action steps for future improvement discussed by our participants included: increased use and functionality of technology for timely and predictable access to interpreters, deliberate training for providers focused on delivery of culturally effective care, consistent use of family-centered communication strategies including teach-back, and implementing interdisciplinary expectation setting through “presessions” before encounters with LEP families.

Participants elaborated on several barriers previously described in the literature including time constraints and technical problems.14,21,22 Such barriers may serve as deterrents to consistent and appropriate use of interpreters in healthcare settings.9 A heavy reliance on off-site interpreters (including phone- or video-interpreters) and lack of knowledge regarding resource availability likely amplified frustration for medical providers. Communication with LEP families can be daunting, especially when medical providers do not care for LEP families or work with interpreters on a regular basis.14 Standardizing the education of medical providers regarding available resources, as well as the logistics, process, and parameters for scheduling interpreters and using technology, was an action step identified by our GLA participants. Targeted education about the logistics of accessing interpreter services and standardized ways to make technology use easier (eg, one-touch dialing in hospital rooms) have been associated with increased interpreter use and decreased interpreter-related delays in care.23

Our frontline medical providers expressed added concern about not spending as much time with LEP families. In fact, LEP families in the literature have perceived medical providers to spend less time with their children compared to their English-proficient counterparts.24 Language and cultural barriers, both perceived and real, may limit medical provider rapport with LEP patients and families14 and likely contribute to medical providers relying on their preconceived assumptions instead.25 Cultural competency education for medical providers, as highlighted by our GLA participants as an action item, can be used to provide more comprehensive and effective care.26,27

In addition to enhancing cultural humility through education, our participants emphasized the use of family-centered communication strategies as a driver of optimal family engagement and understanding. Actively inviting questions from families and utilizing teach-back, an established evidence-based strategy28-30 discussed by our participants, can be particularly powerful in assessing family understanding and engagement. While information should be presented in plain language for families in all encounters,31 these evidence-based practices are of particular importance when communicating with LEP families. They promote effective communication, empower families to share concerns in a structured manner, and allow medical providers to address matters in real-time with interpreters present.

Finally, our participants highlighted the need for partnerships between providers and interpreter services, noting unclear roles and expectations among interpreters and medical providers as a major barrier. Specifically, physicians noted confusion regarding the scope of an interpreter’s practice. Participants from GLA sessions discussed the importance of a team-based approach and suggested implementing a “presession” prior to encounters with LEP patients and families. Presessions—a concept well accepted among interpreters and recommended by consensus-based practice guidelines—enable medical providers and interpreters to establish shared expectations about scope of practice, communication, interpretation style, time constraints, and medical context prior to patient encounters.32,33

There are several limitations to our study. First, individuals who chose to participate were likely highly motivated by their clinical experiences with LEP patients and invested in improving communication with LEP families. Second, the study is limited in generalizability, as it was conducted at a single academic institution in a Midwestern city. Despite regional variations in available resources as well as patient and workforce demographics, our findings regarding major themes are in agreement with previously published literature and further add to our understanding of ways to improve communication with this vulnerable population across the care spectrum. Lastly, we were logistically limited in our ability to elicit the perspectives of LEP families due to the participatory nature of GLA; the need for multiple interpreters to simultaneously interact with LEP individuals would have not only hindered active LEP family participation but may have also biased the data generated by patients and families, as the services interpreters provide during the inpatient stay were the focus of our study. Engaging LEP families in their preferred language using participatory methods should be considered for future studies.

In conclusion, frontline providers of medical and language services identified barriers and drivers impacting the effective use of interpreter services when communicating with LEP families during hospitalization. Our enhanced understanding of barriers and drivers, as well as identified actionable interventions, will inform future improvement of communication and interactions with LEP families, contributing to effective and efficient family-centered care. A framework for the development and implementation of organizational strategies aimed at improving communication with LEP families must include a thorough assessment of impact, feasibility, stakeholder involvement, and sustainability of specific interventions. While there is no simple formula to improve language services, health systems should establish and adopt language access policies, standardize communication practices, and develop processes to optimize the use of language services in the hospital. Furthermore, engagement with LEP families to better understand their perceptions and experiences with the healthcare system is crucial to improve communication between medical providers and LEP families in the inpatient setting and should be the subject of future studies.

Disclosures

The authors have no conflicts of interest to disclose.

Funding

No external funding was secured for this study. Dr. Joanna Thomson is supported by the Agency for Healthcare Research and Quality (Grant #K08 HS025138). Dr. Raglin Bignall was supported through a Ruth L. Kirschstein National Research Service Award (T32HP10027) when the study was conducted. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding organizations. The funding organizations had no role in the design, preparation, review, or approval of this paper.


References

1. The American Academy of Pediatrics Council on Community Pediatrics. Providing care for immigrant, migrant, and border children. Pediatrics. 2013;131(6):e2028-e2034.
2. Meneses C, Chilton L, Duffee J, et al. Council on Community Pediatrics Immigrant Health Tool Kit. The American Academy of Pediatrics. https://www.aap.org/en-us/Documents/cocp_toolkit_full.pdf. Accessed May 13, 2019.
3. Office for Civil Rights. Guidance to Federal Financial Assistance Recipients Regarding Title VI and the Prohibition Against National Origin Discrimination Affecting Limited English Proficient Persons. https://www.hhs.gov/civil-rights/for-individuals/special-topics/limited-english-proficiency/guidance-federal-financial-assistance-recipients-title-vi/index.html. Accessed May 13, 2019.
4. Lion KC, Rafton SA, Shafii J, et al. Association between language, serious adverse events, and length of stay among hospitalized children. Hosp Pediatr. 2013;3(3):219-225. https://doi.org/10.1542/hpeds.2012-0091.
5. Lion KC, Wright DR, Desai AD, Mangione-Smith R. Costs of care for hospitalized children associated with preferred language and insurance type. Hosp Pediatr. 2017;7(2):70-78. https://doi.org/10.1542/hpeds.2016-0051.
6. Cohen AL, Rivara F, Marcuse EK, McPhillips H, Davis R. Are language barriers associated with serious medical events in hospitalized pediatric patients? Pediatrics. 2005;116(3):575-579. https://doi.org/10.1542/peds.2005-0521.
7. Samuels-Kalow ME, Stack AM, Amico K, Porter SC. Parental language and return visits to the emergency department after discharge. Pediatr Emerg Care. 2017;33(6):402-404. https://doi.org/10.1097/PEC.0000000000000592.
8. Unaka NI, Statile AM, Choe A, Shonna Yin H. Addressing health literacy in the inpatient setting. Curr Treat Options Pediatr. 2018;4(2):283-299. https://doi.org/10.1007/s40746-018-0122-3.
9. DeCamp LR, Kuo DZ, Flores G, O’Connor K, Minkovitz CS. Changes in language services use by US pediatricians. Pediatrics. 2013;132(2):e396-e406. https://doi.org/10.1542/peds.2012-2909.
10. Flores G. The impact of medical interpreter services on the quality of health care: A systematic review. Med Care Res Rev. 2005;62(3):255-299. https://doi.org/10.1177/1077558705275416.
11. Flores G, Abreu M, Barone CP, Bachur R, Lin H. Errors of medical interpretation and their potential clinical consequences: A comparison of professional versus ad hoc versus no interpreters. Ann Emerg Med. 2012;60(5):545-553. https://doi.org/10.1016/j.annemergmed.2012.01.025.
12. Anand KJ, Sepanski RJ, Giles K, Shah SH, Juarez PD. Pediatric intensive care unit mortality among Latino children before and after a multilevel health care delivery intervention. JAMA Pediatr. 2015;169(4):383-390. https://doi.org/10.1001/jamapediatrics.2014.3789.
13. The Joint Commission. Advancing Effective Communication, Cultural Competence, and Patient- and Family-Centered Care: A Roadmap for Hospitals. Oakbrook Terrace, IL: The Joint Commission; 2010.
14. Hernandez RG, Cowden JD, Moon M, et al. Predictors of resident satisfaction in caring for limited English proficient families: a multisite study. Acad Pediatr. 2014;14(2):173-180. https://doi.org/10.1016/j.acap.2013.12.002.
15. Vaughn LM, Lohmueller M. Calling all stakeholders: group-level assessment (GLA)-a qualitative and participatory method for large groups. Eval Rev. 2014;38(4):336-355. https://doi.org/10.1177/0193841X14544903.
16. Vaughn LM, Jacquez F, Zhao J, Lang M. Partnering with students to explore the health needs of an ethnically diverse, low-resource school: an innovative large group assessment approach. Fam Commun Health. 2011;34(1):72-84. https://doi.org/10.1097/FCH.0b013e3181fded12.
17. Gosdin CH, Vaughn L. Perceptions of physician bedside handoff with nurse and family involvement. Hosp Pediatr. 2012;2(1):34-38. https://doi.org/10.1542/hpeds.2011-0008-2.
18. Graham KE, Schellinger AR, Vaughn LM. Developing strategies for positive change: transitioning foster youth to adulthood. Child Youth Serv Rev. 2015;54:71-79. https://doi.org/10.1016/j.childyouth.2015.04.014.
19. Vaughn LM. Group Level Assessment: A Large Group Method for Identifying Primary Issues and Needs Within a Community. London; 2014. http://methods.sagepub.com/case/group-level-assessment-large-group-primary-issues-needs-community. Accessed July 26, 2017.
20. Association of American Medical Colleges Electronic Residency Application Service. ERAS 2018 MyERAS Application Worksheet: Language Fluency. Washington, DC: Association of American Medical Colleges; 2018:5.
21. Brisset C, Leanza Y, Laforest K. Working with interpreters in health care: A systematic review and meta-ethnography of qualitative studies. Patient Educ Couns. 2013;91(2):131-140. https://doi.org/10.1016/j.pec.2012.11.008.
22. Wiking E, Saleh-Stattin N, Johansson SE, Sundquist J. A description of some aspects of the triangular meeting between immigrant patients, their interpreters and GPs in primary health care in Stockholm, Sweden. Fam Pract. 2009;26(5):377-383. https://doi.org/10.1093/fampra/cmp052.
23. Lion KC, Ebel BE, Rafton S, et al. Evaluation of a quality improvement intervention to increase use of telephonic interpretation. Pediatrics. 2015;135(3):e709-e716. https://doi.org/10.1542/peds.2014-2024.
24. Zurca AD, Fisher KR, Flor RJ, et al. Communication with limited English-proficient families in the PICU. Hosp Pediatr. 2017;7(1):9-15. https://doi.org/10.1542/hpeds.2016-0071.
25. Kodjo C. Cultural competence in clinician communication. Pediatr Rev. 2009;30(2):57-64. https://doi.org/10.1542/pir.30-2-57.
26. Britton CV, American Academy of Pediatrics Committee on Pediatric Workforce. Ensuring culturally effective pediatric care: implications for education and health policy. Pediatrics. 2004;114(6):1677-1685. https://doi.org/10.1542/peds.2004-2091.
27. The American Academy of Pediatrics. Culturally Effective Care Toolkit: Providing Culturally Effective Pediatric Care; 2018. https://www.aap.org/en-us/professional-resources/practice-transformation/managing-patients/Pages/effective-care.aspx. Accessed May 13, 2019.
28. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812. https://doi.org/10.1056/NEJMsa1405556.
29. Jager AJ, Wynia MK. Who gets a teach-back? Patient-reported incidence of experiencing a teach-back. J Health Commun. 2012;17 Supplement 3:294-302. https://doi.org/10.1080/10810730.2012.712624.
30. Kornburger C, Gibson C, Sadowski S, Maletta K, Klingbeil C. Using “teach-back” to promote a safe transition from hospital to home: an evidence-based approach to improving the discharge process. J Pediatr Nurs. 2013;28(3):282-291. https://doi.org/10.1016/j.pedn.2012.10.007.
31. Abrams MA, Klass P, Dreyer BP. Health literacy and children: recommendations for action. Pediatrics. 2009;124 Supplement 3:S327-S331. https://doi.org/10.1542/peds.2009-1162I.
32. Betancourt JR, Renfrew MR, Green AR, Lopez L, Wasserman M. Improving Patient Safety Systems for Patients with Limited English Proficiency: a Guide for Hospitals. Agency for Healthcare Research and Quality; 2012.
33. The National Council on Interpreting in Health Care. Best Practices for Communicating Through an Interpreter. https://refugeehealthta.org/access-to-care/language-access/best-practices-communicating-through-an-interpreter/. Accessed May 19, 2019.


Issue
Journal of Hospital Medicine 14(10)
Page Number
607-613. Published online first July 24, 2019
Article Source

© 2019 Society of Hospital Medicine

Correspondence Location
Corresponding Author: Angela Y. Choe, MD; E-mail: [email protected]; Telephone: 513-636-3893; Twitter: @AChoeMD

The Role of Adolescent Acne Treatment in Formation of Scars Among Patients With Persistent Adult Acne: Evidence From an Observational Study


In the last 20 years, the incidence of acne lesions in adults has markedly increased.1 Acne affects adults (individuals older than 25 years) and is no longer a condition limited to adolescents and young adults (individuals younger than 25 years). According to Dreno et al,2 the accepted age threshold for the onset of adult acne is 25 years.1-3 In 2013, the term adult acne was defined.2 Among patients with adult acne, there are 2 subtypes: (1) persistent adult acne, which is a continuation or recurrence of adolescent acne, affecting approximately 80% of patients, and (2) late-onset acne, affecting approximately 20% of patients.4

Clinical symptoms of adult acne and available treatment modalities have been explored in the literature. Daily clinical experience shows that additional difficulties in the management of adult acne relate mainly to a high therapeutic failure rate in acne patients older than 25 years.5 Persistent adult acne is particularly noteworthy because it causes long-term symptoms, and patients experience uncontrollable recurrences.

It is believed that adult acne often is resistant to treatment.2 Adult skin is more sensitive to topical agents, leading to more irritation by medications intended for external use and by cosmetics.6 Scars in these patients are a frequent and undesirable consequence.3

Effective treatment of acne encompasses oral antibiotics, topical and systemic retinoids, and oral contraceptive pills (OCPs). For years, oral subantimicrobial doses of cyclines have been recommended for acne treatment. Topical and oral retinoids have been used successfully for more than 30 years as important therapeutic options.7 More recent evidence-based guidelines for acne issued by the American Academy of Dermatology8 and the European Dermatology Forum9 also show that retinoids play an important role in acne therapy. Their anti-inflammatory activity acts against comedones and their precursors (microcomedones). Successful antiacne therapy not only achieves a smooth face without comedones but also minimizes scar formation, postinflammatory discoloration, and long-lasting postinflammatory erythema.10 Oral contraceptives have a mainly antiseborrheic effect.11

Our study sought to analyze the potential influence of therapy for adolescent acne on patients who later developed adult acne. Particular attention was given to the use of oral antibiotics, isotretinoin, and topical retinoids for adolescent acne and their potential role in diminishing scar formation in adult acne.

Materials and Methods

Patient Demographics and Selection
A population-based study of Polish patients with adult acne was conducted. Patients were included in the study group on a consecutive basis from among those who visited our outpatient dermatology center from May 2015 to January 2016. A total of 111 patients (101 women [90.99%] and 10 men [9.01%]) were examined. The study group comprised patients aged 25 years and older who were treated for adult acne (20 patients [18.02%] were aged 25–29 years, 61 [54.95%] were aged 30–39 years, and 30 [27.02%] were 40 years or older).

The following inclusion criteria were used: observation period of at least 6 months in our dermatologic center for patients diagnosed with adult acne, at least 2 dermatologic visits for adult acne prior to the study, written informed consent for study participation and data processing (the aim of the study was explained to each participant by a dermatologist), and age 25 years or older. Exclusion criteria included those who were younger than 25 years, those who had only 1 dermatologic visit at our dermatology center, and those who were unwilling to participate or did not provide written informed consent. Our study was conducted according to Good Clinical Practice.


Data Collection
To obtain data with the highest degree of reliability, 3 sources of information were used: (1) a detailed medical interview conducted by one experienced dermatologist (E.C.) at our dermatology center at the first visit for all study participants, (2) a clinical examination that yielded results necessary for the assessment of scars using a method outlined by Jacob et al,12 and (3) information included in available medical records. These data were then statistically analyzed.



Statistical Analysis
The results were presented as frequency plots, and the Fisher exact test was used to compare the distributions of the analyzed data. Unless otherwise indicated, a significance level of 5% was adopted. The statistical analysis was performed using Stata 14 software (StataCorp LLC, College Station, Texas).
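The group comparisons in this study rest on the Fisher exact test for 2×2 contingency tables. As an illustration of the underlying computation (the analysis itself was performed in Stata 14; the function name and the example counts below are a hypothetical, stdlib-only sketch, not the authors' code):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    row and column totals whose probability does not exceed that of the
    observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell, margins fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # Small relative tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical comparison: scars in 8 of 20 patients in one treatment
# group versus 3 of 20 in another (counts are illustrative only)
p_value = fisher_exact_two_sided(8, 12, 3, 17)
```

Because the test enumerates exact hypergeometric probabilities rather than relying on a large-sample approximation, it remains valid for the small cell counts typical of subgroup analyses such as this one.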

Results

Incidence of Different Forms of Adult Acne
To analyze the onset of acne, patients were categorized into 1 of 2 groups: those with persistent adult acne (81.98%) and those with late-onset adult acne (ie, developed after 25 years of age)(18.02%).

Age at Initiation of Dermatologic Treatment
Of the patients with persistent adult acne, 31.87% first visited a dermatologist the same year that the first acne lesions appeared, 36.26% postponed the first visit by at least 5 years (Figure 1), and 23.08% started treatment at least 10 years after acne first appeared. Among patients with persistent adult acne, 76.92% began dermatologic treatment before 25 years of age, and 23.08% began treatment after 25 years of age. Of the latter, 28.57% did not start therapy until they were older than 35 years.

Figure 1. Initiation of dermatologic treatment for patients with persistent adult acne (n=91).

Severity of Adolescent Acne
In the persistent adult acne group, the severity of adolescent acne was assessed during the medical interview as well as detailed histories in medical records. The activity of acne was evaluated at 2-year intervals with the use of a 10-point scale: 1 to 3 points indicated mild acne (7.69% of patients), 4 to 6 points indicated moderate acne (24.18%), and 7 to 10 points indicated severe acne (68.13%).

Treatment of Persistent Acne in Adolescence
Treatment comprised oral therapy with antibiotics, isotretinoin, and/or application of topical retinoids (sometimes supported with OCPs). Monotherapy was the standard of treatment more than 25 years ago, when patients with persistent adult acne were treated as adolescents or young adults. As many as 43.96% of patients with persistent adult acne did not receive any of these therapies before 25 years of age; rather, they used antiacne cosmetics or beauty procedures. Furthermore, 50.55% of patients were treated with oral antibiotics (Figure 2). Topical retinoids were used in 19.78% of patients and isotretinoin in 16.48%. Additionally, OCPs were given to 26.5% of patients. In the course of adolescent acne, 31.87% of patients received 2 to 4 courses of treatment with either antibiotics or retinoids (oral or topical), and 5.49% received 5 or more courses (Figure 3). The analysis of each treatment revealed that only 1 patient received 4 courses of isotretinoin. Five courses of oral antibiotics were given to 1 patient, who also received 3 courses of topical retinoids.

Figure 2. Patients with persistent adult acne treated with oral antibiotics, isotretinoin, and topical retinoids before 25 years of age (n=91).

Figure 3. Total number of oral antibiotics, isotretinoin, and topical retinoid treatments before 25 years of age in patients with persistent adult acne (n=91).

 

 

Topical Retinoids
In an analysis of the number of treatments with topical retinoids completed by patients with persistent adult acne, it was established that 80.22% of patients never used topical retinoids for acne during adolescence. Additionally, 12.08% of these patients completed 1 course of treatment, and 7.69% completed 2 to 4 treatments. However, after 25 years of age, only 25.27% of the patients with persistent adult acne were not treated with topical retinoids, and 35.16% completed more than 2 courses of treatment.



Duration of Treatment
Because adult acne is a chronic disease, the mean number of years that patients received treatment over the disease course was analyzed. In the case of persistent adult acne, the mean duration of treatment, including therapy received during adolescence, was more than 13 years. At the time of the study, more than 30% of patients had been undergoing treatment of adult acne for more than 20 years.

Scars
The proportion of patients with persistent adult acne who experienced scarring was evaluated. In the persistent adult acne group, scars were identified in 53.85% of patients. Scars appeared only during adolescence in 26.37% of patients with persistent adult acne, only after 25 years of age in 21.97% of patients, and in adolescence as well as adulthood in 30.77% of patients.

In an analysis of patients with persistent adult acne who experienced scarring after 25 years of age, the proportion of patients with untreated adolescent acne and those who were treated with antibiotics only was not significantly different (60% vs 64%; P = .478)(Table). The inclusion of topical retinoids into treatment decreased the proportion of scars (isotretinoin: 20%, P = .009; topical retinoids: 38.89%, P = .114).

Comment

Persistent Adult Acne
Patients with symptoms of persistent adult acne represented 81.98% of the study population, which was similar to a 1999 study by Goulden et al,1 a 2001 study by Shaw and White,13 and a 2009 report by Schmidt et al.14 Of these patients with persistent adult acne, 23.08% initiated therapy after 25 years of age, and 23.08% started treatment at least 10 years after acne lesions first appeared. However, it is noteworthy that 68.13% of all patients with persistent adult acne assessed their disease as severe.

Treatment Modalities for Adult Acne
Over the last 5 years, some researchers have attempted to make recommendations for the treatment of adult acne based on standards adopted for the treatment of adolescent acne.2,9,15 First-line treatment of patients with adult comedonal acne is topical retinoids.9 The recommended treatment of mild to moderate adult inflammatory acne involves topical drugs, including retinoids, azelaic acid, or benzoyl peroxide, or oral medications, including antibiotics, OCPs, or antiandrogens. In severe inflammatory acne, the recommended treatment involves oral isotretinoin or combined therapies; the latter seems to be the most effective.16 Furthermore, this therapy has been adjusted to the patient’s current clinical condition; general individual sensitivity of the skin to irritation and the risk for irritant activity of topical medications; and life situation, such as planned pregnancies and intended use of OCPs due to the risk for teratogenic effects of drugs.17

To assess available treatment modalities, oral therapy with antibiotics or isotretinoin as well as topical retinoids were selected for our analysis. It is difficult to determine an exclusive impact of OCPs as acne treatment; according to our study, many female patients use hormone therapy for other medical conditions or contraception, and only a small proportion of these patients are prescribed hormone treatment for acne. We found that 43.96% of patients with persistent adult acne underwent no treatment with antibiotics, isotretinoin, or topical retinoids in adolescence. Patients who did not receive any of these treatments came only for single visits to a dermatologist, did not comply with a recommended therapy, or used only cosmetics or beauty procedures. We found that 80.22% of patients with persistent adult acne never used topical retinoids during adolescence and did not receive maintenance therapy, which may be attributed to the fact that there were no strict recommendations regarding retinoid treatment when these patients were adolescents or young adults. Published data indicate that retinoid use for acne treatment is not common.18 Conversely, among patients older than 25 years with late-onset adult acne, only 1 patient (ie, <1%) had never received any oral antibiotic or isotretinoin treatment or therapy with topical retinoids. The reason for the lack of medical treatment is unknown. Only 25.27% of patients were not treated with topical retinoids, and 35.16% completed at least 2 courses of treatment. The use of topical retinoids for the treatment of persistent and late-onset adult acne may be the result of the spread of knowledge among dermatologists acquired over the last 25 years.



Acne Scarring
The worst complication of acne is scarring. Scars develop for the duration of the disease, during both adolescent and adult acne. In the group with persistent adult acne, scarring was found in 53.85% of patients. Scar formation has been previously reported as a common complication of acne.19 The effects of skin lesions that remain after acne are not limited to impaired cosmetic appearance; they also negatively affect mental health and impair quality of life.20 The aim of our study was to analyze types of treatment for adolescent acne in patients who later had persistent adult acne. Postacne scars observed later are objective evidence of the severity of disease. We found that using oral antibiotics did not diminish the number of scars among persistent adult acne patients in adulthood. In contrast, isotretinoin or topical retinoid treatment during adolescence decreased the risk for scars occurring during adulthood. In our opinion, these findings emphasize the role of this type of treatment among adolescents and young adults. The decrease of scar formation in adult acne due to retinoid treatment in adolescence indirectly justifies the role of maintenance therapy with topical retinoids.21,22

Author and Disclosure Information

Dr. E. Chlebus is from Nova Derm Dermatology Centre, Warsaw, Poland. Dr. M. Chlebus is from the Department of Quantitative Finance, Faculty of Economic Sciences, University of Warsaw.

The authors report no conflict of interest.

Correspondence: Ewa Chlebus, MD, PhD, Twarda 60 str, 00-818 Warsaw, Poland ([email protected]).

Issue
Cutis - 104(1)
Page Number
57-61
In the last 20 years, the incidence of acne lesions in adults has markedly increased.1 Acne is no longer a condition limited to adolescents and young adults (individuals younger than 25 years); it also affects adults (individuals older than 25 years). According to Dreno et al,2 the accepted age threshold for the onset of adult acne is 25 years.1-3 The term adult acne was defined in 2013.2 Among patients with adult acne, there are 2 subtypes: (1) persistent adult acne, a continuation or recurrence of adolescent acne, affecting approximately 80% of patients, and (2) late-onset acne, affecting approximately 20% of patients.4

Clinical symptoms of adult acne and available treatment modalities have been explored in the literature. Daily clinical experience shows that additional difficulties in the management of adult acne relate mainly to a high therapeutic failure rate in patients older than 25 years.5 Persistent adult acne is particularly noteworthy because it causes long-term symptoms, and patients experience uncontrollable recurrences.

It is believed that adult acne often is resistant to treatment.2 Adult skin is more sensitive to topical agents, leading to more irritation by medications intended for external use and cosmetics.6 Scars in these patients are a frequent and undesirable consequence.3

Effective treatment of acne encompasses oral antibiotics, topical and systemic retinoids, and oral contraceptive pills (OCPs). For years, oral subantimicrobial doses of cyclines have been recommended for acne treatment. Topical and oral retinoids have been successfully used for more than 30 years as important therapeutic options.7 More recent evidence-based guidelines for acne issued by the American Academy of Dermatology8 and the European Dermatology Forum9 also show that retinoids play an important role in acne therapy. Their anti-inflammatory activity acts against comedones and their precursors (microcomedones). Successful antiacne therapy not only achieves a smooth face without comedones but also minimizes scar formation, postinflammatory discoloration, and long-lasting postinflammatory erythema.10 Oral contraceptives have a mainly antiseborrheic effect.11

Our study sought to analyze the potential influence of therapy during adolescent acne on patients who later developed adult acne. Particular attention was given to the use of oral antibiotics, isotretinoin, and topical retinoids for adolescent acne and their potential role in diminishing scar formation in adult acne.

Materials and Methods

Patient Demographics and Selection
A population-based study of Polish patients with adult acne was conducted. Patients were included in the study group on a consecutive basis from among those who visited our outpatient dermatology center from May 2015 to January 2016. A total of 111 patients (101 women [90.99%] and 10 men [9.01%]) were examined. The study group comprised patients aged 25 years and older who were treated for adult acne (20 patients [18.02%] were aged 25–29 years, 61 [54.95%] were aged 30–39 years, and 30 [27.03%] were 40 years or older).
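As a sanity check on the reported breakdown, the age-group percentages can be recomputed from the counts given in the text (a minimal Python sketch; note that 30/111 rounds to 27.03%):

```python
# Age-group counts for the study population, as reported in the text (n=111).
counts = {"25-29 y": 20, "30-39 y": 61, ">=40 y": 30}
n = sum(counts.values())
assert n == 111  # matches the reported total

# Percentages rounded to 2 decimal places, as used throughout the article.
pct = {group: round(100 * c / n, 2) for group, c in counts.items()}
print(pct)  # {'25-29 y': 18.02, '30-39 y': 54.95, '>=40 y': 27.03}
```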

The following inclusion criteria were used: an observation period of at least 6 months in our dermatologic center for patients diagnosed with adult acne, at least 2 dermatologic visits for adult acne prior to the study, written informed consent for study participation and data processing (the aim of the study was explained to each participant by a dermatologist), and age 25 years or older. Exclusion criteria were age younger than 25 years, only 1 dermatologic visit at our dermatology center, and unwillingness to participate or failure to provide written informed consent. Our study was conducted according to Good Clinical Practice.

 

 


Data Collection
To obtain data with the highest degree of reliability, 3 sources of information were used: (1) a detailed medical interview conducted by one experienced dermatologist (E.C.) at our dermatology center at the first visit for all study participants, (2) a clinical examination that yielded results necessary for the assessment of scars using the method outlined by Jacob et al,12 and (3) information included in available medical records. These data were then statistically analyzed.



Statistical Analysis
The results were presented as frequency plots, and the Fisher exact test was used to compare the distributions of the analyzed data. Unless otherwise indicated, a 5% significance level was adopted. The statistical analysis was performed using Stata 14 software (StataCorp LLC, College Station, Texas).
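The analysis was run in Stata, but the two-sided Fisher exact test on a 2×2 table is straightforward to reproduce. Below is a minimal, self-contained Python sketch; the final pair of group counts is an illustrative placeholder only (the article does not report full per-group scar counts), not the study's data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):  # probability of the table with x in the top-left cell
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # Small tolerance guards against floating-point ties.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Classic "lady tasting tea" table as a correctness check: p = 34/70.
print(round(fisher_exact_two_sided(3, 1, 1, 3), 4))  # 0.4857

# Illustrative (hypothetical) scarred/not-scarred counts for two treatment groups:
p = fisher_exact_two_sided(24, 16, 3, 12)
print(0 < p < 1)  # True
```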

Results

Incidence of Different Forms of Adult Acne
To analyze the onset of acne, patients were categorized into 1 of 2 groups: those with persistent adult acne (81.98%) and those with late-onset adult acne (ie, developed after 25 years of age)(18.02%).

Age at Initiation of Dermatologic Treatment
Of the patients with persistent adult acne, 31.87% first visited a dermatologist the same year that the first acne lesions appeared, 36.26% postponed the first visit by at least 5 years (Figure 1), and 23.08% started treatment at least 10 years after acne first appeared. Among patients with persistent adult acne, 76.92% began dermatologic treatment before 25 years of age, and 23.08% began treatment after 25 years of age. Of the latter, 28.57% did not start therapy until they were older than 35 years.

Figure 1. Initiation of dermatologic treatment for patients with persistent adult acne (n=91).

Severity of Adolescent Acne
In the persistent adult acne group, the severity of adolescent acne was assessed during the medical interview as well as from detailed histories in medical records. Acne activity was evaluated at 2-year intervals on a 10-point scale: 1 to 3 points indicated mild acne (7.69% of patients), 4 to 6 points indicated moderate acne (24.18%), and 7 to 10 points indicated severe acne (68.13%).
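The 10-point activity scale reduces to a simple score-to-category mapping, sketched below in Python. The per-category counts of 7, 22, and 62 are inferred from the reported percentages and n=91; they are not stated explicitly in the text:

```python
def severity_category(score: int) -> str:
    """Map a 1-10 acne activity score to the article's three severity bands."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score <= 3:
        return "mild"
    if score <= 6:
        return "moderate"
    return "severe"

# Counts inferred from the reported percentages (n = 91):
# 7/91 = 7.69%, 22/91 = 24.18%, 62/91 = 68.13%.
n = 91
counts = {"mild": 7, "moderate": 22, "severe": 62}
assert sum(counts.values()) == n
print({k: round(100 * v / n, 2) for k, v in counts.items()})
# {'mild': 7.69, 'moderate': 24.18, 'severe': 68.13}
```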

Treatment of Persistent Acne in Adolescence
Treatment comprised oral antibiotics, oral isotretinoin, and/or topical retinoids (sometimes supported with OCPs). Monotherapy was the standard of treatment more than 25 years ago when patients with persistent adult acne were treated as adolescents or young adults. As many as 43.96% of patients with persistent adult acne did not receive any of these therapies before 25 years of age; rather, they used antiacne cosmetics or beauty procedures. Furthermore, 50.55% of patients were treated with oral antibiotics (Figure 2). Topical retinoids were used in 19.78% of patients, and isotretinoin was used in 16.48%. OCPs were given to 26.5% of patients. In the course of adolescent acne, 31.87% of patients received 2 to 4 courses of treatment with either antibiotics or retinoids (oral or topical), and 5.49% were treated with 5 or more courses of treatment (Figure 3). Analysis of the individual treatments revealed that only 1 patient received 4 courses of isotretinoin; another patient received 5 courses of oral antibiotics as well as 3 courses of topical retinoids.

Figure 2. Patients with persistent adult acne treated with oral antibiotics, isotretinoin, and topical retinoids before 25 years of age (n=91).

Figure 3. Total number of oral antibiotics, isotretinoin, and topical retinoid treatments before 25 years of age in patients with persistent adult acne (n=91).

 

 

Topical Retinoids
An analysis of the number of topical retinoid treatment courses completed by patients with persistent adult acne established that 80.22% of patients never used topical retinoids for acne during adolescence. Additionally, 12.08% of these patients completed 1 course of treatment, and 7.69% completed 2 to 4 courses. After 25 years of age, however, only 25.27% of patients with persistent adult acne were not treated with topical retinoids, and 35.16% completed more than 2 courses of treatment.



Duration of Treatment
Because adult acne is a chronic disease, the mean number of years that patients received treatment over the disease course was analyzed. For persistent adult acne, the mean duration of treatment, including therapy received during adolescence, was more than 13 years. At the time of the study, more than 30% of patients had been undergoing treatment of adult acne for more than 20 years.

Scars
The proportion of patients with persistent adult acne who experienced scarring was evaluated. In the persistent adult acne group, scars were identified in 53.85% of patients. Scars appeared only during adolescence in 26.37% of patients, only after 25 years of age in 21.97%, and in both adolescence and adulthood in 30.77%.

In an analysis of patients with persistent adult acne who experienced scarring after 25 years of age, the proportion of patients with untreated adolescent acne and those who were treated with antibiotics only was not significantly different (60% vs 64%; P=.478)(Table). The inclusion of retinoids (oral or topical) in treatment decreased the proportion of scars (isotretinoin: 20%, P=.009; topical retinoids: 38.89%, P=.114).

Comment

Persistent Adult Acne
Patients with symptoms of persistent adult acne represented 81.98% of the study population, similar to a 1999 study by Goulden et al,1 a 2001 study by Shaw and White,13 and a 2009 report by Schmidt et al.14 Of these patients with persistent adult acne, 23.08% initiated therapy after 25 years of age, and 23.08% started treatment at least 10 years after acne lesions first appeared. Notably, 68.13% of all patients with persistent adult acne assessed their disease as severe.

Treatment Modalities for Adult Acne
Over the last 5 years, some researchers have attempted to make recommendations for the treatment of adult acne based on standards adopted for the treatment of adolescent acne.2,9,15 First-line treatment for adult comedonal acne is topical retinoids.9 The recommended treatment of mild to moderate adult inflammatory acne involves topical drugs, including retinoids, azelaic acid, or benzoyl peroxide, or oral medications, including antibiotics, OCPs, or antiandrogens. In severe inflammatory acne, the recommended treatment involves oral isotretinoin or combined therapies; the latter seems to be the most effective.16 Furthermore, therapy should be adjusted to the patient's current clinical condition; the individual sensitivity of the skin to irritation and the risk for irritant activity of topical medications; and life situation, such as planned pregnancies and intended use of OCPs, given the risk for teratogenic effects of drugs.17

To assess available treatment modalities, oral antibiotics, oral isotretinoin, and topical retinoids were selected for our analysis. It is difficult to determine the exclusive impact of OCPs as acne treatment; in our study, many female patients used hormone therapy for other medical conditions or for contraception, and only a small proportion of these patients were prescribed hormone treatment for acne. We found that 43.96% of patients with persistent adult acne underwent no treatment with antibiotics, isotretinoin, or topical retinoids in adolescence. Patients who did not receive any of these treatments came only for single visits to a dermatologist, did not comply with a recommended therapy, or used only cosmetics or beauty procedures. We found that 80.22% of patients with persistent adult acne never used topical retinoids during adolescence and did not receive maintenance therapy, which may be attributed to the fact that there were no strict recommendations regarding retinoid treatment when these patients were adolescents or young adults. Published data indicate that retinoid use for acne treatment is not common.18 Conversely, among patients older than 25 years with late-onset adult acne, only 1 patient (ie, <1%) had never received oral antibiotic or isotretinoin treatment or therapy with topical retinoids. The reason for this lack of medical treatment is unknown. Only 25.27% of patients were not treated with topical retinoids, and 35.16% completed at least 2 courses of treatment. The use of topical retinoids for the treatment of persistent and late-onset adult acne may be the result of the spread of knowledge among dermatologists over the last 25 years.



Acne Scarring
The worst complication of acne is scarring. Scars develop throughout the duration of the disease, during both adolescent and adult acne. In the group with persistent adult acne, scarring was found in 53.85% of patients. Scar formation has previously been reported as a common complication of acne.19 The effects of skin lesions that remain after acne are not limited to impaired cosmetic appearance; they also negatively affect mental health and impair quality of life.20 The aim of our study was to analyze types of treatment for adolescent acne in patients who later had persistent adult acne. Postacne scars observed later are objective evidence of the severity of disease. We found that oral antibiotics did not diminish the number of scars among patients with persistent adult acne in adulthood. In contrast, isotretinoin or topical retinoid treatment during adolescence decreased the risk for scars occurring during adulthood. In our opinion, these findings emphasize the role of this type of treatment among adolescents and young adults. The decrease in scar formation in adult acne due to retinoid treatment in adolescence indirectly justifies the role of maintenance therapy with topical retinoids.21,22


References
  1. Goulden V, Stables GI, Cunliffe WJ. Prevalence of facial acne in adults. J Am Acad Dermatol. 1999;41:577-580.
  2. Dreno B, Layton A, Zouboulis CC, et al. Adult female acne: a new paradigm. J Eur Acad Dermatol Venereol. 2013;27:1063-1070.
  3. Preneau S, Dreno B. Female acne--a different subtype of teenager acne? J Eur Acad Dermatol Venereol. 2012;26:277-282.
  4. Goulden V, Clark SM, Cunliffe WJ. Post-adolescent acne: a review of clinical features. Br J Dermatol. 1997;136:66-70.
  5. Kamangar F, Shinkai K. Acne in the adult female patient: a practical approach. Int J Dermatol. 2012;51:1162-1174.
  6. Choi CW, Lee DH, Kim HS, et al. The clinical features of late onset acne compared with early onset acne in women. J Eur Acad Dermatol Venereol. 2011;25:454-461.
  7. Kligman AM, Fulton JE Jr, Plewig G. Topical vitamin A acid in acne vulgaris. Arch Dermatol. 1969;99:469-476.
  8. Zaenglein AL, Pathy AL, Schlosser BJ, et al. Guidelines of care for the management of acne vulgaris. J Am Acad Dermatol. 2016;74:945.e33-973.e33.
  9. Nast A, Dreno B, Bettoli V, et al. European evidence-based guidelines for the treatment of acne. J Eur Acad Dermatol Venereol. 2012;26(suppl 1):1-29.
  10. Levin J. The relationship of proper skin cleansing to pathophysiology, clinical benefits, and the concomitant use of prescription topical therapies in patients with acne vulgaris. Dermatol Clin. 2016;34:133-145.
  11. Savage LJ, Layton AM. Treating acne vulgaris: systemic, local and combination therapy. Expert Rev Clin Pharmacol. 2010;3:563-580.
  12. Jacob CL, Dover JS, Kaminer MS. Acne scarring: a classification system and review of treatment options. J Am Acad Dermatol. 2001;45:109-117.
  13. Shaw JC, White LE. Persistent acne in adult women. Arch Dermatol. 2001;137:1252-1253.
  14. Schmidt JV, Masuda PY, Miot HA. Acne in women: clinical patterns in different age groups. An Bras Dermatol. 2009;84:349-354.
  15. Thiboutot D, Gollnick H, Bettoli V, et al. New insights into the management of acne: an update from the Global Alliance to Improve Outcomes in Acne group. J Am Acad Dermatol. 2009;60(5 suppl):1-50.
  16. Williams C, Layton AM. Persistent acne in women: implications for the patient and for therapy. Am J Clin Dermatol. 2006;7:281-290.
  17. Holzmann R, Shakery K. Postadolescent acne in females. Skin Pharmacol Physiol. 2014;27(suppl 1):3-8.
  18. Pena S, Hill D, Feldman SR. Use of topical retinoids by dermatologists and non-dermatologists in the management of acne vulgaris. J Am Acad Dermatol. 2016;74:1252-1254.
  19. Layton AM, Henderson CA, Cunliffe WJ. A clinical evaluation of acne scarring and its incidence. Clin Exp Dermatol. 1994;19:303-308.
  20. Halvorsen JA, Stern RS, Dalgard F, et al. Suicidal ideation, mental health problems, and social impairment are increased in adolescents with acne: a population-based study. J Invest Dermatol. 2011;131:363-370.
  21. Thielitz A, Sidou F, Gollnick H. Control of microcomedone formation throughout a maintenance treatment with adapalene gel, 0.1%. J Eur Acad Dermatol Venereol. 2007;21:747-753.
  22. Leyden J, Thiboutot DM, Shalita AR, et al. Comparison of tazarotene and minocycline maintenance therapies in acne vulgaris: a multicenter, double-blind, randomized, parallel-group study. Arch Dermatol. 2006;142:605-612.
  19. Layton AM, Henderson CA, Cunliffe WJ. A clinical evaluation of acne scarring and its incidence. Clin Exp Dermatol. 1994;19:303-308. 
  20. Halvorsen JA, Stern RS, Dalgard F, et al. Suicidal ideation, mental health problems, and social impairment are increased in adolescents with acne: a population-based study. J Invest Dermatol. 2011;131:363-370. 
  21. Thielitz A, Sidou F, Gollnick H. Control of microcomedone formation throughout a maintenance treatment with adapalene gel, 0.1%. J Eur Acad Dermatol Venereol. 2007;21:747-753. 
  22. Leyden J, Thiboutot DM, Shalita R, et al. Comparison of tazarotene and minocycline maintenance therapies in acne vulgaris: a multicenter, double-blind, randomized, parallel-group study. Arch Dermatol. 2006;142:605-612.
Issue
Cutis - 104(1)
Page Number
57-61
Display Headline
The Role of Adolescent Acne Treatment in Formation of Scars Among Patients With Persistent Adult Acne: Evidence From an Observational Study
Inside the Article

Practice Points

  • Postacne scarring is the most severe complication of acne.
  • Isotretinoin or topical retinoid treatment in adolescence decreases the risk for scars during adult acne, justifying the role of maintenance therapy with topical retinoids.

Usage of and Attitudes Toward Health Information Exchange Before and After System Implementation in a VA Medical Center

Article Type
Changed
Mon, 07/15/2019 - 14:43
A quality improvement project demonstrated a meaningful improvement in VA staff satisfaction regarding access to community-based health records after implementation of an externally developed health information exchange system.

More than 9 million veterans are enrolled in the Veterans Health Administration (VHA). A high percentage of veterans who use VHA services have multiple chronic conditions and complex medical needs.1 In addition to receiving health care from the VHA, many of these patients receive additional services from non-VHA providers in the community. Furthermore, recently enacted laws, such as the 2018 VA MISSION Act and the 2014 legislation establishing the VA Choice Program, have increased veterans’ use of community health care services.

VHA staff face considerable barriers when seeking documentation about non-VHA services delivered in the community, which can be fragmented across multiple health care systems. In many VHA medical centers, staff must telephone non-VHA sites of care and/or use time-consuming fax services to request community-based patient records. VA health care providers (HCPs) often complain that community records are not available in time to make clinical decisions, or that they must make those decisions without knowing past or co-occurring assessments or treatment plans. Without access to comprehensive health records, patients are at risk for duplicated treatment, medication errors, and death.2,3

Background

To improve the continuity and safety of health care, US governmental and health information experts stimulated formal communication among HCPs via the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act.4,5 One of the primary aims of the HITECH Act was to promote reliable and interoperable electronic sharing of clinical information through health information exchange (HIE) for both patients and HCPs. Monetary incentives encouraged regional, state, or state-funded organizations to create and promote HIE capabilities.

Presently, empirical data are not available that describe the effect of external HIE systems in VHA settings. However, data examining non-VHA settings suggest that HIE may improve quality of care, although findings are mixed. For example, some research has found that HIE reduces hospital admissions, duplicated test ordering, and health care costs and improves decision making, whereas other research has found no change.3,6-13 Barriers to HIE use noted in community settings include poorly designed interfaces, inefficient workflow, and incomplete record availability.3,6-10,14

A few US Department of Veterans Affairs (VA) medical centers have recently initiated contracts with HIE organizations. Because much of the present research evaluates internally developed HIE systems, scholars in the field have identified a pressing need for useful statistics before and after implementation of externally developed HIE systems.13,15 Additionally, scholars call for data examining nonacademic settings (eg, VHA medical centers) and diverse patient populations (eg, individuals with chronic disorders, veterans).13

This quality improvement project had 2 goals. The first goal was to assess baseline descriptive statistics related to requesting/obtaining community health records in a VHA setting. The second goal was to evaluate VHA staff access to needed community health records (eg, records stemming from community consults) before and after implementation of an externally developed HIE system.

Methods

This project was a single-center, quality improvement evaluation examining the effect of implementing an HIE system, developed by an external nonprofit organization. The project protocol was approved by the VA Pacific Islands Healthcare System (VAPIHCS) Evidence-Based Practices Council. Clinicians’ responses were anonymous, and data were reported only in aggregate. Assessment was conducted by an evaluator who was not associated with the HIE system developers and its implementation, reducing the chance of bias.15

Coinciding with the HIE system implementation and prior to having access to it, VAPIHCS medical and managed care staff were invited to complete an online needs assessment tool. Voluntary trainings on the system were offered at various times on multiple days and lasted approximately 1 hour. Six months after the HIE system was implemented, a postassessment tool reevaluated HIE-related access.

VHA Setting and HIE System

VAPIHCS serves about 55,000 unique patients across a 2.6 million square-mile catchment area (Hawaii and Pacific Island territories). Facilities include a medium-sized, urban VA medical center and 7 suburban or rural/remote primary care outpatient clinics.

VAPIHCS contracted with Hawaii Health Information Exchange (HHIE), a nonprofit organization designated by the state of Hawaii to develop a seamless, secure HIE system. According to HHIE, 83% of the 23 hospitals in the state and 55% of Hawaii’s 2,927 active practicing physicians have adopted the HIE system (F. Chan, personal communication, December 12, 2018). HHIE’s data sources provide real-time access to a database of 20 million health records, including patients’ reasons for referral, encounter diagnoses, medications, immunizations, and discharge instructions from many (but not all) HCPs in Hawaii.

HHIE reports that it has the capacity to interface with all electronic health record systems currently in use in the community (F. Chan, personal communication, December 12, 2018). Although the HIE system can provide directed exchange (ie, sending and receiving secure information electronically between HCPs), the HIE system implemented in the VAPIHCS was limited to query-retrieve (ie, practitioner-initiated requests for information from other community HCPs). Specifically, to access patient records, practitioners log in to the HIE portal and enter a patient’s name in a search window. The system then generates a consolidated virtual chart with data collected from all HIE data-sharing participants. To share records, community HCPs either build or enable a profile in an integrated health care enterprise electronic communication interface to their data. However, VHA records were not made available to community HCPs at this initial stage.
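The query-retrieve workflow described above can be sketched in code. The following is a simplified illustration only, not the HHIE portal’s actual interface: the participant names, record fields, and merge logic are all hypothetical stand-ins for the live clinical systems the real exchange queries.

```python
from datetime import date

# Hypothetical data-sharing participants, each holding its own patient records.
PARTICIPANTS = {
    "Community Hospital A": {
        "DOE, JOHN": [
            {"date": date(2018, 11, 2), "type": "encounter diagnosis",
             "detail": "Type 2 diabetes mellitus"},
        ],
    },
    "Clinic B": {
        "DOE, JOHN": [
            {"date": date(2019, 1, 15), "type": "medication",
             "detail": "Metformin 500 mg twice daily"},
        ],
    },
}

def consolidate_chart(patient_name, participants):
    """Query every data-sharing participant for one patient and merge
    the results into a single chronologically ordered virtual chart."""
    chart = []
    for source, records in participants.items():
        for record in records.get(patient_name, []):
            # Tag each entry with its source so the clinician can see provenance.
            chart.append({**record, "source": source})
    # Present the consolidated chart oldest-first.
    return sorted(chart, key=lambda r: r["date"])

for entry in consolidate_chart("DOE, JOHN", PARTICIPANTS):
    print(entry["date"], entry["source"], entry["type"], "-", entry["detail"])
```

The key design point this sketch captures is that query-retrieve pulls from all participants at request time and consolidates the results, rather than pushing records between individual HCPs as directed exchange would.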

Measures and Statistical Analysis

A template of quality improvement-related questions was adapted for this project with input from subject matter experts. Questions were then modified further based on interviews with 5 clinical and managed care staff members. The final needs assessment tool, delivered online, consisted of up to 20 multiple-choice items and 2 open-ended questions. A 22-item evaluation tool was administered 6 months after system implementation. Frequencies were obtained for descriptive items, and group responses were compared across the 2 time points.
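The descriptive analysis used here (frequencies for each item, compared across the pre- and postimplementation time points) can be sketched as follows. The item wording and response data are invented for illustration and do not reproduce the project’s actual tool.

```python
from collections import Counter

def response_frequencies(responses):
    """Tabulate counts and percentages for one multiple-choice survey item."""
    counts = Counter(responses)
    total = len(responses)
    return {option: (n, round(100 * n / total, 1)) for option, n in counts.items()}

# Hypothetical responses to one satisfaction item at each time point.
pre = (["very dissatisfied"] * 14 + ["somewhat dissatisfied"] * 12
       + ["somewhat satisfied"] * 8)
post = (["very dissatisfied"] * 2 + ["somewhat satisfied"] * 10
        + ["very satisfied"] * 8)

for label, data in [("Preimplementation", pre), ("Postimplementation", post)]:
    print(label)
    for option, (n, pct) in sorted(response_frequencies(data).items()):
        print(f"  {option}: {n} ({pct}%)")
```

Because the pre- and postimplementation samples differ in size, comparing percentages rather than raw counts, as the project did, is what makes the two time points commensurable.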

Results

Thirty-nine staff (32 medical and 7 managed care staff) completed the needs assessment, and 20 staff (16 medical and 4 managed care staff) completed the postimplementation evaluation.

Before implementation of the HIE system, most staff (54%) indicated that they spent > 1 hour a week conducting tasks related to seeking and/or obtaining health records from the community. The largest percentage of staff (27%) requested > 10 community records during a typical week. Most respondents indicated that they would use an easy tool to instantly retrieve community health records at least 20 times per week (Table 1).

Preimplementation, 32.4% of respondents indicated that they could access community-based health records sometimes. Postimplementation, most respondents indicated they could access the records most of the time (Figure 1).

Preimplementation, staff most frequently indicated that they were very dissatisfied with their level of access to community records. Postimplementation, more staff were somewhat satisfied or very satisfied (Figure 2). Postimplementation, 48% of staff reported using the HIE system either several times a month or 2 to 4 times a week, 19% used the system daily, 19% used it only 1 to 2 times, and 14% never used it. Most staff (67%) reported that the system somewhat improved their access to records and supported continuing the contract with the HIE system. Conversely, 18% of respondents said that their access did not improve enough for the system to be of use to them.

Preimplementation, staff most frequently indicated that they did not have the time (28.6%) or sufficient staff (25.7%) to request records (Table 2). Postimplementation, staff most frequently (33.3%) indicated that they had no problems accessing the HIE system, although 6.7% reported time constraints or interface/software difficulties.

Discussion

This report assessed a quality improvement project designed to increase VHA access to community health records via an external HIE system. Prior to this work, no data were available on use, barriers, and staff satisfaction related to implementing an externally developed HIE system within a VA medical center.13,15

Before the medical center implemented the HIE system, logistical barriers prevented most HCPs and managed care staff from obtaining needed community records. Staff faced challenges such as lack of time, as well as rudimentary barriers such as community clinics not responding to requests or fax machines not working. Time remained a challenge after implementation, but this work demonstrated that the HIE system helped staff overcome many logistical barriers.

After implementation of the HIE system, staff reported an improvement in access and satisfaction related to retrieving community health records. These findings are consistent with most but not all evaluations of HIE systems.3,6,7,12,13 In the present work, staff used the system several times a month or several times a week, and most staff believed that access to the HIE system should be continued. Still, improvement was incomplete. The HIE system increased access to specific types of records (eg, reports) and health care systems (eg, large hospitals), but not others. As a result, the system was more useful for some staff than for others.

Research examining HIE systems in community and academic settings has identified factors that deter their use, such as poorly designed interfaces, inefficient workflow, and incomplete record availability.3,6,7,14,16 In the present project, incomplete record availability was a noted barrier, and a few staff reported system interface issues. However, most staff found the system easy to use as part of their daily workflow.

Because the HIE system had a meaningful, positive impact on VHA providers and staff, it will be sustained at VAPIHCS. Specifically, the contract with the HHIE has been renewed, and the number of user licenses has increased. Staff users now self-refer for the service or can be referred by their service chiefs.

Limitations

This work was designed to evaluate the effect of an HIE system on staff in 1 VHA setting; thus, findings may not be generalizable to other settings or HIE systems. Limitations of the present work include the small number of respondents, the limited time frame for responses, and the limited response rate. A logical next step would be research comparing access to the HIE system with no access on factors such as workload productivity, cost savings, and patient safety.

Conclusion

The vision of the HITECH Act was to improve the continuity and safety of health care via reliable and interoperable electronic sharing of clinical information across health care entities.6 This VHA quality improvement project demonstrated a meaningful improvement in staff’s level of satisfaction with access to community health records when staff used an externally developed HIE system. Not all types of records (eg, progress notes) were accessible, which resulted in the system being useful for most but not all staff.

In the future, the federal government’s internally developed Veterans Health Information Exchange (formerly known as the Virtual Lifetime Electronic Record [VLER]) is expected to enable VHA, the Department of Defense, and participating community care providers to access shared electronic health records nationally. However, until we can achieve that envisioned interoperability, VHA staff can use HIE and other clinical support applications to access health records.

References

1. Yu W, Ravelo A, Wagner TH, et al. Prevalence and costs of chronic conditions in the VA health care system. Med Care Res Rev. 2003;60(3)(suppl):146S-167S.

2. Bourgeois FC, Olson KL, Mandl KD. Patients treated at multiple acute health care facilities: quantifying information fragmentation. Arch Intern Med. 2010;170(22):1989-1995.

3. Rudin RS, Motala A, Goldzweig CL, Shekelle PG. Usage and effect of health information exchange: a systematic review. Ann Intern Med. 2014;161(11):803-811.

4. Blumenthal D. Implementation of the federal health information technology initiative. N Engl J Med. 2011;365(25):2426-2431.

5. The Office of the National Coordinator for Health Information Technology. Connecting health and care for the nation: a shared nationwide interoperability roadmap. Final version 1.0. https://www.healthit.gov/sites/default/files/hie-interoperability/nationwide-interoperability-roadmap-final-version-1.0.pdf. Accessed May 22, 2019.

6. Detmer D, Bloomrosen M, Raymond B, Tang P. Integrated personal health records: transformative tools for consumer-centric care. BMC Med Inform Decis Mak. 2008;8:45.

7. Hersh WR, Totten AM, Eden KB, et al. Outcomes from health information exchange: systematic review and future research needs. JMIR Med Inform. 2015;3(4):e39.

8. Vest JR, Kern LM, Campion TR Jr, Silver MD, Kaushal R. Association between use of a health information exchange system and hospital admissions. Appl Clin Inform. 2014;5(1):219-231.

9. Vest JR, Jung HY, Ostrovsky A, Das LT, McGinty GB. Image sharing technologies and reduction of imaging utilization: a systematic review and meta-analysis. J Am Coll Radiol. 2015;12(12 pt B):1371-1379.e3.

10. Walker DM. Does participation in health information exchange improve hospital efficiency? Health Care Manag Sci. 2018;21(3):426-438.

11. Gordon BD, Bernard K, Salzman J, Whitebird RR. Impact of health information exchange on emergency medicine clinical decision making. West J Emerg Med. 2015;16(7):1047-1051.

12. Hincapie A, Warholak T. The impact of health information exchange on health outcomes. Appl Clin Inform. 2011;2(4):499-507.

13. Rahurkar S, Vest JR, Menachemi N. Despite the spread of health information exchange, there is little evidence of its impact on cost, use, and quality of care. Health Aff (Millwood). 2015;34(3):477-483.

14. Eden KB, Totten AM, Kassakian SZ, et al. Barriers and facilitators to exchanging health information: a systematic review. Int J Med Inform. 2016;88:44-51.

15. Hersh WR, Totten AM, Eden K, et al. The evidence base for health information exchange. In: Dixon BE, ed. Health Information Exchange: Navigating and Managing a Network of Health Information Systems. Cambridge, MA: Academic Press; 2016:213-229.

16. Blavin F, Ramos C, Cafarella Lallemand N, Fass J, Ozanich G, Adler-Milstein J. Analyzing the public benefit attributable to interoperable health information exchange. https://aspe.hhs.gov/system/files/pdf/258851/AnalyzingthePublicBenefitAttributabletoInteroperableHealth.pdf. Published July 2017. Accessed May 22, 2019.

Author and Disclosure Information

Julia Whealin is an Informatics Research Psychologist, Reese Omizo is a Physician Informaticist, and Christopher Lopez is an Associate Chief of Staff, all at the VA Pacific Islands Healthcare System in Honolulu, Hawaii. Julia Whealin is an Associate Clinical Professor at the University of Hawaii School of Medicine in Manoa.
Correspondence: Julia Whealin ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue
Federal Practitioner - 36(7)a
Page Number
322-326

Before the medical center implemented the HIE system, logistical barriers prevented most HCPs and managed care staff from obtaining needed community records. Staff faced challenges such as lacking time as well as rudimentary barriers, such as community clinics not responding to requests or the fax machine not working. Time remained a challenge after implementation, but this work demonstrated that the HIE system helped staff overcome many logistical barriers.

After implementation of the HIE system, staff reported an improvement in access and satisfaction related to retrieving community health records. These findings are consistent with most but not all evaluations of HIE systems.3,6,7,12,13 In the present work, staff used the system several times a month or several times a week, and most staff believed that access to the HIE system should be continued. Still, improvement was incomplete. The HIE system increased access to specific types of records (eg, reports) and health care systems (eg, large hospitals), but not others. As a result, the system was more useful for some staff than for others.

Research examining HIE systems in community and academic settings have identified factors that deter their use, such as poorly designed interfaces, inefficient workflow, and incomplete record availability.3,6,7,14,16 In the present project, incomplete record availability was a noted barrier. Additionally, a few staff reported system interface issues. However, most staff found the system easy to use as part of their daily workflow.

Because the HIE system had a meaningful, positive impact on VHA providers and staff, it will be sustained at VAPIHCS. Specifically, the contract with the HHIE has been renewed, and the number of user licenses has increased. Staff users now self-refer for the service or can be referred by their service chiefs.

Limitations

This work was designed to evaluate the effect of an HIE system on staff in 1 VHA setting; thus, findings may not be generalizable to other settings or HIE systems. Limitations of the present work include small sample size of respondents; limited time frame for responses; and limited response rate. The logical next step would be research efforts to compare access to the HIE system with no access on factors such as workload productivity, cost savings, and patient safety.

Conclusion

The vision of the HITECH Act was to improve the continuity and safety of health care via reliable and interoperable electronic sharing of clinical information across health care entities.6 This VHA quality improvement project demonstrated a meaningful improvement in staff’s level of satisfaction with access to community health records when staff used an externally developed HIE system. Not all types of records (eg, progress notes) were accessible, which resulted in the system being useful for most but not all staff.

In the future, the federal government’s internally developed Veterans Health Information Exchange (formerly known as the Virtual Lifetime Electronic Record [VLER]) is expected to enable VHA, the Department of Defense, and participating community care providers to access shared electronic health records nationally. However, until we can achieve that envisioned interoperability, VHA staff can use HIE and other clinical support applications to access health records.

More than 9 million veterans are enrolled in the Veterans Health Administration (VHA). A high percentage of veterans who use VHA services have multiple chronic conditions and complex medical needs.1 In addition to receiving health care from the VHA, many of these patients receive additional services from non-VHA providers in the community. Furthermore, recently enacted laws, such as the 2018 VA MISSION Act and the 2014 Veterans Choice Program, have increased veterans’ use of community health care services.

VHA staff face considerable barriers when seeking documentation about non-VHA services delivered in the community, as records can be fragmented across multiple health care systems. In many VHA medical centers, staff must telephone non-VHA sites of care and/or use time-consuming fax services to request community-based patient records. VA health care providers (HCPs) often complain that community records are not available when timely clinical decisions must be made, or that they must make those decisions without knowing past or co-occurring assessments and treatment plans. Without access to comprehensive health records, patients are at risk for duplicated treatment, medication errors, and death.2,3

Background

To improve the continuity and safety of health care, US governmental and health information experts stimulated formal communication among HCPs via the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act.4,5 One of the primary aims of the HITECH Act was to promote reliable and interoperable electronic sharing of clinical information through health information exchange (HIE) for both patients and HCPs. Monetary incentives encouraged regional, state, or state-funded organizations to create and promote HIE capabilities.

Presently, empirical data are not available that describe the effect of external HIE systems in VHA settings. However, data examining non-VHA settings suggest that HIE may improve quality of care, although findings are mixed. For example, some research has found that HIE reduces hospital admissions, duplicated test ordering, and health care costs and improves decision making, whereas other research has found no change.3,6-13 Barriers to HIE use noted in community settings include poorly designed interfaces, inefficient workflow, and incomplete record availability.3,6-10,14

A few US Department of Veterans Affairs (VA) medical centers have recently initiated contracts with HIE organizations. Because much of the present research evaluates internally developed HIE systems, scholars in the field have identified a pressing need for useful statistics before and after implementation of externally developed HIE systems.13,15 Additionally, scholars call for data examining nonacademic settings (eg, VHA medical centers) and for diverse patient populations (eg, individuals with chronic disorders, veterans).13

This quality improvement project had 2 goals. The first goal was to assess baseline descriptive statistics related to requesting/obtaining community health records in a VHA setting. The second goal was to evaluate VHA staff access to needed community health records (eg, records stemming from community consults) before and after implementation of an externally developed HIE system.

Methods

This project was a single-center, quality improvement evaluation examining the effect of implementing an HIE system developed by an external nonprofit organization. The project protocol was approved by the VA Pacific Islands Healthcare System (VAPIHCS) Evidence-Based Practices Council. Clinicians’ responses were anonymous, and data were reported only in aggregate. Assessment was conducted by an evaluator who was not associated with the HIE system developers or its implementation, reducing the chance of bias.15

Coinciding with HIE system implementation, and before staff had access to the system, VAPIHCS medical and managed care staff were invited to complete an online needs assessment tool. Voluntary trainings on the system were offered at various times on multiple days and lasted approximately 1 hour. Six months after the HIE system was implemented, a postassessment tool reevaluated HIE-related access.

VHA Setting and HIE System

VAPIHCS serves about 55,000 unique patients across a 2.6 million square-mile catchment area (Hawaii and Pacific Island territories). Facilities include a medium-sized, urban VA medical center and 7 suburban or rural/remote primary care outpatient clinics.

VAPIHCS contracted with Hawaii Health Information Exchange (HHIE), a nonprofit organization designated by the state of Hawaii to develop a seamless, secure HIE system. According to HHIE, 83% of the 23 hospitals in the state and 55% of Hawaii’s 2,927 active practicing physicians have adopted the HIE system (F. Chan, personal communication, December 12, 2018). HHIE’s data sources provide real-time access to a database of 20 million health records. Records include data such as patients’ reasons for referral, encounter diagnoses, medications, immunizations, and discharge instructions from many (but not all) HCPs in Hawaii.

HHIE reports that it has the capacity to interface with all electronic health records systems currently in use in the community (F. Chan, personal communication, December 12, 2018). Although the HIE system can provide directed exchange (ie, sending and receiving secure information electronically between HCPs), the HIE system implemented in the VAPIHCS was limited to query-retrieve (ie, practitioner-initiated requests for information from other community HCPs). Specifically, to access patient records, practitioners log in to the HIE portal and enter a patient’s name in a search window. The system then generates a consolidated virtual chart with data collected from all HIE data-sharing participants. To share records, community HCPs either build or enable an Integrating the Healthcare Enterprise (IHE) profile that provides an electronic communication interface into their data. However, VHA records were not made available to community HCPs at this initial stage.
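The query-retrieve flow described above amounts to a patient-name lookup merged across every data-sharing participant. A minimal sketch in Python (the function, record fields, and participant names are hypothetical illustrations, not HHIE’s actual interface):

```python
# Illustrative sketch of a query-retrieve HIE lookup (hypothetical API,
# not HHIE's real interface): each data-sharing participant exposes its
# records, and a patient-name query returns one consolidated virtual chart.

def query_retrieve(participants, patient_name):
    """Merge matching records from all data-sharing participants."""
    chart = []
    for source, records in participants.items():
        for rec in records:
            if rec["patient"] == patient_name:
                # Tag each entry with its source so provenance is preserved.
                chart.append({"source": source, **rec})
    # Present the consolidated virtual chart chronologically.
    return sorted(chart, key=lambda r: r["date"])

participants = {
    "Hospital A": [{"patient": "Jane Doe", "date": "2018-03-01",
                    "type": "discharge instructions"}],
    "Clinic B": [{"patient": "Jane Doe", "date": "2018-01-15",
                  "type": "encounter diagnosis"},
                 {"patient": "John Roe", "date": "2018-02-02",
                  "type": "immunization"}],
}

chart = query_retrieve(participants, "Jane Doe")
# chart holds Jane Doe's two records, oldest first, each tagged with its source.
```

A directed-exchange system would instead push records between specific HCPs; the query-retrieve model keeps the initiative with the requesting practitioner.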

Measures and Statistical Analysis

A template of quality improvement-related questions was adapted for this project with input from subject matter experts. Questions were then modified further based on interviews with 5 clinical and managed care staff members. The final tool, delivered online, consisted of up to 20 multiple-choice items and 2 open-ended questions. A 22-item evaluation tool was administered 6 months after system implementation. Frequencies were obtained for descriptive items, and group responses were compared across time.
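The descriptive analysis amounts to tabulating response counts and percentages for each item at each time point. A minimal sketch with made-up responses (not the project’s data):

```python
from collections import Counter

def frequencies(responses):
    """Tabulate (count, percentage) for each answer choice on one item."""
    counts = Counter(responses)
    n = len(responses)
    return {choice: (count, round(100 * count / n, 1))
            for choice, count in counts.items()}

# Hypothetical pre/post responses to a single record-access item.
pre = ["never", "sometimes", "sometimes", "rarely"]
post = ["most of the time", "most of the time", "sometimes"]

pre_freq = frequencies(pre)   # e.g., "sometimes" -> (2, 50.0)
post_freq = frequencies(post)
```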

Results

Thirty-nine staff (32 medical and 7 managed care staff) completed the needs assessment, and 20 staff (16 medical and 4 managed care staff) completed the postimplementation evaluation.

Before implementation of the HIE system, most staff (54%) indicated that they spent > 1 hour a week conducting tasks related to seeking and/or obtaining health records from the community. The largest percentage of staff (27%) requested > 10 community records during a typical week. Most respondents indicated that they would use an easy tool to instantly retrieve community health records at least 20 times per week (Table 1).

Preimplementation, 32.4% of respondents indicated that they could access community-based health records “sometimes.” Postimplementation, most respondents indicated they could access the records “most of the time” (Figure 1).

Preimplementation, staff most frequently indicated they were “very dissatisfied” with the current level of access to community records. Postimplementation, more staff were “somewhat satisfied” or “very satisfied” (Figure 2). Postimplementation, 48% of staff reported using the HIE system either several times a month or 2 to 4 times a week, 19% used the system daily, 19% used it only 1 to 2 times, and 14% never used the system. Most staff (67%) reported that the system somewhat improved access to records and supported continuing the contract with the HIE system. Conversely, 18% of respondents said that their access did not improve enough for the system to be of use to them.

Preimplementation, staff most frequently indicated that they did not have time (28.6%) or sufficient staff (25.7%) to request records (Table 2). Postimplementation, staff most frequently (33.3%) indicated that they had no problems accessing the HIE system, but 6.7% reported having time or interface/software difficulties.

Discussion

This report assessed a quality improvement project designed to increase VHA access to community health records via an external HIE system. Prior to this work, no data were available on use, barriers, and staff satisfaction related to implementing an externally developed HIE system within a VA medical center.13,15

Before the medical center implemented the HIE system, logistical barriers prevented most HCPs and managed care staff from obtaining needed community records. Staff faced challenges such as lack of time as well as rudimentary barriers, including community clinics not responding to requests or fax machines not working. Time remained a challenge after implementation, but this work demonstrated that the HIE system helped staff overcome many logistical barriers.

After implementation of the HIE system, staff reported an improvement in access and satisfaction related to retrieving community health records. These findings are consistent with most but not all evaluations of HIE systems.3,6,7,12,13 In the present work, staff used the system several times a month or several times a week, and most staff believed that access to the HIE system should be continued. Still, improvement was incomplete. The HIE system increased access to specific types of records (eg, reports) and health care systems (eg, large hospitals), but not others. As a result, the system was more useful for some staff than for others.

Research examining HIE systems in community and academic settings has identified factors that deter their use, such as poorly designed interfaces, inefficient workflow, and incomplete record availability.3,6,7,14,16 In the present project, incomplete record availability was a noted barrier. Additionally, a few staff reported system interface issues. However, most staff found the system easy to use as part of their daily workflow.

Because the HIE system had a meaningful, positive impact on VHA providers and staff, it will be sustained at VAPIHCS. Specifically, the contract with the HHIE has been renewed, and the number of user licenses has increased. Staff users now self-refer for the service or can be referred by their service chiefs.

Limitations

This work was designed to evaluate the effect of an HIE system on staff in 1 VHA setting; thus, findings may not be generalizable to other settings or HIE systems. Limitations of the present work include a small sample of respondents, a limited time frame for responses, and a low response rate. The logical next step would be research efforts to compare access to the HIE system with no access on factors such as workload productivity, cost savings, and patient safety.

Conclusion

The vision of the HITECH Act was to improve the continuity and safety of health care via reliable and interoperable electronic sharing of clinical information across health care entities.6 This VHA quality improvement project demonstrated a meaningful improvement in staff’s level of satisfaction with access to community health records when staff used an externally developed HIE system. Not all types of records (eg, progress notes) were accessible, which resulted in the system being useful for most but not all staff.

In the future, the federal government’s internally developed Veterans Health Information Exchange (formerly known as the Virtual Lifetime Electronic Record [VLER]) is expected to enable VHA, the Department of Defense, and participating community care providers to access shared electronic health records nationally. However, until we can achieve that envisioned interoperability, VHA staff can use HIE and other clinical support applications to access health records.

References

1. Yu W, Ravelo A, Wagner TH, et al. Prevalence and costs of chronic conditions in the VA health care system. Med Care Res Rev. 2003;60(3)(suppl):146S-167S.

2. Bourgeois FC, Olson KL, Mandl KD. Patients treated at multiple acute health care facilities: quantifying information fragmentation. Arch Intern Med. 2010;170(22):1989-1995.

3. Rudin RS, Motala A, Goldzweig CL, Shekelle PG. Usage and effect of health information exchange: a systematic review. Ann Intern Med. 2014;161(11):803-811.

4. Blumenthal D. Implementation of the federal health information technology initiative. N Engl J Med. 2011;365(25):2426-2431.

5. The Office of the National Coordinator for Health Information Technology. Connecting health and care for the nation: a shared nationwide interoperability roadmap. Final version 1.0. https://www.healthit.gov/sites/default/files/hie-interoperability/nationwide-interoperability-roadmap-final-version-1.0.pdf. Accessed May 22, 2019.

6. Detmer D, Bloomrosen M, Raymond B, Tang P. Integrated personal health records: transformative tools for consumer-centric care. BMC Med Inform Decis Mak. 2008;8:45.

7. Hersh WR, Totten AM, Eden KB, et al. Outcomes from health information exchange: systematic review and future research needs. JMIR Med Inform. 2015;3(4):e39.

8. Vest JR, Kern LM, Campion TR Jr, Silver MD, Kaushal R. Association between use of a health information exchange system and hospital admissions. Appl Clin Inform. 2014;5(1):219-231.

9. Vest JR, Jung HY, Ostrovsky A, Das LT, McGinty GB. Image sharing technologies and reduction of imaging utilization: a systematic review and meta-analysis. J Am Coll Radiol. 2015;12(12 pt B):1371-1379.e3.

10. Walker DM. Does participation in health information exchange improve hospital efficiency? Health Care Manag Sci. 2018;21(3):426-438.

11. Gordon BD, Bernard K, Salzman J, Whitebird RR. Impact of health information exchange on emergency medicine clinical decision making. West J Emerg Med. 2015;16(7):1047-1051.

12. Hincapie A, Warholak T. The impact of health information exchange on health outcomes. Appl Clin Inform. 2011;2(4):499-507.

13. Rahurkar S, Vest JR, Menachemi N. Despite the spread of health information exchange, there is little evidence of its impact on cost, use, and quality of care. Health Aff (Millwood). 2015;34(3):477-483.

14. Eden KB, Totten AM, Kassakian SZ, et al. Barriers and facilitators to exchanging health information: a systematic review. Int J Med Inform. 2016;88:44-51.

15. Hersh WR, Totten AM, Eden K, et al. The evidence base for health information exchange. In: Dixon BE, ed. Health Information Exchange: Navigating and Managing a Network of Health Information Systems. Cambridge, MA: Academic Press; 2016:213-229.

16. Blavin F, Ramos C, Cafarella Lallemand N, Fass J, Ozanich G, Adler-Milstein J. Analyzing the public benefit attributable to interoperable health information exchange. https://aspe.hhs.gov/system/files/pdf/258851/AnalyzingthePublicBenefitAttributabletoInteroperableHealth.pdf. Published July 2017. Accessed May 22, 2019.

Issue
Federal Practitioner - 36(7)a
Page Number
322-326

Beyond the Polygraph: Deception Detection and the Autonomic Nervous System


The US Department of Defense (DoD) and law enforcement agencies around the country utilize polygraph as an aid in security screenings and interrogation. It is assumed that a person being interviewed will have a visceral response when attempting to deceive the interviewer, and that this response can be detected by measuring the change in vital signs between questions. Because vital signs are only an indirect measurement of deception-induced stress, the polygraph machine may produce a false-positive or false-negative result if the person being examined has an inherited or acquired condition that affects the autonomic nervous system (ANS).

A variety of diseases, from alcohol use disorder to rheumatoid arthritis, can affect the ANS, as can a multitude of commonly prescribed drugs. Although still in their infancy, functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) deception detection techniques circumvent these issues. Dysautonomias may be an underappreciated cause of error in polygraph interpretation. Polygraph examiners and DoD agencies should be aware of the potential for these disorders to interfere with interpretation of results. In the near future, other modalities that do not measure autonomic variables may be utilized to avoid these pitfalls.

Polygraphy

Throughout history, humans have been interested in techniques and devices that can discern lies from the truth. Even in the ancient era, it was known that the act of lying had physiologic effects. In ancient Israel, if a woman accused of adultery should develop a swollen abdomen after drinking “waters of bitterness,” she was considered guilty of the crime, as described in Numbers 5:11-31. In Ancient China, those accused of fraud would be forced to hold dry rice in their mouths; if the expectorated rice was dry, the suspect was found guilty.1 We now know that catecholamines, particularly epinephrine, secreted during times of stress, cause relaxation of smooth muscle, leading to reduced bowel motility and dry mouth.2-4 However, most methods before the modern era were based more on superstition and chance rather than any sound physiologic premise.

When asked to discern truth from falsehood based on their own perceptions, people correctly identify lies as false merely 47% of the time and truth as nondeceptive about 61% of the time.5 In short, unaided, we are very poor lie detectors, which has spurred a great deal of interest in technology that can aid in lie detection. With enhanced technology and understanding of human physiology came a renewed interest in the field. Because it was known that vital signs such as blood pressure (BP), heart rate, and breathing could be affected by the stress brought on by deception, quantifying and measuring those responses became a goal. In 1881, the Italian criminologist Cesare Lombroso invented a glove that, when worn by a suspect, measured BP.6-8 Changes in BP also were the target variable of the systolic BP deception test invented by William M. Marston, PhD, in 1915.8 Marston also experimented with measurements of other variables, such as muscle tension.9 In 1921, John Larson invented the first modern polygraph machine.7

Procedures

Today’s polygraph builds on these techniques. A standard polygraph measures respiration, heart rate, BP, and sudomotor function (sweating). Respiration is measured via strain gauges strapped around the chest and abdomen that respond to chest expansion during inhalation. BP and pulse can be measured through a variety of means, including finger pulse measurement or sphygmomanometer.8

Perspiration is measured by skin electrical conductance. Human sweat contains a variety of cations and anions—mostly sodium and chloride, but also potassium, bicarbonate, and lactate. The presence of these electrolytes alters electrical conduction at the skin surface when sweat is released.10
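Conductance is simply the reciprocal of resistance, so electrolyte-rich sweat, which lowers skin resistance, raises the conductance the polygraph records. A toy calculation with illustrative (not physiologically calibrated) resistance values:

```python
def skin_conductance_uS(resistance_ohms):
    """Convert skin resistance (ohms) to conductance in microsiemens (uS)."""
    return 1e6 / resistance_ohms

# Illustrative values only: perspiring skin conducts better because sweat
# electrolytes lower its resistance, so measured conductance rises.
dry = skin_conductance_uS(500_000)       # dry skin      -> 2.0 uS
sweating = skin_conductance_uS(100_000)  # sweating skin -> 10.0 uS
```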

The exact questioning procedure used to perform a polygraph examination can vary. The Comparison Question Test is most commonly used. In this format, the interview consists of questions that are relevant to the investigation at hand, interspersed with control questions. The examiner compares the changes in vital signs and skin conduction to the baseline measurements generated during the pretest interview and during control questions.8 Using these standardized techniques, some studies have shown accuracy rates between 83% and 95% in controlled settings.8 However, studies performed outside of the polygraph community have found very high false-positive rates, up to 50% or greater.11
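The practical meaning of these accuracy figures depends heavily on the base rate of deception in the screened population. A quick Bayes’ theorem sketch (with illustrative numbers, not figures from the studies cited) shows why a high false-positive rate is so damaging in screening settings where few examinees are actually deceptive:

```python
def positive_predictive_value(sensitivity, false_positive_rate, base_rate):
    """P(deceptive | flagged): Bayes' theorem for a screening test."""
    true_positives = sensitivity * base_rate
    false_positives = false_positive_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Illustrative screening scenario: 90% sensitivity, a 10% false-positive
# rate, and only 1% of examinees actually being deceptive.
ppv = positive_predictive_value(0.90, 0.10, 0.01)
# ppv is about 0.083: fewer than 1 in 10 flagged examinees is deceptive.
```

The same arithmetic explains why a test that looks acceptable in controlled interrogation studies can perform poorly when used for mass security screening.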

The US Supreme Court has ruled that individual jurisdictions can decide whether or not to admit polygraph evidence in court, and the US Court of Appeals for the Eleventh Circuit has ruled that polygraph results are only admissible if both parties agree to it and are given sufficient notice.12,13 Currently, New Mexico is the only state that allows polygraph results to be used as evidence without a pretrial agreement; all other states either require such an agreement or forbid the results to be used as evidence.14

Although rarely used in federal and state courts as evidence, polygraphy is commonly used during investigations and in the hiring process of government agencies. DoD Directive 5210.48 and Instruction 5210.91 enable DoD investigative organizations (eg, Naval Criminal Investigative Service, National Security Agency, US Army Investigational Command) to use polygraph as an aid during investigations into suspected involvement with foreign intelligence, terrorism against the US, mishandling of classified documents, and other serious violations.15

The Role of the Physician in Polygraph Assessment

Only rarely is the physician called upon to provide information regarding an individual’s medical condition or related medication use and the effects of these on polygraph results. In such cases, however, the physician must remember the primary fiduciary duty to the patient. Disclosure of medical conditions cannot be made without the patient’s consent, save in very specific situations (eg, Commanding Officer Inquiry, Tarasoff Duty to Protect). It is the polygraph examiner’s responsibility to be aware of potential confounders in a particular examination.10

Physicians in administrative or supervisory positions can have a responsibility to advise security and other officials regarding the fitness for certain duties of candidates with whom there is no physician-patient relationship. This may include an individual’s ability to undergo polygraph examination and the validity of such results. However, when a physician-patient relationship is involved, care must be taken to ensure that the patient understands that the relationship is protected both by professional standards and by law and that no information will be shared without the patient’s authorization (aside from those rare exceptions provided by law). Often, a straightforward explanation to the patient of the medical condition and any medication’s potential effects on polygraph results will be sufficient, allowing the patient to report as much as is deemed necessary to the polygraph examiner.

Polygraphy Pitfalls

Polygraphy presupposes that the subject will have a consistent and measurable physiologic response when he or she attempts to deceive the interviewer. The changes in BP, heart rate, respirations, and perspiration that are detected by polygraphy and interpreted by the examiner are controlled by the ANS (Table 1). There are a variety of diseases that are known to cause autonomic dysfunction (dysautonomia). Small fiber autonomic neuropathies often result in loss of sweating and altered heart rate and BP variation and can arise from many underlying conditions. Synucleinopathies, such as Parkinson disease, alter cardiovascular reflexes.14,16

Even diseases not commonly recognized as having a predominant clinical impact on ANS function can have measurable physiologic effects. For example, approximately 60% of patients with rheumatoid arthritis have blunted cardiovagal baroreceptor responses and heart rate variability.17 ANS dysfunction is also a common sequela of alcoholism.18 Patients with diabetes mellitus often have an elevated resting heart rate and low heart rate variability due to dysregulated β-adrenergic activity.19 Reduced baroreceptor response and reduced heart rate variability could impair the polygraph interpreter’s ability to discern responses using heart rate. Individuals with ANS dysfunction that blunts physiologic responses could have inconclusive or, worse, false-negative polygraph results due to lack of variation between control and target questions.
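As a toy illustration of why blunted variability matters to the examiner, consider SDNN, a simple heart rate variability statistic: the standard deviation of beat-to-beat (RR) intervals. The interval series below are invented for illustration only:

```python
# Hypothetical illustration: SDNN (standard deviation of beat-to-beat
# RR intervals, in ms) shrinks when autonomic responses are blunted,
# compressing the signal an examiner reads between control and
# relevant questions.
import statistics

def sdnn(rr_intervals_ms):
    """Standard deviation of RR intervals (ms), a basic HRV measure."""
    return statistics.stdev(rr_intervals_ms)

normal_rr  = [812, 790, 845, 768, 830, 801, 856, 779]  # healthy variation
blunted_rr = [802, 799, 805, 797, 804, 800, 806, 798]  # blunted variation

print(sdnn(normal_rr) > sdnn(blunted_rr))  # True: flattened trace
```

In the blunted series, the spread between responses to control and relevant questions collapses toward the measurement noise, which is the mechanism behind inconclusive or false-negative results described above.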

To our knowledge, no study has been performed on the validity of polygraphy in patients with any form of dysautonomia. Additionally, a 2011 process and compliance study of the DoD polygraph program specifically recommended that “adjudicators would benefit from training in polygraph capabilities and limitations.”20 Although specific requirements vary from program to program, all programs accredited by the American Polygraph Association provide training in physiology, psychology, and standardization of test results.

Many commonly prescribed medications have effects on the ANS that could affect the results of a polygraph exam (Table 2). For example, β-blockers reduce β-adrenergic receptor activation in cardiac muscle and blood vessels, lowering heart rate, heart rate variability, cardiac contractility, and BP.21 This class of medication is prescribed for a variety of conditions, including congestive heart failure, hypertension, panic disorder, and posttraumatic stress disorder. Thus, a patient taking a β-blocker will have a blunted physiologic response to stress and an increased likelihood of an inconclusive or false-negative polygraph exam.

Some over-the-counter medications also affect autonomic function. Sympathomimetics such as pseudoephedrine and antihistamines with anticholinergic activity, such as diphenhydramine, can increase both heart rate and BP.22,23 Of the 10 most prescribed medications of 2016, 5 have direct effects on the ANS or on the variables measured by the polygraph machine.24 An exhaustive list of medication effects on autonomic function is beyond the scope of this article.

Of special interest to the DoD and the military is mefloquine, an antimalarial drug that has been used by military personnel deployed to malaria-endemic regions.25 In murine models, mefloquine has been shown to disrupt autonomic and respiratory control in the central nervous system.26 The neuropsychiatric adverse effects of mefloquine are well documented and can last for years after exposure to the drug.27 Therefore, mefloquine could affect the results of a polygraph test both through direct toxic effects on the ANS and by causing anxiety and depression, potentially altering the subject’s response to questioning.

Alternative Modalities

Given the pitfalls inherent in external physiologic measures for lie detection, additional modalities that bypass measurement of ANS-governed responses have been sought. Indeed, the integration and combination of more comprehensive modalities has come to be called forensic credibility assessment.

Functional MRI

Beginning in 1991, researchers began using fMRI to observe real-time perfusion changes in areas of the cerebral cortex between times of rest and mental stimulation.27 This modality provides a noninvasive technique for viewing which specific parts of the brain are stimulated during activity. When someone is engaged in active deception, the dorsolateral prefrontal cortex has greater perfusion than when the subject is engaged in truth telling.28 Because fMRI evaluates the central nervous system directly, it avoids the potential inaccuracies seen in some subjects with autonomic irregularities. In fact, fMRI may have superior sensitivity and specificity for lie detection compared with conventional polygraphy.29

Significant limitations to the use of fMRI include the necessity of expensive specialized equipment and trained personnel to operate the MRI. Agencies that use polygraph examinations may be unwilling to make such an investment. Further, subjects with metallic foreign bodies or noncompatible medical implants cannot undergo the MRI procedure. Finally, there have been bioethical and legal concerns raised that measuring brain activity during interrogation may endanger “cognitive freedom” and may even be considered unreasonable search and seizure under the Fourth Amendment to the US Constitution.30 However, fMRI—like polygraphy—can only measure the difference between brain perfusion in 2 states. The idea of fMRI as “mind reading” is largely a misconception.31

Electroencephalography

Various EEG modalities have received increased interest for lie detection. In EEG, electrodes are used to measure the summation of a multitude of postsynaptic potentials and the local voltage gradient they produce when cortical pyramidal neurons fire in synchrony.32 These voltage gradients are detectable at the scalp surface. Shortly after the invention of EEG, it was observed that specific stimuli generated unique and predictable changes in EEG morphology. These event-related potentials (ERPs) are detectable by scalp EEG shortly after the stimulus is given.33

ERPs can be elicited by a multitude of sensory stimuli, have a predictable and reproducible morphology, and are believed to be a psychophysiologic correlate of mental processing of stimuli.34 The P300 is an ERP characterized by a positive change in voltage occurring 300 milliseconds after a stimulus. It is associated with stimulus processing and categorization.35 Since deception is a complex cognitive process involving recognizing pertinent stimuli and inventing false responses to them, it was theorized that detection of a P300 ERP during an interview would mean the subject truly recognizes the stimulus while denying such knowledge. Early studies of the P300 had variable accuracy for lie detection, roughly 40% to 80%, depending on the study, and the rate of false negatives increased when subjects were coached on countermeasures, such as increasing the significance of distractor data or counting backward by 7s.36,37 Later studies have found ways of minimizing these issues, such as detection of a P900 ERP (a cortical potential at 900 milliseconds) that can be seen when subjects are attempting countermeasures.38
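The averaging at the heart of ERP detection can be sketched as follows; the waveform, amplitudes, and noise level here are synthetic and purely illustrative:

```python
# Synthetic sketch of ERP recovery by epoch averaging: a P300-like bump
# buried in noise in single trials emerges once many stimulus-locked
# epochs are averaged. All numbers are invented for illustration.
import random

random.seed(0)
P300_WINDOW = range(250, 400)  # samples ~250-400 ms post-stimulus at 1 kHz

def make_epoch(has_p300):
    """One 600-sample epoch of Gaussian noise, plus an optional bump."""
    epoch = [random.gauss(0.0, 5.0) for _ in range(600)]
    if has_p300:
        for t in P300_WINDOW:
            epoch[t] += 8.0  # small positive deflection near 300 ms
    return epoch

def average_erp(epochs):
    """Pointwise mean across epochs; uncorrelated noise cancels."""
    return [sum(col) / len(epochs) for col in zip(*epochs)]

erp = average_erp([make_epoch(True) for _ in range(200)])
window_mean = sum(erp[t] for t in P300_WINDOW) / len(P300_WINDOW)
baseline_mean = sum(erp[:200]) / 200
print(window_mean - baseline_mean > 5)  # True: averaged P300 stands out
```

Because the background EEG is not time-locked to the stimulus, its mean shrinks with the number of averaged epochs, while the stimulus-locked component does not; this is why a deflection invisible in any single trial becomes unmistakable in the average.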

Another technique for increasing accuracy in EEG-mediated lie detection is measurement of the multifaceted electroencephalographic response (MER), which involves a more detailed analysis of multiple EEG electrode sites and of how the signal changes over time, using both visual comparison of multiple trials and bootstrap analysis.37 In particular, memory- and encoding-related multifaceted electroencephalographic response (MERMER) testing, which couples the P300 with an electrically negative impulse recorded at the frontal lobe and phasic changes in the global EEG, has shown superior accuracy to the P300 alone.37
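The bootstrap step can be illustrated with invented single-trial amplitudes: resample each condition with replacement and count how often the probe-stimulus mean exceeds the irrelevant-stimulus mean. The distributions below are hypothetical, not from any cited study:

```python
# Hypothetical sketch of bootstrap comparison of ERP amplitudes:
# resample single-trial amplitudes with replacement and estimate the
# probability that probe stimuli evoke a larger mean response than
# irrelevant stimuli. All values are invented for illustration.
import random

random.seed(1)
probe      = [random.gauss(6.0, 4.0) for _ in range(40)]  # microvolts
irrelevant = [random.gauss(1.0, 4.0) for _ in range(40)]

def bootstrap_prob_greater(a, b, n_boot=2000):
    """Fraction of bootstrap resamples in which mean(a) > mean(b)."""
    wins = 0
    for _ in range(n_boot):
        resample_a = [random.choice(a) for _ in a]
        resample_b = [random.choice(b) for _ in b]
        if sum(resample_a) / len(a) > sum(resample_b) / len(b):
            wins += 1
    return wins / n_boot

p = bootstrap_prob_greater(probe, irrelevant)
print(p > 0.95)  # True: probe amplitudes are reliably larger
```

The appeal of the bootstrap here is that it makes no assumption about the shape of the single-trial amplitude distribution, which in real EEG is far from Gaussian.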

The benefits of EEG compared with fMRI include large reductions in cost, space requirements, and restrictions on use (EEG is safe for virtually all patients, including those with metallic foreign bodies). However, like fMRI, EEG still requires trained personnel to operate and interpret, and it has yet to be tested outside of the laboratory.

Conclusion

The ability to detect deception is an important factor in determining security risk and adjudication of legal proceedings, but untrained persons are surprisingly poor at discerning truth from lies. The polygraph has been used by law enforcement and government agencies for decades to aid in interrogation and the screening of employees for security clearances and other types of access. However, results are vulnerable to inaccuracies in subjects with autonomic disorders and may be confounded by multiple medications. While emerging technologies such as fMRI and EEG may allow superior accuracy by bypassing ANS-based physiologic outputs, the polygraph examiner and the physician must be aware of the effect of autonomic dysfunction and of the medications that affect the ANS. This is particularly true within military medicine, as many patients within this population are subject to polygraph examination.

References

1. Ford EB. Lie detection: historical, neuropsychiatric and legal dimensions. Int J Law Psychiatry. 2006;29(3):159-177.

2. Ohrn PG. Catecholamine infusion and gastrointestinal propulsion in the rat. Acta Chir Scand Suppl. 1979;(461):43-52.

3. Sakamoto H. The study of catecholamine, acetylcholine and bradykinin in buccal circulation in dogs. Kurume Med J. 1979;26(2):153-162.

4. Bond CF Jr, Depaulo BM. Accuracy of deception judgments. Pers Soc Psychol Rev. 2006;10(3):214-234.

5. Vicianova M. Historical techniques of lie detection. Eur J Psychol. 2015;11(3):522-534.

6. Matté JA. Forensic Psychophysiology Using the Polygraph: Scientific Truth Verification, Lie Detection. Williamsville, NY: JAM Publications; 2012.

7. Segrave K. Lie Detectors: A Social History. Jefferson, NC: McFarland & Company; 2004.

8. Nelson R. Scientific basis for polygraph testing. Polygraph. 2015;44(1):28-61.

9. Boucsein W. Electrodermal Activity. New York, NY: Springer Publishing; 2012.

10. US Congress, Office of Technology Assessment. Scientific validity of polygraph testing: a research review and evaluation. https://ota.fas.org/reports/8320.pdf. Published 1983. Accessed June 12, 2019.

11. United States v Scheffer, 523 US 303 (1998).

12. United States v Piccinonna, 729 F Supp 1336 (SD Fl 1990).

13. Fridman DS, Janoe JS. The state of judicial gatekeeping in New Mexico. https://cyber.harvard.edu/daubert/nm.htm. Updated April 17, 1999. Accessed May 20, 2019.

14. Gibbons CH. Small fiber neuropathies. Continuum (Minneap Minn). 2014;20(5 Peripheral Nervous System Disorders):1398-1412.

15. US Department of Defense. Directive 5210.48: Credibility assessment (CA) program. https://fas.org/irp/doddir/dod/d5210_48.pdf. Updated February 12, 2018. Accessed May 30, 2019.

16. Postuma RB, Gagnon JF, Pelletier A, Montplaisir J. Prodromal autonomic symptoms and signs in Parkinson’s disease and dementia with Lewy bodies. Mov Disord. 2013;28(5):597-604.

17. Adlan AM, Lip GY, Paton JF, Kitas GD, Fisher JP. Autonomic function and rheumatoid arthritis: a systematic review. Semin Arthritis Rheum. 2014;44(3):283-304.

18. Di Ciaula A, Grattagliano I, Portincasa P. Chronic alcoholics retain dyspeptic symptoms, pan-enteric dysmotility, and autonomic neuropathy before and after abstinence. J Dig Dis. 2016;17(11):735-746.

19. Thaung HA, Baldi JC, Wang H, et al. Increased efferent cardiac sympathetic nerve activity and defective intrinsic heart rate regulation in type 2 diabetes. Diabetes. 2015;64(8):2944-2956.

20. US Department of Defense, Office of the Undersecretary of Defense for Intelligence. Department of Defense polygraph program process and compliance study: study report. https://fas.org/sgp/othergov/polygraph/dod-poly.pdf. Published December 19, 2011. Accessed May 20, 2019.

21. Ladage D, Schwinger RH, Brixius K. Cardio-selective beta-blocker: pharmacological evidence and their influence on exercise capacity. Cardiovasc Ther. 2013;31(2):76-83.

22. D’Souza RS, Mercogliano C, Ojukwu E, et al. Effects of prophylactic anticholinergic medications to decrease extrapyramidal side effects in patients taking acute antiemetic drugs: a systematic review and meta-analysis. Emerg Med J. 2018;35:325-331.

23. Gheorghiev MD, Hosseini F, Moran J, Cooper CE. Effects of pseudoephedrine on parameters affecting exercise performance: a meta-analysis. Sports Med Open. 2018;4(1):44.

24. Frellick M. Top-selling, top-prescribed drugs for 2016. https://www.medscape.com/viewarticle/886404. Published October 2, 2017. Accessed May 20, 2019.

25. Lall DM, Dutschmann M, Deuchars J, Deuchars S. The anti-malarial drug mefloquine disrupts central autonomic and respiratory control in the working heart brainstem preparation of the rat. J Biomed Sci. 2012;19:103.

26. Ritchie EC, Block J, Nevin RL. Psychiatric side effects of mefloquine: applications to forensic psychiatry. J Am Acad Psychiatry Law. 2013;41(2):224-235.

27. Belliveau JW, Kennedy DN Jr, McKinstry RC, et al. Functional mapping of the human visual cortex by magnetic resonance imaging. Science. 1991;254(5032):716-719.

28. Ito A, Abe N, Fujii T, et al. The contribution of the dorsolateral prefrontal cortex to the preparation for deception and truth-telling. Brain Res. 2012;1464:43-52.

29. Langleben DD, Hakun JG, Seelig D. Polygraphy and functional magnetic resonance imaging in lie detection: a controlled blind comparison using the concealed information test. J Clin Psychiatry. 2016;77(10):1372-1380.

30. Boire RG. Searching the brain: the Fourth Amendment implications of brain-based deception detection devices. Am J Bioeth. 2005;5(2):62-63; discussion W5.

31. Langleben DD. Detection of deception with fMRI: are we there yet? Legal Criminol Psychol. 2008;13(1):1-9.

32. Marcuse LV, Fields MC, Yoo J. Rowan’s Primer of EEG. 2nd ed. Edinburgh, Scotland, United Kingdom: Elsevier; 2016.

33. Farwell LA, Donchin E. The truth will out: interrogative polygraphy (“lie detection”) with event-related brain potentials. Psychophysiology. 1991;28(5):531-547.

34. Sur S, Sinha VK. Event-related potential: an overview. Ind Psychiatry J. 2009;18(1):70-73.

35. Polich J. Updating P300: an integrative theory of P3a and P3b. Clin Neurophysiol. 2007;118(10):2128-2148.

36. Mertens R, Allen JJB. The role of psychophysiology in forensic assessments: deception detection, ERPs, and virtual reality mock crime scenarios. Psychophysiology. 2008;45(2):286-298.

37. Rosenfeld JP, Labkovsky E. New P300-based protocol to detect concealed information: resistance to mental countermeasures against only half the irrelevant stimuli and a possible ERP indicator of countermeasures. Psychophysiology. 2010;47(6):1002-1010.

38. Farwell LA, Smith SS. Using brain MERMER testing to detect knowledge despite efforts to conceal. J Forensic Sci. 2001;46(1):135-143.

Author and Disclosure Information

Glen Cook is a Staff Neurologist, and Charles Mitschow is a Psychiatry Resident, both at Naval Medical Center Portsmouth in Virginia.
Correspondence: Charles Mitschow (charles.e.mitschow.mil @mail.mil)

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue
Federal Practitioner - 36(7)a
Page Number
316-321

Article PDF
Article PDF
Related Articles

The US Department of Defense (DoD) and law enforcement agencies around the country utilize polygraph as an aid in security screenings and interrogation. It is assumed that a person being interviewed will have a visceral response when attempting to deceive the interviewer, and that this response can be detected by measuring the change in vital signs between questions. By using vital signs as an indirect measurement of deception-induced stress, the polygraph machine may provide a false positive or negative result if a patient has an inherited or acquired condition that affects the autonomic nervous system (ANS).

 

A variety of diseases from alcohol use disorder to rheumatoid arthritis can affect the ANS. In addition, a multitude of commonly prescribed drugs can affect the ANS. Although in their infancy, functional magnetic resonance imaging (fMRI) and EEG (electroencephalogram) deception detection techniques circumvent these issues. Dysautonomias may be an underappreciated cause of error in polygraph interpretation. Polygraph examiners and DoD agencies should be aware of the potential for these disorders to interfere with interpretation of results. In the near future, other modalities that do not measure autonomic variables may be utilized to avoid these pitfalls.

 

Polygraphy

Throughout history, humans have been interested in techniques and devices that can discern lies from the truth. Even in the ancient era, it was known that the act of lying had physiologic effects. In ancient Israel, if a woman accused of adultery should develop a swollen abdomen after drinking “waters of bitterness,” she was considered guilty of the crime, as described in Numbers 5:11-31. In Ancient China, those accused of fraud would be forced to hold dry rice in their mouths; if the expectorated rice was dry, the suspect was found guilty.1 We now know that catecholamines, particularly epinephrine, secreted during times of stress, cause relaxation of smooth muscle, leading to reduced bowel motility and dry mouth.2-4 However, most methods before the modern era were based more on superstition and chance rather than any sound physiologic premise.

When asked to discern the truth from falsehood based on their own perceptions, people correctly discern lies as false merely 47% of the time and truth as nondeceptive about 61% of the time.5 In short, unaided, we are very poor lie detectors. Therefore, a great deal of interest in technology that can aid in lie detection has ensued. With enhanced technology and understanding of human physiology came a renewed interest in lie detection. Since it was known that vital signs such as blood pressure (BP), heart rate, and breathing could be affected by the stressful situation brought on by deception, quantifying and measuring those responses in an effort to detect lying became a goal. In 1881, the Italian criminologist Cesare Lombroso invented a glove that when worn by a suspect, measured their BP.6-8 Changes in BP also were the target variable of the systolic BP deception test invented by William M. Marston, PhD, in 1915.8 Marston also experimented with measurements of other variables, such as muscle tension.9 In 1921, John Larson invented the first modern polygraph machine.7

 

 

Procedures

Today’s polygraph builds on these techniques. A standard polygraph measures respiration, heart rate, BP, and sudomotor function (sweating). Respiration is measured via strain gauges strapped around the chest and abdomen that respond to chest expansion during inhalation. BP and pulse can be measured through a variety of means, including finger pulse measurement or sphygmomanometer.8

Perspiration is measured by skin electrical conductance. Human sweat contains a variety of cations and anions—mostly sodium and chloride, but also potassium, bicarbonate, and lactate. The presence of these electrolytes alter electrical conduction at the skin surface when sweat is released.10

The exact questioning procedure used to perform a polygraph examination can vary. The Comparison Question Test is most commonly used. In this format, the interview consists of questions that are relevant to the investigation at hand, interspersed with control questions. The examiner compares the changes in vital signs and skin conduction to the baseline measurements generated during the pretest interview and during control questions.8 Using these standardized techniques, some studies have shown accuracy rates between 83% and 95% in controlled settings.8 However, studies performed outside of the polygraph community have found very high false positive rates, up to 50% or greater.11

The US Supreme Court has ruled that individual jurisdictions can decide whether or not to admit polygraph evidence in court, and the US Court of Appeals for the Eleventh Circuit has ruled that polygraph results are only admissible if both parties agree to it and are given sufficient notice.12,13 Currently, New Mexico is the only state that allows polygraph results to be used as evidence without a pretrial agreement; all other states either require such an agreement or forbid the results to be used as evidence.14

Although rarely used in federal and state courts as evidence, polygraphy is commonly used during investigations and in the hiring process of government agencies. DoD Directive 5210.48 and Instruction 5210.91 enable DoD investigative organizations (eg, Naval Criminal Investigative Service, National Security Agency, US Army Investigational Command) to use polygraph as an aid during investigations into suspected involvement with foreign intelligence, terrorism against the US, mishandling of classified documents, and other serious violations.15

The Role of the Physician in Polygraph Assessment

It may be rare that the physician is called upon to provide information regarding an individual’s medical condition or related medication use and the effect of these on polygraph results. In such cases, however, the physician must remember the primary fiduciary duty to the patient. Disclosure of medical conditions cannot be made without the patient’s consent, save in very specific situations (eg, Commanding Officer Inquiry, Tarasoff Duty to Protect, etc). It is the polygraph examiner’s responsibility to be aware of potential confounders in a particular examination.10

Physicians can have a responsibility when in administrative or supervisory positions, to advise security and other officials regarding the fitness for certain duties of candidates with whom there is no physician-patient relationship. This may include an individual’s ability to undergo polygraph examination and the validity of such results. However, when a physician-patient relationship is involved, care must be given to ensure that the patient understands that the relationship is protected both by professional standards and by law and that no information will be shared without the patient’s authorization (aside from those rare exceptions provided by law). Often, a straightforward explanation to the patient of the medical condition and any medication’s potential effects on polygraph results will be sufficient, allowing the patient to report as much as is deemed necessary to the polygraph examiner.

 

 

Polygraphy Pitfalls

Polygraphy presupposes that the subject will have a consistent and measurable physiologic response when he or she attempts to deceive the interviewer. The changes in BP, heart rate, respirations, and perspiration that are detected by polygraphy and interpreted by the examiner are controlled by the ANS (Table 1). There are a variety of diseases that are known to cause autonomic dysfunction (dysautonomia). Small fiber autonomic neuropathies often result in loss of sweating and altered heart rate and BP variation and can arise from many underlying conditions. Synucleinopathies, such as Parkinson disease, alter cardiovascular reflexes.14,16

Even diseases not commonly recognized as having a predominant clinical impact on ANS function can demonstrate measurable physiologic effect. For example, approximately 60% of patients with rheumatoid arthritis will have blunted cardiovagal baroreceptor responses and heart rate variability.17 ANS dysfunction is also a common sequela of alcoholism.18 Patients with diabetes mellitus often have an elevated resting heart rate and low heart rate variability due to dysregulated β-adrenergic activity.19 The impact of reduced baroreceptor response and reduced heart rate variability could impact the polygraph interpreter’s ability to discern responses using heart rate. Individuals with ANS dysfunction that causes blunted physiologic responses could have inconclusive or potentially worse false-negative polygraph results due to lack of variation between control and target questions.

To our knowledge, no study has been performed on the validity of polygraphy in patients with any form of dysautonomia. Additionally, a 2011 process and compliance study of the DoD polygraph program specifically recommended that “adjudicators would benefit from training in polygraph capabilities and limitations.”20 Although specific requirements vary from program to program, all programs accredited by the American Polygraph Association provide training in physiology, psychology, and standardization of test results.

Many commonly prescribed medications have effects on the ANS that could affect the results of a polygraph exam (Table 2). For example, β blockers reduce β adrenergic receptor activation in cardiac muscle and blood vessels, reducing heart rate, heart rate variability, cardiac contractility, and BP.21 This class of medication is prescribed for a variety of conditions, including congestive heart failure, hypertension, panic disorder, and posttraumatic stress disorder. Thus, a patient taking β blockers will have a blunted physiologic response to stress and have an increased likelihood of an inconclusive or false-negative polygraph exam.

Some over-the-counter medications also have effects on autonomic function. Sympathomimetics such as pseudoephedrine or antihistamines with anticholinergic activity like diphenhydramine can both increase heart rate and BP.22,23 Of the 10 most prescribed medications of 2016, 5 have direct effects on the ANS or the variables measured by the polygraph machine.24 An exhaustive list of medication effects on autonomic function is beyond the scope of this article.

A medication that may affect the results of a polygraph study that is of special interest to the DoD and military is mefloquine. Mefloquine is an antimalarial drug that has been used by military personnel deployed to malaria endemic regions.25 In murine models, mefloquine has been shown to disrupt autonomic and respiratory control in the central nervous system.26 The neuropsychiatric adverse effects of mefloquine are well documented and can last for years after exposure to the drug.27 Therefore, mefloquine could affect the results of a polygraph test through both direct toxic effects on the ANS as well as causing anxiety and depression, potentially affecting the subject’s response to questioning.

 

 

Alternative Modalities

Given the pitfalls inherent with external physiologic measures for lie detection, additional modalities that bypass measurement of ANS-governed responses have been sought. Indeed, the integration and combination of more comprehensive modalities has come to be named the forensic credibility assessment.

Functional MRI

Beginning in 1991, researchers began using fMRI to see real-time perfusion changes in areas of the cerebral cortex between times of rest and mental stimulation.26 This modality provides a noninvasive technique for viewing which specific parts of the brain are stimulated during activity. When someone is engaged in active deception, the dorsolateral prefrontal cortex has greater perfusion than when the patient is engaged in truth telling.28 Since fMRI involves imaging for evaluation of the central nervous system, it avoids the potential inaccuracies that can be seen in some subjects with autonomic irregularities. In fact, fMRI may have superior sensitivity and specificity for lie detection compared with that of conventional polygraphy.29

Significant limitations to the use of fMRI include the necessity of expensive specialized equipment and trained personnel to operate the MRI. Agencies that use polygraph examinations may be unwilling to make such an investment. Further, subjects with metallic foreign bodies or noncompatible medical implants cannot undergo the MRI procedure. Finally, there have been bioethical and legal concerns raised that measuring brain activity during interrogation may endanger “cognitive freedom” and may even be considered unreasonable search and seizure under the Fourth Amendment to the US Constitution.30 However, fMRI—like polygraphy—can only measure the difference between brain perfusion in 2 states. The idea of fMRI as “mind reading” is largely a misconception.31

Electroencephalography

Various EEG modalities have received increased interest for lie detection. In EEG, electrodes are used to measure the summation of a multitude of postsynaptic action potentials and the local voltage gradient they produce when cortical pyramidal neurons are fired in synchrony.32 These voltage gradients are detectable at the scalp surface. Shortly after the invention of EEG, it was observed that specific stimuli generated unique and predicable changes in EEG morphology. These event-related potentials (ERP) are detectable by scalp EEG shortly after the stimulus is given.33


The US Department of Defense (DoD) and law enforcement agencies around the country utilize polygraph as an aid in security screenings and interrogation. It is assumed that a person being interviewed will have a visceral response when attempting to deceive the interviewer, and that this response can be detected by measuring the change in vital signs between questions. By using vital signs as an indirect measurement of deception-induced stress, the polygraph machine may provide a false positive or negative result if a patient has an inherited or acquired condition that affects the autonomic nervous system (ANS).

 

A variety of diseases from alcohol use disorder to rheumatoid arthritis can affect the ANS, as can a multitude of commonly prescribed drugs. Although in their infancy, functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) deception detection techniques circumvent these issues. Dysautonomias may be an underappreciated cause of error in polygraph interpretation. Polygraph examiners and DoD agencies should be aware of the potential for these disorders to interfere with interpretation of results. In the near future, other modalities that do not measure autonomic variables may be utilized to avoid these pitfalls.

 

Polygraphy

Throughout history, humans have been interested in techniques and devices that can discern lies from the truth. Even in the ancient era, it was known that the act of lying had physiologic effects. In ancient Israel, if a woman accused of adultery should develop a swollen abdomen after drinking “waters of bitterness,” she was considered guilty of the crime, as described in Numbers 5:11-31. In Ancient China, those accused of fraud would be forced to hold dry rice in their mouths; if the expectorated rice was dry, the suspect was found guilty.1 We now know that catecholamines, particularly epinephrine, secreted during times of stress, cause relaxation of smooth muscle, leading to reduced bowel motility and dry mouth.2-4 However, most methods before the modern era were based more on superstition and chance rather than any sound physiologic premise.

When asked to discern the truth from falsehood based on their own perceptions, people correctly discern lies as false merely 47% of the time and truth as nondeceptive about 61% of the time.5 In short, unaided, we are very poor lie detectors. With advances in technology and in the understanding of human physiology came a sustained interest in devices that could aid in lie detection. Since it was known that vital signs such as blood pressure (BP), heart rate, and breathing could be affected by the stressful situation brought on by deception, quantifying and measuring those responses in an effort to detect lying became a goal. In 1881, the Italian criminologist Cesare Lombroso invented a glove that, when worn by a suspect, measured BP.6-8 Changes in BP also were the target variable of the systolic BP deception test invented by William M. Marston, PhD, in 1915.8 Marston also experimented with measurements of other variables, such as muscle tension.9 In 1921, John Larson invented the first modern polygraph machine.7

 

 

Procedures

Today’s polygraph builds on these techniques. A standard polygraph measures respiration, heart rate, BP, and sudomotor function (sweating). Respiration is measured via strain gauges strapped around the chest and abdomen that respond to chest expansion during inhalation. BP and pulse can be measured through a variety of means, including finger pulse measurement or sphygmomanometer.8

Perspiration is measured by skin electrical conductance. Human sweat contains a variety of cations and anions—mostly sodium and chloride, but also potassium, bicarbonate, and lactate. The presence of these electrolytes alters electrical conduction at the skin surface when sweat is released.10
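Because conductance is simply the reciprocal of resistance, a constant-voltage measurement reduces to Ohm's law. The following minimal sketch is our illustration of that arithmetic (the function name and values are hypothetical, not from the article):

```python
# Illustrative sketch: with a constant voltage applied across the skin,
# the measured current gives conductance directly via Ohm's law (G = I / V).

def skin_conductance_microsiemens(applied_voltage_v: float,
                                  measured_current_a: float) -> float:
    """Return skin conductance in microsiemens (uS).
    1 siemens = 1 ampere / volt; 1 uS = 1e-6 S."""
    if applied_voltage_v <= 0:
        raise ValueError("applied voltage must be positive")
    return (measured_current_a / applied_voltage_v) * 1e6

# Example: 0.5 V applied, 2.5 microamperes measured -> about 5 uS
print(skin_conductance_microsiemens(0.5, 2.5e-6))
```

As sweat is released, measured current rises at fixed voltage, so conductance increases; this is the quantity the polygraph channel records.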

The exact questioning procedure used to perform a polygraph examination can vary. The Comparison Question Test is most commonly used. In this format, the interview consists of questions that are relevant to the investigation at hand, interspersed with control questions. The examiner compares the changes in vital signs and skin conduction to the baseline measurements generated during the pretest interview and during control questions.8 Using these standardized techniques, some studies have shown accuracy rates between 83% and 95% in controlled settings.8 However, studies performed outside of the polygraph community have found very high false positive rates, up to 50% or greater.11
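The comparison logic described above can be sketched as a toy scoring function. This is purely illustrative: the function name, the z-score approach, and the threshold are our assumptions, not a published polygraph scoring standard.

```python
# Hypothetical sketch of Comparison Question Test scoring: responses to
# relevant questions are compared against the baseline distribution of
# responses to control questions.
from statistics import mean, stdev

def score_cqt(relevant: list[float], control: list[float],
              threshold: float = 1.0) -> str:
    """Classify based on how far the mean relevant-question response
    deviates from the control-question baseline (in control SDs)."""
    baseline, spread = mean(control), stdev(control)
    z = (mean(relevant) - baseline) / spread
    if z > threshold:
        return "deception indicated"
    if z < -threshold:
        return "no deception indicated"
    return "inconclusive"

# Markedly stronger responses to relevant questions than to controls:
print(score_cqt([8.1, 7.9, 8.4], [5.0, 5.5, 4.8]))  # deception indicated
```

Note how the design depends on variability in the control responses: if a subject's autonomic output is blunted and nearly flat, the comparison loses its discriminating power, which is exactly the concern raised later for patients with dysautonomia.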

The US Supreme Court has ruled that individual jurisdictions can decide whether or not to admit polygraph evidence in court, and the US Court of Appeals for the Eleventh Circuit has ruled that polygraph results are only admissible if both parties agree to it and are given sufficient notice.12,13 Currently, New Mexico is the only state that allows polygraph results to be used as evidence without a pretrial agreement; all other states either require such an agreement or forbid the results to be used as evidence.14

Although rarely used in federal and state courts as evidence, polygraphy is commonly used during investigations and in the hiring process of government agencies. DoD Directive 5210.48 and Instruction 5210.91 enable DoD investigative organizations (eg, Naval Criminal Investigative Service, National Security Agency, US Army Investigational Command) to use polygraph as an aid during investigations into suspected involvement with foreign intelligence, terrorism against the US, mishandling of classified documents, and other serious violations.15

The Role of the Physician in Polygraph Assessment

Only rarely will the physician be called upon to provide information regarding an individual's medical condition or related medication use and the effect of these on polygraph results. In such cases, however, the physician must remember the primary fiduciary duty to the patient. Disclosure of medical conditions cannot be made without the patient's consent, save in very specific situations (eg, Commanding Officer Inquiry, Tarasoff Duty to Protect, etc). It is the polygraph examiner's responsibility to be aware of potential confounders in a particular examination.10

Physicians in administrative or supervisory positions can have a responsibility to advise security and other officials regarding the fitness of candidates for certain duties when there is no physician-patient relationship. This may include an individual's ability to undergo polygraph examination and the validity of such results. However, when a physician-patient relationship is involved, care must be taken to ensure that the patient understands that the relationship is protected both by professional standards and by law and that no information will be shared without the patient's authorization (aside from those rare exceptions provided by law). Often, a straightforward explanation to the patient of the medical condition and any medication's potential effects on polygraph results will be sufficient, allowing the patient to report as much as is deemed necessary to the polygraph examiner.

 

 

Polygraphy Pitfalls

Polygraphy presupposes that the subject will have a consistent and measurable physiologic response when he or she attempts to deceive the interviewer. The changes in BP, heart rate, respirations, and perspiration that are detected by polygraphy and interpreted by the examiner are controlled by the ANS (Table 1). There are a variety of diseases that are known to cause autonomic dysfunction (dysautonomia). Small fiber autonomic neuropathies often result in loss of sweating and altered heart rate and BP variation and can arise from many underlying conditions. Synucleinopathies, such as Parkinson disease, alter cardiovascular reflexes.14,16

Even diseases not commonly recognized as having a predominant clinical impact on ANS function can demonstrate measurable physiologic effects. For example, approximately 60% of patients with rheumatoid arthritis will have blunted cardiovagal baroreceptor responses and heart rate variability.17 ANS dysfunction is also a common sequela of alcoholism.18 Patients with diabetes mellitus often have an elevated resting heart rate and low heart rate variability due to dysregulated β-adrenergic activity.19 Reduced baroreceptor response and reduced heart rate variability could impair the examiner's ability to discern responses using heart rate. Individuals with ANS dysfunction that blunts physiologic responses could have inconclusive or, potentially worse, false-negative polygraph results due to lack of variation between control and target questions.
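Heart rate variability, mentioned above, is typically quantified from the series of successive R-R intervals. The sketch below (our illustration; the data are synthetic) computes two standard time-domain indices; blunted autonomic function shows up as low values on both.

```python
# Illustrative HRV indices from R-R intervals in milliseconds:
# SDNN = standard deviation of the intervals,
# RMSSD = root mean square of successive interval differences.
from math import sqrt
from statistics import pstdev

def sdnn(rr_ms: list[float]) -> float:
    return pstdev(rr_ms)

def rmssd(rr_ms: list[float]) -> float:
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

normal = [800, 860, 790, 845, 810]   # healthy, varied intervals
blunted = [800, 802, 799, 801, 800]  # near-constant intervals (low HRV)
print(sdnn(normal) > sdnn(blunted))    # True
print(rmssd(normal) > rmssd(blunted))  # True
```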

To our knowledge, no study has been performed on the validity of polygraphy in patients with any form of dysautonomia. Additionally, a 2011 process and compliance study of the DoD polygraph program specifically recommended that “adjudicators would benefit from training in polygraph capabilities and limitations.”20 Although specific requirements vary from program to program, all programs accredited by the American Polygraph Association provide training in physiology, psychology, and standardization of test results.

Many commonly prescribed medications have effects on the ANS that could affect the results of a polygraph exam (Table 2). For example, β blockers reduce β adrenergic receptor activation in cardiac muscle and blood vessels, reducing heart rate, heart rate variability, cardiac contractility, and BP.21 This class of medication is prescribed for a variety of conditions, including congestive heart failure, hypertension, panic disorder, and posttraumatic stress disorder. Thus, a patient taking β blockers will have a blunted physiologic response to stress and have an increased likelihood of an inconclusive or false-negative polygraph exam.

Some over-the-counter medications also have effects on autonomic function. Sympathomimetics such as pseudoephedrine and antihistamines with anticholinergic activity, such as diphenhydramine, can increase both heart rate and BP.22,23 Of the 10 most prescribed medications of 2016, 5 have direct effects on the ANS or the variables measured by the polygraph machine.24 An exhaustive list of medication effects on autonomic function is beyond the scope of this article.

A medication that may affect the results of a polygraph study and is of special interest to the DoD and military is mefloquine. Mefloquine is an antimalarial drug that has been used by military personnel deployed to malaria-endemic regions.25 In murine models, mefloquine has been shown to disrupt autonomic and respiratory control in the central nervous system.26 The neuropsychiatric adverse effects of mefloquine are well documented and can last for years after exposure to the drug.27 Therefore, mefloquine could affect the results of a polygraph test both through direct toxic effects on the ANS and by causing anxiety and depression, potentially affecting the subject's response to questioning.

 

 

Alternative Modalities

Given the pitfalls inherent in external physiologic measures for lie detection, additional modalities that bypass measurement of ANS-governed responses have been sought. Indeed, the integration and combination of more comprehensive modalities has come to be known as forensic credibility assessment.

Functional MRI

In 1991, researchers began using fMRI to observe real-time perfusion changes in areas of the cerebral cortex between times of rest and mental stimulation.26 This modality provides a noninvasive technique for viewing which specific parts of the brain are stimulated during activity. When someone is engaged in active deception, the dorsolateral prefrontal cortex has greater perfusion than when that person is engaged in truth telling.28 Since fMRI directly images the central nervous system, it avoids the potential inaccuracies that can be seen in subjects with autonomic irregularities. In fact, fMRI may have superior sensitivity and specificity for lie detection compared with conventional polygraphy.29

Significant limitations to the use of fMRI include the necessity of expensive specialized equipment and trained personnel to operate the MRI. Agencies that use polygraph examinations may be unwilling to make such an investment. Further, subjects with metallic foreign bodies or noncompatible medical implants cannot undergo the MRI procedure. Finally, there have been bioethical and legal concerns raised that measuring brain activity during interrogation may endanger “cognitive freedom” and may even be considered unreasonable search and seizure under the Fourth Amendment to the US Constitution.30 However, fMRI—like polygraphy—can only measure the difference between brain perfusion in 2 states. The idea of fMRI as “mind reading” is largely a misconception.31

Electroencephalography

Various EEG modalities have received increased interest for lie detection. In EEG, electrodes are used to measure the summation of a multitude of postsynaptic potentials and the local voltage gradient they produce when cortical pyramidal neurons fire in synchrony.32 These voltage gradients are detectable at the scalp surface. Shortly after the invention of EEG, it was observed that specific stimuli generated unique and predictable changes in EEG morphology. These event-related potentials (ERPs) are detectable by scalp EEG shortly after the stimulus is given.33

ERPs can be elicited by a multitude of sensory stimuli, have a predictable and reproducible morphology, and are believed to be a psychophysiologic correlate of mental processing of stimuli.34 The P300 is an ERP characterized by a positive change in voltage occurring 300 milliseconds after a stimulus. It is associated with stimulus processing and categorization.35 Since deception is a complex cognitive process involving recognizing pertinent stimuli and inventing false responses to them, it was theorized that the detection of a P300 ERP during an interview would mean the subject truly recognizes the stimulus and is denying such knowledge. Early studies of P300 had variable accuracy for lie detection, roughly 40% to 80%, depending on the study. Moreover, the rate of false negatives increased if subjects were coached on countermeasures, such as increasing the significance of distractor data or counting backward by 7s.36,37 Later studies have found ways of minimizing these issues, such as detection of a P900 ERP (a cortical potential at 900 milliseconds) that can be seen when subjects are attempting countermeasures.38
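Because single-trial EEG is dominated by noise, P300 detection relies on averaging many stimulus-locked epochs so that random activity cancels while the time-locked potential survives. The toy sketch below (our illustration on synthetic data, not a validated protocol; the window and threshold are assumptions) shows that averaging step and a simple peak check near 300 ms.

```python
# Toy P300 detection: average stimulus-locked epochs, then look for a
# positive peak above threshold in a window around 300 ms post-stimulus.
import numpy as np

def detect_p300(epochs: np.ndarray, fs: int,
                window_ms=(250, 500), threshold_uv: float = 5.0) -> bool:
    """epochs: (n_trials, n_samples) stimulus-locked EEG in uV, sample 0 at
    stimulus onset. True if the averaged waveform peaks above threshold."""
    avg = epochs.mean(axis=0)            # averaging cancels unlocked noise
    lo = int(window_ms[0] / 1000 * fs)
    hi = int(window_ms[1] / 1000 * fs)
    return float(avg[lo:hi].max()) > threshold_uv

# Synthetic example: 20 noisy trials with a 10 uV bump at ~300 ms, fs = 250 Hz
fs, n = 250, 250                         # 1 s of data per trial
t = np.arange(n) / fs
bump = 10 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))
rng = np.random.default_rng(0)
epochs = bump + rng.normal(0, 4, size=(20, n))
print(detect_p300(epochs, fs))  # True for this synthetic signal
```

With 20 trials the averaged noise shrinks by a factor of about √20, which is why the 10 uV component stands out even though individual trials look like noise.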

Another technique for increasing accuracy in EEG-mediated lie detection is measurement of the multifaceted electroencephalographic response (MER), which involves a more detailed analysis of multiple EEG electrode sites and how their signals change over time, using both visual comparison of multiple trials and bootstrap analysis.37 In particular, memory- and encoding-related multifaceted electroencephalographic response (MERMER) testing, which couples the P300 with an electrically negative impulse recorded at the frontal lobe and phasic changes in the global EEG, had superior accuracy to P300 alone.37
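The bootstrap analysis mentioned above can be illustrated with a minimal sketch (our simplification, not the published MER protocol): resample single-trial amplitudes with replacement and estimate how often the probe-stimulus mean exceeds the irrelevant-stimulus mean.

```python
# Hedged bootstrap sketch: the amplitudes and names are hypothetical.
import random

def bootstrap_prob(probe, irrelevant, n_boot=2000, seed=1):
    """Fraction of bootstrap resamples in which the mean probe amplitude
    exceeds the mean irrelevant amplitude (a confidence-like statistic)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        p = [rng.choice(probe) for _ in probe]           # resample w/ replacement
        q = [rng.choice(irrelevant) for _ in irrelevant]
        if sum(p) / len(p) > sum(q) / len(q):
            wins += 1
    return wins / n_boot

probe = [9.1, 8.7, 10.2, 9.5, 8.9, 9.8]       # uV, hypothetical P300 amplitudes
irrelevant = [4.2, 5.1, 3.8, 4.9, 4.5, 4.0]
print(bootstrap_prob(probe, irrelevant))  # 1.0: distributions do not overlap
```

A value near 1.0 suggests the subject's brain treated the probe stimuli as meaningful; values near 0.5 are uninformative.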

The benefits of EEG compared with fMRI include large reductions in cost, space, and restrictions for use in some individuals (EEG is safe for virtually all patients, including those with metallic foreign bodies). However, like fMRI, EEG still requires trained personnel to operate and interpret, and it has yet to be tested outside of the laboratory.

 

 

Conclusion

The ability to detect deception is an important factor in determining security risk and adjudication of legal proceedings, but untrained persons are surprisingly poor at discerning truth from lies. The polygraph has been used by law enforcement and government agencies for decades to aid in interrogation and the screening of employees for security clearances and other types of access. However, results are vulnerable to inaccuracies in subjects with autonomic disorders and may be confounded by multiple medications. While emerging technologies such as fMRI and EEG may allow superior accuracy by bypassing ANS-based physiologic outputs, the polygraph examiner and the physician must be aware of the effect of autonomic dysfunction and of the medications that affect the ANS. This is particularly true within military medicine, as many patients within this population are subject to polygraph examination.

References

1. Ford EB. Lie detection: historical, neuropsychiatric and legal dimensions. Int J Law Psychiatry. 2006;29(3):159-177.

2. Ohrn PG. Catecholamine infusion and gastrointestinal propulsion in the rat. Acta Chir Scand Suppl. 1979(461):43-52.

3. Sakamoto H. The study of catecholamine, acetylcholine and bradykinin in buccal circulation in dogs. Kurume Med J. 1979;26(2):153-162.

4. Bond CF Jr, Depaulo BM. Accuracy of deception judgments. Pers Soc Psychol Rev. 2006;10(3):214-234.

5. Vicianova M. Historical techniques of lie detection. Eur J Psychol. 2015;11(3):522-534.

6. Matté JA. Forensic Psychophysiology Using the Polygraph: Scientific Truth Verification, Lie Detection. Williamsville, NY: JAM Publications; 2012.

7. Segrave K. Lie Detectors: A Social History. Jefferson, NC: McFarland & Company; 2004.

8. Nelson R. Scientific basis for polygraph testing. Polygraph. 2015;44(1):28-61.

9. Boucsein W. Electrodermal Activity. New York, NY: Springer Publishing; 2012.

10. US Congress, Office of Technology Assessment. Scientific validity of polygraph testing: a research review and evaluation. https://ota.fas.org/reports/8320.pdf. Published 1983. Accessed June 12, 2019.

11. United States v Scheffer, 523 US 303 (1998).

12. United States v Piccinonna, 729 F Supp 1336 (SD Fl 1990).

13. Fridman DS, Janoe JS. The state of judicial gatekeeping in New Mexico. https://cyber.harvard.edu/daubert/nm.htm. Updated April 17, 1999. Accessed May 20, 2019.

14. Gibbons CH. Small fiber neuropathies. Continuum (Minneap Minn). 2014;20(5 Peripheral Nervous System Disorders):1398-1412.

15. US Department of Defense. Directive 5210.48: Credibility assessment (CA) program. https://fas.org/irp/doddir/dod/d5210_48.pdf. Updated February 12, 2018. Accessed May 30, 2019.

16. Postuma RB, Gagnon JF, Pelletier A, Montplaisir J. Prodromal autonomic symptoms and signs in Parkinson’s disease and dementia with Lewy bodies. Mov Disord. 2013;28(5):597-604.

17. Adlan AM, Lip GY, Paton JF, Kitas GD, Fisher JP. Autonomic function and rheumatoid arthritis: a systematic review. Semin Arthritis Rheum. 2014;44(3):283-304.

18. Di Ciaula A, Grattagliano I, Portincasa P. Chronic alcoholics retain dyspeptic symptoms, pan-enteric dysmotility, and autonomic neuropathy before and after abstinence. J Dig Dis. 2016;17(11):735-746.

19. Thaung HA, Baldi JC, Wang H, et al. Increased efferent cardiac sympathetic nerve activity and defective intrinsic heart rate regulation in type 2 diabetes. Diabetes. 2015;64(8):2944-2956.

20. US Department of Defense, Office of the Undersecretary of Defense for Intelligence. Department of Defense polygraph program process and compliance study: study report. https://fas.org/sgp/othergov/polygraph/dod-poly.pdf. Published December 19, 2011. Accessed May 20, 2019.

21. Ladage D, Schwinger RH, Brixius K. Cardio-selective beta-blocker: pharmacological evidence and their influence on exercise capacity. Cardiovasc Ther. 2013;31(2):76-83.

22. D’Souza RS, Mercogliano C, Ojukwu E, et al. Effects of prophylactic anticholinergic medications to decrease extrapyramidal side effects in patients taking acute antiemetic drugs: a systematic review and meta-analysis. Emerg Med J. 2018;35:325-331.

23. Gheorghiev MD, Hosseini F, Moran J, Cooper CE. Effects of pseudoephedrine on parameters affecting exercise performance: a meta-analysis. Sports Med Open. 2018;4(1):44.

24. Frellick M. Top-selling, top-prescribed drugs for 2016. https://www.medscape.com/viewarticle/886404. Published October 2, 2017. Accessed May 20, 2019.

25. Lall DM, Dutschmann M, Deuchars J, Deuchars S. The anti-malarial drug mefloquine disrupts central autonomic and respiratory control in the working heart brainstem preparation of the rat. J Biomed Sci. 2012;19:103.

26. Ritchie EC, Block J, Nevin RL. Psychiatric side effects of mefloquine: applications to forensic psychiatry. J Am Acad Psychiatry Law. 2013;41(2):224-235.

27. Belliveau JW, Kennedy DN Jr, McKinstry RC, et al. Functional mapping of the human visual cortex by magnetic resonance imaging. Science. 1991;254(5032):716-719.

28. Ito A, Abe N, Fujii T, et al. The contribution of the dorsolateral prefrontal cortex to the preparation for deception and truth-telling. Brain Res. 2012;1464:43-52.

29. Langleben DD, Hakun JG, Seelig D. Polygraphy and functional magnetic resonance imaging in lie detection: a controlled blind comparison using the concealed information test. J Clin Psychiatry. 2016;77(10):1372-1380.

30. Boire RG. Searching the brain: the Fourth Amendment implications of brain-based deception detection devices. Am J Bioeth. 2005;5(2):62-63; discussion W5.

31. Langleben DD. Detection of deception with fMRI: Are we there yet? Legal Criminological Psychol. 2008;13(1):1-9.

32. Marcuse LV, Fields MC, Yoo J. Rowan’s Primer of EEG. 2nd ed. Edinburgh, Scotland, United Kingdom: Elsevier; 2016.

33. Farwell LA, Donchin E. The truth will out: interrogative polygraphy (“lie detection”) with event-related brain potentials. Psychophysiology. 1991;28(5):531-547.

34. Sur S, Sinha VK. Event-related potential: an overview. Ind Psychiatry J. 2009;18(1):70-73.

35. Polich J. Updating P300: an integrative theory of P3a and P3b. Clin Neurophysiol. 2007;118(10):2128-2148.

36. Mertens R, Allen JJB. The role of psychophysiology in forensic assessments: deception detection, ERPs, and virtual reality mock crime scenarios. Psychophysiology. 2008;45(2):286-298.

37. Rosenfeld JP, Labkovsky E. New P300-based protocol to detect concealed information: resistance to mental countermeasures against only half the irrelevant stimuli and a possible ERP indicator of countermeasures. Psychophysiology. 2010;47(6):1002-1010.

38. Farwell LA, Smith SS. Using brain MERMER testing to detect knowledge despite efforts to conceal. J Forensic Sci. 2001;46(1):135-143.


Issue
Federal Practitioner - 36(7)a
Page Number
316-321

Enoxaparin vs Continuous Heparin for Periprocedural Bridging in Patients With Atrial Fibrillation and Advanced Chronic Kidney Disease

Article Type
Changed
Mon, 07/22/2019 - 14:16
Bridging with enoxaparin rather than heparin has the potential to reduce the length of hospital stay, incidence of nosocomial infections, and cost of hospitalization.

There has been long-standing controversy over the use of parenteral anticoagulation for perioperative bridging in patients with atrial fibrillation (AF) undergoing elective surgery.1 The decision to bridge depends on the patient’s risk of thromboembolic complications and susceptibility to bleeding.1 The BRIDGE trial showed noninferiority in the rate of stroke and embolism events between low molecular weight heparins (LMWHs) and no perioperative bridging.2 However, according to the American College of Chest Physicians (CHEST) 2012 guidelines, patients in the BRIDGE trial would be deemed at low risk for thromboembolic events, as indicated by a mean CHADS2 (congestive heart failure [CHF], hypertension, age, diabetes mellitus, and stroke/transient ischemic attack) score of 2.3. Also, the BRIDGE study and many others excluded patients with advanced forms of chronic kidney disease (CKD).2,3

Similar to patients with AF, patients with advanced CKD (ACKD, stage 4 and 5 CKD) have an increased risk of stroke and venous thromboembolism (VTE).4,5 Patients with AF and ACKD have not been adequately studied for perioperative anticoagulation bridging outcomes. Although unfractionated heparin (UFH) is preferred over LMWH in ACKD patients, enoxaparin can be used in this population.1,6 Enoxaparin 1 mg/kg once daily is approved by the US Food and Drug Administration (FDA) for use in patients with severe renal insufficiency, defined as creatinine clearance (CrCl) < 30 mL/min. This dosage adjustment followed studies with enoxaparin 1 mg/kg twice daily that showed a significant increase in major and minor bleeding in patients with CrCl < 30 mL/min vs patients with CrCl > 30 mL/min.7 When comparing the myocardial infarction (MI) outcomes of patients with severe renal insufficiency in the ExTRACT-TIMI 25 trial, enoxaparin 1 mg/kg once daily had no significant difference in nonfatal major bleeding vs UFH.8 In patients without renal impairment (no documentation of kidney disease), bridging therapy with LMWH was completed within 24 hours of hospital stay more often than with UFH, with similar rates of VTEs and major bleeding.9 In addition to the option of outpatient administration, enoxaparin has a more predictable pharmacokinetic profile, allowing for less monitoring and a lower incidence of heparin-induced thrombocytopenia (HIT) vs UFH.6
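The CrCl cutoff above is conventionally estimated with the Cockcroft-Gault equation. The sketch below is illustrative only (the helper names and the patient values are our assumptions, and this is not clinical software); it pairs the estimate with the renal dosing rule described above.

```python
# Illustrative Cockcroft-Gault CrCl estimate and the enoxaparin renal
# dosing rule described in the text (1 mg/kg once daily if CrCl < 30
# mL/min, otherwise 1 mg/kg twice daily).

def crcl_cockcroft_gault(age_years: float, weight_kg: float,
                         scr_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance in mL/min."""
    crcl = (140 - age_years) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def enoxaparin_bridge_dose(weight_kg: float, crcl_ml_min: float) -> str:
    if crcl_ml_min < 30:
        return f"{round(weight_kg)} mg SC once daily"
    return f"{round(weight_kg)} mg SC twice daily"

# Hypothetical patient: 75-year-old, 70 kg man, serum creatinine 2.8 mg/dL
crcl = crcl_cockcroft_gault(75, 70, 2.8, female=False)  # ~22.6 mL/min
print(enoxaparin_bridge_dose(70, crcl))  # 70 mg SC once daily
```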

The Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC) in Houston, Texas, is one of the largest US Department of Veterans Affairs (VA) hospitals, managing > 150,000 veterans in Southeast Texas and other southern states. As a referral center for traveling patients, it is crucial that MEDVAMC decrease hospital length of stay (LOS) to increase capacity for incoming patients. Reducing LOS also reduces costs and may be associated with a lower incidence of nosocomial infections. Because of its significance to this facility, hospital LOS is an appropriate primary outcome for this study.

To our knowledge, bridging outcomes between LMWH and UFH in patients with AF and ACKD have never been studied. We hypothesized that using enoxaparin instead of UFH for periprocedural management would reduce hospital LOS, leading to a lower economic burden and a lower incidence of nosocomial infections, with no significant differences in major and minor bleeding or thromboembolic complications.10

Methods

This study was a single-center, retrospective chart review of adult patients from January 2008 to September 2017. The review was conducted at MEDVAMC and was approved by the research and development committee and by the Baylor College of Medicine Institutional Review Board. Formal consent was not required.

Included patients were aged ≥ 18 years with diagnoses of AF or atrial flutter and ACKD, defined by an estimated glomerular filtration rate (eGFR) of < 30 mL/min/1.73 m2 calculated with the Modification of Diet in Renal Disease Study (MDRD) equation.11 Patients must have previously been on warfarin and required temporary interruption of warfarin for an elective procedure. During the interruption of warfarin therapy, patients were required to receive periprocedural anticoagulation with subcutaneous (SC) enoxaparin 1 mg/kg daily or continuous IV heparin per the MEDVAMC heparin protocol. Patients were excluded if they had experienced major bleeding in the 6 weeks prior to the elective procedure, had current thrombocytopenia (platelet count < 100 × 109/L), or had a history of HIT or a heparin allergy.
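
The eGFR screening criterion can be illustrated with the 4-variable MDRD equation; this sketch assumes IDMS-traceable serum creatinine (coefficient 175) and is for illustration only:

```python
def mdrd_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """4-variable MDRD eGFR in mL/min/1.73 m^2.
    Assumes IDMS-traceable serum creatinine (coefficient 175)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# A 71-year-old male with SCr 3.0 mg/dL meets the eGFR < 30 criterion
print(mdrd_egfr(3.0, 71, female=False, black=False) < 30)
```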

This patient population was identified using TheraDoc Clinical Surveillance Software System (Charlotte, NC), which has prebuilt alert reviews for anticoagulation medications, including enoxaparin and heparin. An alert for patients on enoxaparin with serum creatinine (SCr) > 1.5 mg/dL was used to screen patients who met the inclusion criteria. A second alert identified patients on heparin. The VA Computerized Patient Record System (CPRS) was used to collect patient data.

Economic Analysis

An economic analysis was conducted using data from the VA Managerial Cost Accounting Reports. Data on the national average cost per bed day was used for the purpose of extrapolating this information to multiple VA institutions.12 National average cost per day was determined by dividing the total cost by the number of bed days for the identified treating specialty during the fiscal period of 2018. Average cost per day data included costs for bed day, surgery, radiology services, laboratory tests, pharmacy services, treatment location (ie, intensive care units [ICUs]) and all other costs associated with an inpatient stay. A cost analysis was performed using this average cost per bed day and the mean LOS between enoxaparin and UFH for each treating specialty. The major outcome of the cost analysis was the total cost per average inpatient stay. The national average cost per bed day for each treating specialty was multiplied by the average LOS found for each treating specialty in this study; the sum of all the average costs per inpatient stay for the treating specialties resulted in the total cost per average inpatient stay. Permission to use these data was granted by the Pharmacy and Critical Care Services at MEDVAMC.
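
The cost computation described above amounts to a sum-product over treating specialties. A sketch with made-up per-day costs and LOS values (the actual Managerial Cost Accounting figures are not reproduced here):

```python
# Hypothetical illustration of the cost calculation described above;
# per-day costs and LOS values are invented, not the actual MCA figures.
avg_cost_per_day = {"thoracic": 5000.0, "vascular": 4500.0, "medicine": 3000.0}
mean_los_days   = {"thoracic": 4.7,    "vascular": 2.0,    "medicine": 3.5}

# total cost per average inpatient stay = sum over treating specialties of
# (national average cost per bed day) x (mean LOS in that specialty)
total_cost = sum(avg_cost_per_day[s] * mean_los_days[s] for s in avg_cost_per_day)
print(round(total_cost, 2))
```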

Patient Demographics and Characteristics

Data were collected on patient demographics (Table 1). Nosocomial infections, stroke/transient ischemic attack, MI, VTE, major and minor bleeding, and death are defined in Table 2.

The primary outcome of the study was hospital LOS. With 90% power and α = .05, a study population of 114 patients (1:1 enrollment ratio) was required to detect a statistically significant difference in hospital LOS. This sample size was calculated using the mean hospital LOS (the primary objective) in the REGIMEN registry for LMWH (4.6 days) and UFH (10.3 days).9 To our knowledge, the incidence of nosocomial infections (a secondary outcome) has not been studied in this patient population; therefore, there was no basis for estimating a sample size to detect a difference in this outcome, and the goal was instead to include as many patients as possible. Because of an expected high exclusion rate, 504 patients were reviewed to target a sample size of 120 patients. Due to the single-center nature of this review, the secondary outcomes of thromboembolic complications and major and minor bleeding were expected to be underpowered.
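
The quoted sample size can be illustrated with the standard normal-approximation formula for comparing two means. The pooled SD below is an assumed value, back-solved to reproduce the quoted total of 114, since the registry SD is not given here:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(mean1, mean2, sd, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for detecting a
    difference between two means with a 1:1 allocation ratio."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # ~1.28 for 90% power
    delta = abs(mean1 - mean2)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# REGIMEN registry means: 4.6 d (LMWH) vs 10.3 d (UFH). The SD of 9.35 d
# is an assumed value chosen only to illustrate the calculation.
print(2 * n_per_group(4.6, 10.3, sd=9.35))  # total N for 1:1 enrollment
```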

The final analysis compared the enoxaparin arm with the UFH arm. Univariate differences between the treatment groups were compared using the Fisher exact test for categorical variables. Demographic data and other continuous variables were analyzed with an unpaired t test to compare means between the 2 arms. Outcomes and characteristics were deemed statistically significant when P < .05. All P values reported were 2-tailed with a 95% CI. No statistical analysis was performed for the cost differences (based on LOS per treating specialty) between the 2 treatment arms. Statistical analyses were completed using GraphPad Software (San Diego, CA).

Results

In total, 50 patients were analyzed in the study. Thirty-six patients were bridged with IV UFH at a concentration of 25,000 U/250 mL with an initial infusion rate of 12 U/kg/h. In the other arm, 14 patients were anticoagulated with renally dosed enoxaparin 1 mg/kg/d with an average daily dose of 89.3 mg; the mean actual body weight in this group was 90.9 kg (consistent with the mean enoxaparin daily dose). Physicians on the primary team decided which parenteral anticoagulant to use. The difference in mean duration of inpatient parenteral anticoagulation between the groups was not statistically significant: 7.1 days with enoxaparin vs 9.6 days with UFH (P = .19). Patients in the enoxaparin arm were off warfarin therapy for an average of 6.0 days vs 7.5 days for the UFH group (P = .29). The duration of outpatient anticoagulation with enoxaparin was not analyzed in this study.

Patient and Procedure Characteristics

All patients had AF or atrial flutter, with 86% (n = 43) having a CHADS2 > 2 and 48% (n = 29) having a CHA2DS2VASc > 4. Overall, the mean age was 71.3 years, with similar ethnicity distributions between arms. Patients had multiple comorbidities, as shown by a mean Charlson Comorbidity Index (CCI) of 7.7, and an increased risk of bleeding, as evidenced by 98% (n = 48) of patients having a HAS-BLED score of ≥ 3. A greater percentage of patients bridged with enoxaparin had DM, a history of stroke and MI, and a heart valve, whereas UFH patients were more likely to be in stage 5 CKD (eGFR < 15 mL/min/1.73 m2) with a significantly lower mean eGFR (16.76 vs 22.64 mL/min/1.73 m2, P = .03). Furthermore, there were more patients on hemodialysis in the UFH arm (50%) than in the enoxaparin arm (21%), and the mean CrCl was lower with UFH (20.1 mL/min) than with enoxaparin (24.9 mL/min); however, the differences in hemodialysis use and mean CrCl were not statistically significant. No patients in this review were on peritoneal dialysis.

Procedure Characteristics

The average Revised Cardiac Risk Index (RCRI) score was about 3, placing these patients in Class IV risk (11% risk of a perioperative cardiac event) (Table 3). Nineteen patients (38%) underwent major surgery, and all but 1 of the surgeries (major or minor) were invasive. The average length of surgery was 1.2 hours, and cardiothoracic procedures were the most common (38%). Two of 14 patients (14%) on enoxaparin were able to have surgery as outpatients, whereas no patients on UFH did; the procedures completed for these 2 patients were a colostomy (minor surgery) and an arteriovenous graft repair (major surgery). There were no statistically significant differences in types of procedures between the 2 arms.

Outcomes

The primary outcome of this study, hospital LOS, differed significantly in the enoxaparin arm vs UFH: 10.2 days vs 17.5 days, P = .04 (Table 4). The time-to-discharge from initiation of parenteral anticoagulation was significantly reduced with enoxaparin (7.1 days) compared with UFH (11.9 days); P = .04. Although also reduced in the enoxaparin arm, ICU LOS did not show statistical significance (1.1 days vs 4.0 days, P = .09).

Overall, 36% (n = 18) of patients in this study acquired an infection during hospitalization for elective surgery; the most common microorganism and site of infection were Enterococcus species and the urinary tract, respectively (Table 5). Of the patients in the UFH group, 44% (n = 16) had a nosocomial infection vs 14% (n = 2) of enoxaparin-bridged patients, a difference approaching significance (P = .056). Both infections in the enoxaparin group originated in the urinary tract; 1 of these patients had undergone a urologic procedure.
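
As a check on this comparison, the two-sided Fisher exact test named in the methods can be sketched from first principles (hypergeometric enumeration over the 2 × 2 table), using the infection counts reported above:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table whose probability
    does not exceed that of the observed table (fixed margins)."""
    n, r1, c1 = a + b + c + d, a + b, a + c

    def prob(x):  # P(cell (1,1) == x) under fixed margins
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Nosocomial infections reported in this study: UFH 16/36 vs enoxaparin 2/14
print(round(fisher_exact_two_sided(16, 20, 2, 12), 3))
```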

Major bleeding occurred in 7% (n = 1) of enoxaparin patients vs 22% (n = 8) in the UFH arm, a difference that was not statistically significant (P = .41). Minor bleeding was similar between the enoxaparin and UFH arms (14% vs 19%, P = .99). Thromboembolic complications were numerically lower in the enoxaparin group (0%) than in the UFH group (11%), with VTE (n = 4) being the only component of the composite outcome to occur (P = .57). There were 4 deaths within 30 days posthospitalization, all in the UFH group (P = .57). Due to the small sample size, the study was not powered to detect statistically significant differences in these bleeding and thrombotic outcomes.

Economic Analysis

The average cost differences (Table 6) of hospitalization between enoxaparin and UFH were calculated by multiplying the average LOS per treating specialty by the national average Managerial Cost Accounting cost for an inpatient bed day in 2018.12 The treating specialty with the longest average LOS in the enoxaparin arm was thoracic (4.7 days). The UFH arm also had a long average LOS in the thoracic specialty (6.4 days); however, the vascular specialty (6.7 days) had the longest average LOS in that group. With a mean LOS of 10.2 days stratified by treating specialty, the total cost per average inpatient stay in the enoxaparin arm was $51,710; in the UFH arm, the total cost per average inpatient stay was $92,848.
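
The headline cost difference cited in the Discussion follows directly from these two stay costs:

```python
# Total cost per average inpatient stay, as reported above
enoxaparin_stay_cost = 51_710
ufh_stay_cost = 92_848

savings = ufh_stay_cost - enoxaparin_stay_cost
pct_reduction = 100 * savings / ufh_stay_cost
print(savings, round(pct_reduction))  # 41138 and ~44%
```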

Monitoring

Anti-factor Xa levels for LMWH monitoring were not analyzed in this study due to a lack of collected values; only 1 patient had an anti-factor Xa level checked during the study period. UFH infusion rates were adjusted based on aPTT levels collected per the MEDVAMC inpatient anticoagulation protocol. The average percentage of aPTT values in the therapeutic range was 46.3%, and the mean (SD) time-to-therapeutic range was about 2.4 (1.3) days. Due to this study’s retrospective nature, documentation of UFH infusion rates was inconsistent; for this reason, these values were not analyzed further.

Discussion

In 2017, the American College of Cardiology published the Periprocedural Anticoagulation Expert Consensus Pathway, which recommends that patients with AF at low risk (CHA2DS2VASc 1-4) of thromboembolism not be bridged (unless the patient had a prior VTE or stroke/TIA).13 Nearly half the patients in this study were classified as moderate-to-high thrombotic risk, as evidenced by a CHA2DS2VASc > 4 and a mean score of 4.8. Because this retrospective study spanned 2008 to 2017, many clinicians may have referenced the 2008 CHEST antithrombotic guidelines when deciding to bridge; these guidelines and the previous MEDVAMC anticoagulation protocol recommend bridging patients with AF and CHADS2 > 2 (moderate-to-high thrombotic risk), criteria that all but 1 of the patients in this study met.1,14 In contrast to the landmark BRIDGE trial, the mean CHADS2 score in this study was 3.6, indicating a patient population at increased risk of stroke and embolism.

In addition to thromboembolic complications, patients in the current study were at increased risk of clinically relevant bleeding, with a mean HAS-BLED score of 4.1 and nearly all patients having a score ≥ 3. The complexity of the veteran population also was reflected in this study’s mean CCI (7.7) and RCRI (3.0), indicating a 0% estimated 10-year survival and an 11% risk of a perioperative cardiac event, respectively. A mean CCI of 7.7 is associated with a 13.3 relative risk of death within 6 years postoperation.15 All patients had a diagnosis of hypertension, and in > 75% this diagnosis was complicated by DM. This population of patients with extensive cardiovascular disease or risk makes for a clinically relevant application to patients who would require periprocedural bridging.
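
For reference, HAS-BLED assigns one point per risk factor (hypertension; abnormal renal function; abnormal liver function; stroke; bleeding history; labile INR; age > 65 years; drug use; alcohol use). A minimal sketch, illustrative only and not part of the study’s methods:

```python
def has_bled(htn, abn_renal, abn_liver, stroke, bleeding,
             labile_inr, age_over_65, drugs, alcohol) -> int:
    """HAS-BLED: one point per risk factor present (max 9);
    a score >= 3 suggests elevated bleeding risk."""
    return sum([htn, abn_renal, abn_liver, stroke, bleeding,
                labile_inr, age_over_65, drugs, alcohol])

# e.g., hypertensive 71-year-old with stage 4 CKD and labile INR -> 4
print(has_bled(True, True, False, False, False, True, True, False, False))
```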

Another strength of this study is that all baseline characteristics, apart from renal function, were similar between arms, strengthening the comparison of the 2 bridging modalities. We assume that more stage 5 CKD and dialysis patients were anticoagulated with UFH than with enoxaparin because of concern for an increased risk of bleeding with a medication whose renal clearance is reduced by about 30% when CrCl < 30 mL/min.16 Although enoxaparin 1 mg/kg/d is FDA approved as a therapeutic anticoagulant option, clinicians at MEDVAMC likely had reservations about its use in end-stage CKD patients. Unlike many studies, including the BRIDGE trial, this review did not exclude patients with ACKD, and the outcomes with enoxaparin are available for interpretation.

Unsurprisingly, for patients included in this study, enoxaparin use led to a shorter hospital LOS, reduced ICU LOS, and a quicker time-to-discharge from initiation. This is credited to the 100% bioavailability of SC enoxaparin together with its viability as an outpatient therapy.16 Unlike with IV UFH, patients requiring bridging can be discharged on SC injections of enoxaparin until a therapeutic INR is maintained with warfarin. Hospital LOS in both arms was longer in this study than in other studies,9 which may reflect clinicians being more cautious with renally insufficient patients as well as the multiple comorbidities of the included patients. According to an economic analysis performed by Amorosi and colleagues in 2004, bridging with enoxaparin instead of UFH can save up to $3,733 per patient and reduce bridging costs by 63% to 85%, driven primarily by decreased hospital LOS.10

Economic Outcome

In our study, we conducted a cost analysis using national VA data that indicated a $41,138 (44%) reduction in total cost per average inpatient stay when bridging 1 patient with enoxaparin vs UFH. The benefit of this cost analysis is that it reflects direct costs at VA institutions nationally, making the data useful for practitioners at MEDVAMC and other VA hospitals. Stratifying costs by treating specialty rather than treatment location minimized skewing, as some patients had long ICU stays; however, no patients in the enoxaparin arm were treated in otolaryngology, which may have skewed the data. The data included direct costs for beds as well as costs for multiple services, such as procedures, pharmacy, nursing, laboratory tests, and imaging. Unlike the Amorosi study, our review did not include acquisition costs for enoxaparin syringes and bags of UFH or laboratory costs for aPTT and anti-factor Xa levels, in part because of the data source and the difficulty of calculating costs over a 10-year span.

Patients in the enoxaparin arm had a trend toward fewer hospital-acquired infections than those in the UFH arm, which we believe is due to a decreased LOS (in both total hospital and ICU days) and fewer blood draws needed for monitoring. It also may be attributed to a longer mean duration of surgery in the UFH arm (1.3 hours) vs the enoxaparin arm (0.9 hours); the percentage of patients with procedures ≥ 45 minutes and the types of procedures were similar between arms. However, these outcomes were not statistically significant. In addition, elderly males who are hospitalized may require a catheter (due to urinary retention), and catheter-associated urinary tract infection (CAUTI) is one of the most frequently reported infections in US acute care hospitals. This is in line with our patient population and may be a supplementary reason for the increased incidence of infection with UFH, although whether urinary catheters were used in these patients was not evaluated in this study.

Despite being at increased risk of a major adverse cardiovascular event (MACE), no patients in either arm had a stroke/TIA or MI within 30 days postprocedure. The only documented events were VTEs, which occurred in 4 patients on UFH. Four patients died in this study, all in the UFH arm. The incidence of thromboembolic complications and death, along with major and minor bleeding, cannot be interpreted as meaningful because this study was underpowered for these outcomes. Although anti-factor Xa monitoring is recommended for ACKD patients on enoxaparin, it was not routinely performed in this study. Another limitation was the inability to adequately assess the appropriateness of nurse-adjusted UFH infusion rates, largely due to the retrospective design. The variability in aPTT percentage in therapeutic range and time-to-therapeutic range reflects the difficulty of monitoring the safety and efficacy of UFH.

In 1991, Cruickshank and colleagues conducted a study in which a standard nomogram (similar to the MEDVAMC nomogram) for the adjustment of IV heparin was implemented at a single hospital.17 The success rate (aPTT percentage in therapeutic range) was 59.4%, and the average time-to-therapeutic range was about 1 day. The success rate (46.3%) and time-to-therapeutic range (2.4 days) in our study were lower and longer, respectively, than expected. One potential reason for this discrepancy is the difference in indication: the patients of Cruickshank and colleagues were being treated for VTE, whereas patients in our study had AF or atrial flutter. There also were inconsistencies in the availability of documented heparin monitoring parameters due to the study time frame and retrospective design. Patients on UFH who do not reach the therapeutic range in a timely manner are at greater risk of MACE and major/minor bleeding; our study was not powered to detect these outcomes.

Strengths and Limitations

A significant limitation of this study was its small sample size; the study did not meet the calculated sample size for the primary outcome, and it is unknown whether it was adequately powered for nosocomial infections. The study also was not powered to assess other adverse events, such as thromboembolic complications, bleeding, and death. The uneven number of patients per arm made it more difficult to compare the 2 populations appropriately, and medians for patient characteristics and outcomes were not reported.

Because the clinical pharmacy services at MEDVAMC were not as robust during the study time frame as they are now, decisions on which anticoagulant to use were primarily physician based. The use of TheraDoc to identify patients posed the risk of missing patients who may not have had the appropriate laboratory tests performed (ie, SCr). Patients on UFH had a lower eGFR than those on enoxaparin, which may limit extrapolation of enoxaparin’s use to end-stage renal disease; the lower eGFR and higher number of dialysis patients in the UFH arm also may have increased the occurrence of labile INRs and bleeding. Patients on hemodialysis typically have more comorbidities and an increased risk of infection due to the frequent use of catheters and needles to access the bloodstream. In addition, potential differences in catheter use and duration between groups were not identified; had these parameters been studied, the data may have better explained the increased incidence of infection in the UFH arm.

Strengths of this study include a complex patient population with similar characteristics, distribution of ethnicities representative of the US population, patients at moderate-to-high thrombotic risk, the analysis of nosocomial infections, and the exclusion of patients with normal renal function or moderate CKD.

Conclusion

To our knowledge, this is the first study to compare periprocedural bridging outcomes and the incidence of nosocomial infections in patients with AF and ACKD. This review provides new evidence that, in this patient population, enoxaparin is a potential anticoagulant for reducing hospital LOS and hospital-acquired infections. Compared with UFH, bridging with enoxaparin reduced hospital LOS and anticoagulation time-to-discharge by about 7 and 5 days, respectively, and decreased the incidence of nosocomial infections by 30 percentage points. Using the mean LOS per treating specialty for both arms, bridging 1 patient with AF with enoxaparin vs UFH can potentially lead to an estimated $40,000 (44%) reduction in total cost of hospitalization. Enoxaparin also showed no numeric differences in mortality and adverse events (stroke/TIA, MI, VTE) vs UFH, but it is important to note that this study was not powered to find significant differences in these outcomes. Because the mean eGFR of patients on enoxaparin was 22.6 mL/min/1.73 m2 and only 1 in 5 had stage 5 CKD, at this time we do not recommend enoxaparin for periprocedural use in stage 5 CKD or in patients on hemodialysis. Larger studies, including randomized trials, are needed in this patient population to further evaluate these outcomes and assess the use of enoxaparin in patients with ACKD.

References

1. Douketis JD, Spyropoulos AC, Spencer FA, et al. Perioperative management of antithrombotic therapy: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2)(suppl):e326S-350S.

2. Douketis JD, Spyropoulos AC, Kaatz S, et al; BRIDGE Investigators. Perioperative bridging anticoagulation in patients with atrial fibrillation. N Engl J Med. 2015;373(9):823-833.

3. Hammerstingl C, Schmitz A, Fimmers R, Omran H. Bridging of chronic oral anticoagulation with enoxaparin in patients with atrial fibrillation: results from the prospective BRAVE registry. Cardiovasc Ther. 2009;27(4):230-238.

4. Dad T, Weiner DE. Stroke and chronic kidney disease: epidemiology, pathogenesis, and management across kidney disease stages. Semin Nephrol. 2015;35(4):311-322.

5. Wattanakit K, Cushman M. Chronic kidney disease and venous thromboembolism: epidemiology and mechanisms. Curr Opin Pulm Med. 2009;15(5):408-412.

6. Saltiel M. Dosing low molecular weight heparins in kidney disease. J Pharm Pract. 2010;23(3):205-209.

7. Spinler SA, Inverso SM, Cohen M, Goodman SG, Stringer KA, Antman EM; ESSENCE and TIMI 11B Investigators. Safety and efficacy of unfractionated heparin versus enoxaparin in patients who are obese and patients with severe renal impairment: analysis from the ESSENCE and TIMI 11B studies. Am Heart J. 2003;146(1):33-41.

8. Fox KA, Antman EM, Montalescot G, et al. The impact of renal dysfunction on outcomes in the ExTRACT-TIMI 25 trial. J Am Coll Cardiol. 2007;49(23):2249-2255.

9. Spyropoulos AC, Turpie AG, Dunn AS, et al; REGIMEN Investigators. Clinical outcomes with unfractionated heparin or low-molecular-weight heparin as bridging therapy in patients on long-term oral anticoagulants: the REGIMEN registry. J Thromb Haemost. 2006;4(6):1246-1252.

10. Amorosi SL, Tsilimingras K, Thompson D, Fanikos J, Weinstein MC, Goldhaber SZ. Cost analysis of “bridging therapy” with low-molecular-weight heparin versus unfractionated heparin during temporary interruption of chronic anticoagulation. Am J Cardiol. 2004;93(4):509-511.

11. Inker LA, Astor BC, Fox CH, et al. KDOQI US commentary on the 2012 KDIGO clinical practice guideline for the evaluation and management of CKD. Am J Kidney Dis. 2014;63(5):713-735.

12. US Department of Veterans Affairs. Managerial Cost Accounting Financial User Support Reports: fiscal year 2018. https://www.herc.research.va.gov/include/page.asp?id=managerial-cost-accounting

13. Doherty JU, Gluckman TJ, Hucker WJ, et al. 2017 ACC Expert Consensus Decision Pathway for Periprocedural Management of Anticoagulation in Patients With Nonvalvular Atrial Fibrillation: a report of the American College of Cardiology Clinical Expert Consensus Document Task Force. J Am Coll Cardiol. 2017;69(7):871-898.

14. Kearon C, Kahn SR, Agnelli G, et al. Antithrombotic therapy for venous thromboembolic disease: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines (8th Edition). Chest. 2008;133(6 suppl):454S-545S.

15. Charlson M, Szatrowski TP, Peterson J, Gold J. Validation of a combined comorbidity index. J Clin Epidemiol. 1994;47(11):1245-1251. 

16. Lovenox [package insert]. Bridgewater, NJ: Sanofi-Aventis; December 2017.

17. Cruickshank MK, Levine MN, Hirsh J, Roberts R, Siguenza M. A standard heparin nomogram for the management of heparin therapy. Arch Intern Med. 1991;151(2):333-337.

18. Steinberg BA, Peterson ED, Kim S, et al; Outcomes Registry for Better Informed Treatment of Atrial Fibrillation Investigators and Patients. Use and outcomes associated with bridging during anticoagulation interruptions in patients with atrial fibrillation: findings from the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF). Circulation. 2015;131(5):488-494.

19. Verheugt FW, Steinhubl SR, Hamon M, et al. Incidence, prognostic impact, and influence of antithrombotic therapy on access and nonaccess site bleeding in percutaneous coronary intervention. JACC Cardiovasc Interv. 2011;4(2):191-197.

20. Bijsterveld NR, Peters RJ, Murphy SA, Bernink PJ, Tijssen JG, Cohen M. Recurrent cardiac ischemic events early after discontinuation of short-term heparin treatment in acute coronary syndromes: results from the Thrombolysis in Myocardial Infarction (TIMI) 11B and Efficacy and Safety of Subcutaneous Enoxaparin in Non-Q-Wave Coronary Events (ESSENCE) studies. J Am Coll Cardiol. 2003;42(12):2083-2089.

Author and Disclosure Information

Chandler Schexnayder is a Home-Based Primary Care Clinical Pharmacy Specialist, and Christine Aguilar is an Inpatient Surgery Clinical Pharmacy Specialist, both at the Michael E. DeBakey VA Medical Center in Houston, Texas. Kathleen Morneau is a Clinical Pharmacy Specialist in the Medical Intensive Care Unit and Antimicrobial Stewardship at the Audie L. Murphy Veterans Hospital in San Antonio, Texas.
Correspondence: Chandler Schexnayder (chandler.schexnayder@ va.gov)

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

Issue
Federal Practitioner - 36(7)a
Page Number
306-315
Bridging with enoxaparin rather than heparin has the potential to reduce the length of hospital stay, incidence of nosocomial infections, and cost of hospitalization.
Bridging with enoxaparin rather than heparin has the potential to reduce the length of hospital stay, incidence of nosocomial infections, and cost of hospitalization.

There has been a long-standing controversy in the use of parenteral anticoagulation for perioperative bridging in patients with atrial fibrillation (AF) pursuing elective surgery.1 The decision to bridge is dependent on the patient’s risk of thromboembolic complications and susceptibility to bleed.1 The BRIDGE trial showed noninferiority in rate of stroke and embolism events between low molecular weight heparins (LMWHs) and no perioperative bridging.2 However, according to the American College of Chest Physicians (CHEST) 2012 guidelines, patients in the BRIDGE trial would be deemed low risk for thromboembolic events displayed by a mean CHADS2 (congestive heart failure [CHF], hypertension, age, diabetes mellitus, and stroke/transient ischemic attack) score of 2.3. Also, the BRIDGE study and many others excluded patients with advanced forms of chronic kidney disease (CKD).2,3

Similar to patients with AF, patients with advanced CKD (ACKD, stage 4 and 5 CKD) have an increased risk of stroke and venous thromboembolism (VTE).4,5 Patients with AF and ACKD have not been adequately studied for perioperative anticoagulation bridging outcomes. Although unfractionated heparin (UFH) is preferred over LMWH in patients with ACKD, enoxaparin can be used in this population.1,6 Enoxaparin 1 mg/kg once daily is approved by the US Food and Drug Administration (FDA) for use in patients with severe renal insufficiency, defined as creatinine clearance (CrCl) < 30 mL/min. This dosage adjustment follows studies of enoxaparin 1 mg/kg twice daily that showed a significant increase in major and minor bleeding in patients with CrCl < 30 mL/min vs patients with CrCl > 30 mL/min.7 When comparing the myocardial infarction (MI) outcomes of patients with severe renal insufficiency in the ExTRACT-TIMI 25 trial, enoxaparin 1 mg/kg once daily had no significant difference in nonfatal major bleeding vs UFH.8 In patients without renal impairment (no documentation of kidney disease), bridging therapy with LMWH was completed within 24 hours of hospital stay more often than with UFH and with similar rates of VTEs and major bleeding.9 In addition to its suitability for outpatient administration, enoxaparin has a more predictable pharmacokinetic profile, allowing for less monitoring, and a lower incidence of heparin-induced thrombocytopenia (HIT) vs UFH.6

The Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC) in Houston, Texas, is one of the largest US Department of Veterans Affairs (VA) hospitals, managing > 150,000 veterans in Southeast Texas and other southern states. As a referral center for traveling patients, it is crucial that MEDVAMC decrease hospital length of stay (LOS) to increase space for incoming patients. Reducing LOS also reduces costs and may correlate with a lower incidence of nosocomial infections. Because of its significance to this facility, hospital LOS is an appropriate primary outcome for this study.

To our knowledge, bridging outcomes between LMWH and UFH in patients with AF and ACKD have never been studied. We hypothesized that using enoxaparin instead of heparin for periprocedural management would result in decreased hospital LOS, leading to a lower economic burden and lower incidence of nosocomial infections with no significant differences in major and minor bleeding and thromboembolic complications.10


Methods

This study was a single-center, retrospective chart review of adult patients from January 2008 to September 2017. The review was conducted at MEDVAMC and was approved by the research and development committee and by the Baylor College of Medicine Institutional Review Board. Formal consent was not required.

Included patients were aged ≥ 18 years with diagnoses of AF or atrial flutter and ACKD, defined as an estimated glomerular filtration rate (eGFR) of < 30 mL/min/1.73 m2 as calculated by the Modification of Diet in Renal Disease Study (MDRD) equation.11 Patients must have previously been on warfarin and required temporary interruption of warfarin for an elective procedure. During the interruption of warfarin therapy, patients were required to receive periprocedural anticoagulation with subcutaneous (SC) enoxaparin 1 mg/kg daily or continuous IV heparin per MEDVAMC heparin protocol. Patients were excluded if they had experienced major bleeding in the 6 weeks prior to the elective procedure, had current thrombocytopenia (platelet count < 100 × 109/L), or had a history of HIT or a heparin allergy.
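The eGFR cutoff for inclusion can be computed with the standard 4-variable MDRD Study equation; a minimal sketch (the function name and example values are ours, not the study's):

```python
def mdrd_egfr(scr_mg_dl: float, age_years: float, female: bool, black: bool) -> float:
    """4-variable MDRD Study eGFR in mL/min/1.73 m^2."""
    egfr = 175 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical example: a 70-year-old non-Black man with a serum creatinine of 3.0 mg/dL
egfr = mdrd_egfr(3.0, 70, female=False, black=False)
print(round(egfr, 1), egfr < 30)  # ~20.8 mL/min/1.73 m^2 -- meets the < 30 inclusion cutoff
```

This also illustrates why the TheraDoc screen used SCr > 1.5 mg/dL as a coarse first filter: eGFR falls steeply as SCr rises.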

This patient population was identified using TheraDoc Clinical Surveillance Software System (Charlotte, NC), which has prebuilt alert reviews for anticoagulation medications, including enoxaparin and heparin. An alert for patients on enoxaparin with serum creatinine (SCr) > 1.5 mg/dL was used to screen patients who met the inclusion criteria. A second alert identified patients on heparin. The VA Computerized Patient Record System (CPRS) was used to collect patient data.

Economic Analysis

An economic analysis was conducted using data from the VA Managerial Cost Accounting Reports. Data on the national average cost per bed day were used so that this information could be extrapolated to multiple VA institutions.12 The national average cost per day was determined by dividing the total cost by the number of bed days for the identified treating specialty during the fiscal period of 2018. Average cost per day data included costs for bed day, surgery, radiology services, laboratory tests, pharmacy services, treatment location (eg, intensive care units [ICUs]), and all other costs associated with an inpatient stay. A cost analysis was performed using this average cost per bed day and the mean LOS between enoxaparin and UFH for each treating specialty. The major outcome of the cost analysis was the total cost per average inpatient stay. The national average cost per bed day for each treating specialty was multiplied by the average LOS found for each treating specialty in this study; the sum of all the average costs per inpatient stay for the treating specialties resulted in the total cost per average inpatient stay. Permission to use these data was granted by the Pharmacy and Critical Care Services at MEDVAMC.
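The roll-up described above is a per-specialty multiply-and-sum. A minimal sketch with hypothetical figures (the study's actual VA cost-per-bed-day values and specialty list are not reproduced here):

```python
# Hypothetical illustrative figures only -- not the study's actual cost data.
avg_cost_per_bed_day = {"thoracic": 5200.0, "vascular": 4800.0, "medicine": 3900.0}
avg_los_days = {"thoracic": 4.7, "vascular": 2.1, "medicine": 3.4}

def total_cost_per_stay(cost_per_day: dict, los_days: dict) -> float:
    """Sum of (national average cost per bed day x mean LOS) across treating specialties."""
    return sum(cost_per_day[s] * los_days[s] for s in cost_per_day)

print(total_cost_per_stay(avg_cost_per_bed_day, avg_los_days))  # 47780.0
```

Running this separately for each arm's per-specialty LOS yields the two "total cost per average inpatient stay" figures that are compared in the Results.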

Patient Demographics and Characteristics

Data were collected on patient demographics (Table 1). Nosocomial infections, stroke/transient ischemic attack, MI, VTE, major and minor bleeding, and death are defined in Table 2.

The primary outcome of the study was hospital LOS. The study was powered at 90% with α = .05, which gives a required study population of 114 patients (1:1 enrollment ratio) to determine a statistically significant difference in hospital stay. This sample size was calculated using the mean hospital LOS (the primary objective) in the REGIMEN registry for LMWH (4.6 days) and UFH (10.3 days).9 To our knowledge, the incidence of nosocomial infections (a secondary outcome) has not been studied in this patient population; therefore, there was no basis for assessing an appropriate sample size to find a difference in this outcome. Instead, the goal was to include as many patients as possible to best assess this variable. Because of an expected high exclusion rate, 504 patients were reviewed to target a sample size of 120 patients. Due to the single-center nature of this review, the secondary outcomes of thromboembolic complications and major and minor bleeding were expected to be underpowered.
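The power target can be checked with the standard 2-sample normal-approximation formula for comparing means. The registry standard deviation is not reported in this article, so the σ below is back-solved purely for illustration:

```python
from statistics import NormalDist

def n_per_group(delta: float, sigma: float, alpha: float = 0.05, power: float = 0.90) -> float:
    """Approximate per-group n for a 2-sample comparison of means (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 2 * (sigma * z / delta) ** 2

# Difference in mean LOS from the REGIMEN registry: 10.3 - 4.6 = 5.7 days.
# sigma = 9.4 days is an assumed value chosen to reproduce ~57 per group (~114 total).
print(n_per_group(delta=5.7, sigma=9.4))
```

With a larger assumed σ the required n grows quadratically, which is why the review targeted 120 patients despite the 114-patient calculation.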

The final analysis compared the enoxaparin arm with the UFH arm. Univariate differences between the treatment groups were compared using the Fisher exact test for categorical variables. Demographic data and other continuous variables were analyzed by an unpaired t test to compare means between the 2 arms. Outcomes and characteristics were deemed statistically significant when the P value was < .05 (α = .05). All P values reported were 2-tailed, and CIs were reported at the 95% level. No statistical analysis was performed for the cost differences (based on LOS per treating specialty) in the 2 treatment arms. Statistical analyses were completed using GraphPad Software (San Diego, CA).
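For a 2 × 2 table, the two-sided Fisher exact test sums the hypergeometric probabilities of all tables no more probable than the one observed. A from-scratch sketch (not the GraphPad implementation the study used), applied to the nosocomial infection counts reported in the Results (2/14 enoxaparin vs 16/36 UFH):

```python
from math import comb

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact P value for the table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c

    def p_table(x: int) -> float:  # hypergeometric probability with cell (0,0) = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Sum over every table at least as extreme (no more probable) than the observed one
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs * (1 + 1e-9))

print(round(fisher_exact_2x2(2, 12, 16, 20), 3))  # 0.056, matching the reported P value
```

This reproduces the article's P = .056 for the infection comparison, which is how a near-significant result can arise from counts as small as 2 vs 16.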


Results

In total, 50 patients were analyzed in the study. There were 36 patients bridged with IV UFH at a concentration of 25,000 U/250 mL with an initial infusion rate of 12 U/kg/h. In the other arm, 14 patients were anticoagulated with renally dosed enoxaparin 1 mg/kg/d with an average daily dose of 89.3 mg; the mean actual body weight in this group was 90.9 kg (consistent with the mean enoxaparin daily dose). Physicians of the primary team decided which parenteral anticoagulant to use. The difference in mean duration of inpatient parenteral anticoagulation between the groups was not statistically significant: enoxaparin at 7.1 days and UFH at 9.6 days (P = .19). Patients in the enoxaparin arm were off warfarin therapy for an average of 6.0 days vs 7.5 days for the UFH group (P = .29). The duration of outpatient anticoagulation with enoxaparin was not analyzed in this study.
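Converting the weight-based starting dose into a pump rate is simple arithmetic given the bag concentration above (a hypothetical helper; the MEDVAMC protocol's actual titration rules are not shown):

```python
UFH_CONC_U_PER_ML = 25_000 / 250  # 100 U/mL, per the study's bag concentration

def initial_ufh_rate_ml_per_h(weight_kg: float, dose_u_per_kg_h: float = 12.0) -> float:
    """Initial infusion pump rate in mL/h for weight-based UFH dosing."""
    return weight_kg * dose_u_per_kg_h / UFH_CONC_U_PER_ML

# Example: a 90.9-kg patient (the enoxaparin arm's mean actual body weight)
print(round(initial_ufh_rate_ml_per_h(90.9), 1))  # 10.9 mL/h (about 1,090.8 U/h)
```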

Patient and Procedure Characteristics

All patients had AF or atrial flutter, with 86% of patients (n = 43) having a CHADS2 > 2 and 48% (n = 29) having a CHA2DS2VASc > 4. Overall, the mean age was 71.3 years, and ethnicity distributions were similar between arms. Patients had multiple comorbidities, as shown by a mean Charlson Comorbidity Index (CCI) of 7.7, and an increased risk of bleeding, as evidenced by 98% (n = 48) of patients having a HAS-BLED score of ≥ 3. A greater percentage of patients bridged with enoxaparin had diabetes mellitus (DM), a history of stroke and MI, and a heart valve, whereas UFH patients were more likely to be in stage 5 CKD (eGFR < 15 mL/min/1.73 m2), with a significantly lower mean eGFR (16.76 vs 22.64 mL/min/1.73 m2, P = .03). Furthermore, there were more patients on hemodialysis in the UFH (50%) arm vs the enoxaparin (21%) arm and a lower mean CrCl with UFH (20.1 mL/min) compared with enoxaparin (24.9 mL/min); however, the differences in hemodialysis and mean CrCl were not statistically significant. There were no patients on peritoneal dialysis in this review.
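The two stroke-risk scores cited above weight the same comorbidities differently; a compact sketch of the standard scoring rules (illustrative helpers, not the study's data-collection tooling):

```python
def chads2(chf: bool, htn: bool, age: int, dm: bool, stroke_tia: bool) -> int:
    """CHADS2: 1 point each for CHF, hypertension, age >= 75, diabetes; 2 for stroke/TIA."""
    return chf + htn + (age >= 75) + dm + 2 * stroke_tia

def cha2ds2_vasc(chf: bool, htn: bool, age: int, dm: bool, stroke_tia: bool,
                 vascular_disease: bool, female: bool) -> int:
    """CHA2DS2-VASc adds vascular disease, age 65-74 (1) / >= 75 (2), and female sex."""
    score = chf + htn + dm + vascular_disease + female + 2 * stroke_tia
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    return score

# Example: a 71-year-old man with CHF, hypertension, DM, and a prior stroke
print(chads2(True, True, 71, True, True))                       # 5 -> CHADS2 > 2
print(cha2ds2_vasc(True, True, 71, True, True, False, False))   # 6 -> CHA2DS2VASc > 4
```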

Procedure Characteristics

The average Revised Cardiac Risk Index (RCRI) score was about 3, indicating that these patients were at Class IV risk (11%) of having a perioperative cardiac event (Table 3). Nineteen patients (38%) underwent a major surgery, and all but 1 of the surgeries (major or minor) were invasive. The average length of surgery was 1.2 hours, and patients most often underwent cardiothoracic procedures (38%). Two of 14 (14%) patients on enoxaparin were able to have surgery as an outpatient, whereas this did not occur in patients on UFH. The procedures completed for these 2 patients were a colostomy (minor surgery) and an arteriovenous graft repair (major surgery). There were no statistically significant differences regarding types of procedures between the 2 arms.

Outcomes

The primary outcome of this study, hospital LOS, differed significantly in the enoxaparin arm vs UFH: 10.2 days vs 17.5 days, P = .04 (Table 4). The time-to-discharge from initiation of parenteral anticoagulation was significantly reduced with enoxaparin (7.1 days) compared with UFH (11.9 days); P = .04. Although also reduced in the enoxaparin arm, ICU LOS did not show statistical significance (1.1 days vs 4.0 days, P = .09).

About 36% (n = 18) of patients in this study acquired an infection during hospitalization for elective surgery. The most common microorganism and site of infection were Enterococcus species and the urinary tract, respectively (Table 5). Nearly half (44%, n = 16) of the patients in the UFH group had a nosocomial infection vs 14% (n = 2) of enoxaparin-bridged patients, a difference approaching significance (P = .056). Both patients in the enoxaparin group had the urinary tract as the primary source of infection; 1 of these patients had a urologic procedure.

Major bleeding occurred in 7% (n = 1) of enoxaparin patients vs 22% (n = 8) in the UFH arm, but this difference was not statistically significant (P = .41). Minor bleeding was similar between the enoxaparin and UFH arms (14% vs 19%, P = .99). Regarding thromboembolic complications, the enoxaparin group (0%) had a numerically lower rate than UFH (11%), with VTE (n = 4) being the only occurrence of the composite outcome (P = .57). There were 4 deaths within 30 days posthospitalization, all in the UFH group (P = .57). Due to the small sample size of this study, these outcomes (bleeding and thrombotic events) were not powered to detect a statistically significant difference.


Economic Analysis

The average cost differences (Table 6) of hospitalization between enoxaparin and UFH were calculated using the average LOS per treating specialty multiplied by the national average VA Managerial Cost Accounting cost for an inpatient bed day in 2018.12 The treating specialty with the longest average LOS in the enoxaparin arm was thoracic (4.7 days). The UFH arm also had a long average LOS for the thoracic specialty (6.4 days); however, the vascular specialty (6.7 days) had the longest average LOS in this group. With a mean LOS of 10.2 days in the enoxaparin arm, stratified by treating specialty, the total cost per average inpatient stay was calculated as $51,710. Patients in the UFH arm had a total cost per average inpatient stay of $92,848.

Monitoring

Anti-factor Xa levels for LMWH monitoring were not analyzed in this study because of a lack of collected values; only 1 patient had an anti-factor Xa level checked during this time frame. Infusion rates of UFH were adjusted based on aPTT levels collected per the MEDVAMC inpatient anticoagulation protocol. The average percentage of aPTT values in the therapeutic range was 46.3%, and the mean (SD) time-to-therapeutic range was about 2.4 (1.3) days. Due to this study’s retrospective nature, there were inconsistencies in the availability of documentation of UFH infusion rates. For this reason, these values were not analyzed further.

Discussion

In 2017, the American College of Cardiology published the Periprocedural Anticoagulation Expert Consensus Pathway, which recommends that patients with AF at low risk (CHA2DS2VASc 1-4) of thromboembolism not be bridged (unless the patient had a prior VTE or stroke/TIA).13 Nearly half the patients in this study were classified as moderate-to-high thrombotic risk, as evidenced by a CHA2DS2VASc > 4 and a mean score of 4.8. Given this study’s retrospective design spanning 2008 to 2017, many of the clinicians may have referenced the 2008 CHEST antithrombotic guidelines when deciding whether to bridge patients; those guidelines and the previous MEDVAMC anticoagulation protocol recommend bridging patients with AF with CHADS2 > 2 (moderate-to-high thrombotic risk), criteria that all but 1 of the patients in this study met.1,14 In contrast to the landmark BRIDGE trial, the mean CHADS2 score in this study was 3.6, an indication that our patient population was at increased risk of stroke and embolism.


In addition to thromboembolic complications, patients in the current study also were at increased risk of clinically relevant bleeding, with a mean HAS-BLED score of 4.1 and nearly all patients having a score > 3. The complexity of the veteran population also was displayed by this study’s mean CCI (7.7) and RCRI (3.0), indicating a 0% estimated 10-year survival and an 11% risk of a perioperative cardiac event, respectively. A mean CCI of 7.7 is associated with a 13.3 relative risk of death within 6 years postoperation.15 All patients had a diagnosis of hypertension, and > 75% had this diagnosis complicated by DM. In addition, this patient population had extensive cardiovascular disease or elevated cardiovascular risk, making it a clinically relevant sample of patients who would require periprocedural bridging.

Another positive aspect of this study is that all the baseline characteristics, apart from renal function, were similar between arms, strengthening the comparison of the 2 bridging modalities. We suspect that more stage 5 CKD and dialysis patients were anticoagulated with UFH rather than enoxaparin because of concern for an increased risk of bleeding with a medication whose renal clearance is reduced about 30% when CrCl is < 30 mL/min.16 Although enoxaparin 1 mg/kg/d is FDA approved as a therapeutic anticoagulant option, clinicians at MEDVAMC likely had reservations about its use in end-stage CKD patients. Unlike many studies, including the BRIDGE trial, this review did not exclude patients with ACKD, and the outcomes with enoxaparin are available for interpretation.

Unsurprisingly, for patients included in this study, enoxaparin use led to shorter hospital LOS, reduced ICU LOS, and a quicker time-to-discharge from initiation. This is credited to the 100% bioavailability of SC enoxaparin along with its suitability for outpatient therapy.16 Unlike with IV UFH, patients requiring bridging can be discharged on SC injections of enoxaparin until a therapeutic INR is maintained with warfarin. Hospital LOS in both arms was longer in this study than in other studies.9 This may be because clinicians are more cautious with renally impaired patients and because the patients included in this study had multiple comorbidities. According to an economic analysis performed by Amorosi and colleagues in 2004, bridging with enoxaparin instead of UFH can save up to $3,733 per patient and reduce bridging costs by 63% to 85%, driven primarily by decreased hospital LOS.10

Economic Outcome

In our study, we conducted a cost analysis using national VA data that indicated a $41,138 or 44% reduction in total cost per average inpatient stay when bridging 1 patient with enoxaparin vs UFH. The benefit of this cost analysis is that it reflects direct costs at VA institutions nationally; this will allow these data to be useful for practitioners at MEDVAMC and other VA hospitals. Stratifying the costs by treating specialty instead of treatment location minimized skewing of the data as there were some patients with long LOS in the ICU. No patients in the enoxaparin arm were treated in otolaryngology, which may have skewed the data. The data included direct costs for beds as well as costs for multiple services, such as procedures, pharmacy, nursing, laboratory tests, and imaging. Unlike the Amorosi study, our review did not include acquisition costs for enoxaparin syringes and bags of UFH or laboratory costs for aPTT and anti-factor Xa levels in part because of the data source and the difficulty calculating costs over a 10-year span.


Patients in the enoxaparin arm had a trend toward fewer hospital-acquired infections than did those in the UFH arm, which we believe is due to a decreased LOS (in both total hospital and ICU days) and fewer blood draws needed for monitoring. It also may be attributable to a longer mean duration of surgery in the UFH arm (1.3 hours) vs enoxaparin (0.9 hours), although the percentage of patients with procedures ≥ 45 minutes and the types of procedures were similar between arms, and these differences were not statistically significant. In addition, elderly males who are hospitalized may require a catheter (due to urinary retention), and catheter-associated urinary tract infection (CAUTI) is one of the most commonly reported infections in acute care hospitals in the US. This is in line with our patient population and may be a supplementary reason for the increased incidence of infection with UFH. However, whether urinary catheters were used in these patients was not evaluated in this study.

Despite being at increased risk of experiencing a major adverse cardiovascular event (MACE), no patients in either arm had a stroke/TIA or MI within 30 days postprocedure. The only occurrences documented were VTEs, which happened only in 4 patients on UFH. Four people died in this study, all in the UFH arm. The incidence of thromboembolic complications and death, along with major and minor bleeding, cannot be interpreted as meaningful because this study was underpowered for these outcomes. Despite anti-factor Xa monitoring being recommended in ACKD patients on enoxaparin, this monitoring was not routinely performed in this study. Another limitation was the inability to adequately assess the appropriateness of nurse-adjusted UFH infusion rates, largely due to the retrospective nature of this study. The variability of the aPTT percentage in therapeutic range and the time-to-therapeutic range reported was indicative of the difficulties of monitoring the safety and efficacy of UFH.

In 1991, Cruickshank and colleagues conducted a study in which a standard nomogram (similar to the MEDVAMC nomogram) for the adjustment of IV heparin was implemented at a single hospital.17 The success rate (aPTT percentage in therapeutic range) was 59.4%, and the average time-to-therapeutic range was about 1 day. The success rate (46.3%) and time-to-therapeutic range (2.4 days) in our study were lower and longer, respectively, than expected. One potential reason for this discrepancy could be the difference in indication: the patients in the Cruickshank and colleagues study were being treated for VTE, whereas patients in our study had AF or atrial flutter. Also, there were inconsistencies in the availability of documentation of monitoring parameters for heparin due to the study time frame and retrospective design. Patients on UFH who do not reach the therapeutic range in a timely manner are at greater risk of MACE and major/minor bleeding. Our study was not powered to detect these findings.

Strengths and Limitations

A significant limitation of this study was its small sample size; the study did not reach the calculated sample size needed to power the primary outcome, and it is unknown whether it met power for nosocomial infections. The study also was not powered to review other adverse events, such as thromboembolic complications, bleeding, and death. The study had an uneven number of patients in each arm, which made it more difficult to appropriately compare the 2 populations, and it did not include medians for patient characteristics and outcomes.


Due to this study’s time frame, the clinical pharmacy services at MEDVAMC were not as robust as they are now, which is why the decisions on which anticoagulant to use were primarily physician based. The use of TheraDoc to identify patients posed the risk of missing patients who may not have had the appropriate laboratory tests performed (ie, SCr). Patients on UFH had a reduced eGFR compared with those on enoxaparin, which may limit extrapolation of enoxaparin’s use to end-stage renal disease. The reduced eGFR and higher number of dialysis patients in the UFH arm may have contributed to more labile INRs and bleeding outcomes. Patients on hemodialysis typically have more comorbidities and an increased risk of infection due to the frequent use of catheters and needles to access the bloodstream. In addition, potential differences in catheter use and duration between groups were not identified. Had these parameters been studied, the data may have helped better explain the increased incidence of infection in the UFH arm.

Strengths of this study include a complex patient population with similar characteristics, distribution of ethnicities representative of the US population, patients at moderate-to-high thrombotic risk, the analysis of nosocomial infections, and the exclusion of patients with normal renal function or moderate CKD.

Conclusion

To our knowledge, this is the first study to compare periprocedural bridging outcomes and the incidence of nosocomial infections in patients with AF and ACKD. This review provides new evidence that, in this patient population, enoxaparin is a potential anticoagulant for reducing hospital LOS and hospital-acquired infections. Compared with UFH, bridging with enoxaparin reduced hospital LOS and anticoagulation time-to-discharge by 7 and 5 days, respectively, and decreased the incidence of nosocomial infections by 30 percentage points. Using the mean LOS per treating specialty for both arms, bridging 1 patient with AF with enoxaparin vs UFH can potentially lead to an estimated $40,000 (44%) reduction in total cost of hospitalization. Enoxaparin also showed no increase in mortality or adverse events (stroke/TIA, MI, VTE) vs UFH, but it is important to note that this study was not powered to find a significant difference in these outcomes. Because the mean eGFR of patients on enoxaparin was 22.6 mL/min/1.73 m2 and only 1 in 5 had stage 5 CKD, at this time we do not recommend enoxaparin for periprocedural use in stage 5 CKD or in patients on hemodialysis. Larger studies, including randomized trials, are needed in this patient population to further evaluate these outcomes and assess the use of enoxaparin in patients with ACKD.

There has been a long-standing controversy in the use of parenteral anticoagulation for perioperative bridging in patients with atrial fibrillation (AF) pursuing elective surgery.1 The decision to bridge is dependent on the patient’s risk of thromboembolic complications and susceptibility to bleed.1 The BRIDGE trial showed noninferiority in rate of stroke and embolism events between low molecular weight heparins (LMWHs) and no perioperative bridging.2 However, according to the American College of Chest Physicians (CHEST) 2012 guidelines, patients in the BRIDGE trial would be deemed low risk for thromboembolic events displayed by a mean CHADS2 (congestive heart failure [CHF], hypertension, age, diabetes mellitus, and stroke/transient ischemic attack) score of 2.3. Also, the BRIDGE study and many others excluded patients with advanced forms of chronic kidney disease (CKD).2,3

Similar to patients with AF, patients with advanced CKD (ACKD, stage 4 and 5 CKD) have an increased risk of stroke and venous thromboembolism (VTE).4,5 Patients with AF and ACKD have not been adequately studied for perioperative anticoagulation bridging outcomes. Although unfractionated heparin (UFH) is preferred over LMWH in ACKD patients,enoxaparin can be used in this population.1,6 Enoxaparin 1 mg/kg once daily is approved by the US Food and Drug Administration (FDA) for use in patients with severe renal insufficiency defined as creatinine clearance (CrCl) < 30 mL/min. This dosage adjustment is subsequent to studies with enoxaparin 1 mg/kg twice daily that showed a significant increase in major and minor bleeding in severe renal-insufficient patients with CrCl < 30 mL/min vs patients with CrCl > 30 mL/min.7 When comparing the myocardial infarction (MI) outcomes of severe renal-insufficient patients in the ExTRACT-TIMI 25 trial, enoxaparin 1 mg/kg once daily had no significant difference in nonfatal major bleeding vs UFH.8 In patients without renal impairment (no documentation of kidney disease), bridging therapy with LMWH was completed more than UFH in < 24 hours of hospital stay and with similar rates of VTEs and major bleeding.9 In addition to its ability to be administered outpatient, enoxaparin has a more predictable pharmacokinetic profile, allowing for less monitoring and a lower incidence of heparin-induced thrombocytopenia (HIT) vs that of UFH.6

The Michael E. DeBakey Veteran Affairs Medical Center (MEDVAMC) in Houston, Texas, is one of the largest US Department of Veterans Affairs (VA) hospitals in the US, managing > 150,000 veterans in Southeast Texas and other southern states. As a referral center for traveling patients, it is crucial that MEDVAMC decrease hospital length of stay (LOS) to increase space for incoming patients. Reducing LOS also reduces costs and may have a correlation with decreasing the incidence of nosocomial infections. Because of its significance to this facility, hospital LOS is an appropriate primary outcome for this study.

To our knowledge, bridging outcomes between LMWH and UFH in patients with AF and ACKD have never been studied. We hypothesized that using enoxaparin instead of heparin for periprocedural management would result in decreased hospital LOS, leading to a lower economic burden and lower incidence of nosocomial infections with no significant differences in major and minor bleeding and thromboembolic complications.10

 

 

Methods

This study was a single-center, retrospective chart review of adult patients from January 2008 to September 2017. The review was conducted at MEDVAMC and was approved by the research and development committee and by the Baylor College of Medicine Institutional Review Board. Formal consent was not required.

Included patients were aged ≥ 18 years with diagnoses of AF or atrial flutter and ACKD as recognized by a glomerular filtration rate (eGFR) of < 30 mL/min/1.73 m2 as calculated by use of the Modification of Diet in Renal Disease Study (MDRD) equation.11 Patients must have previously been on warfarin and required temporary interruption of warfarin for an elective procedure. During the interruption of warfarin therapy, a requirement was set for patients to be on periprocedural anticoagulation with subcutaneous (SC) enoxaparin 1 mg/kg daily or continuous IV heparin per MEDVAMC heparin protocol. Patients were excluded if they had experienced major bleeding in the 6 weeks prior to the elective procedure, had current thrombocytopenia (platelet count < 100 × 109/L), or had a history of heparin-induced thrombocytopenia (HIT) or a heparin allergy.

This patient population was identified using TheraDoc Clinical Surveillance Software System (Charlotte, NC), which has prebuilt alert reviews for anticoagulation medications, including enoxaparin and heparin. An alert for patients on enoxaparin with serum creatinine (SCr) > 1.5 mg/dL was used to screen patients who met the inclusion criteria. A second alert identified patients on heparin. The VA Computerized Patient Record System (CPRS) was used to collect patient data.

Economic Analysis

An economic analysis was conducted using data from the VA Managerial Cost Accounting Reports. Data on the national average cost per bed day was used for the purpose of extrapolating this information to multiple VA institutions.12 National average cost per day was determined by dividing the total cost by the number of bed days for the identified treating specialty during the fiscal period of 2018. Average cost per day data included costs for bed day, surgery, radiology services, laboratory tests, pharmacy services, treatment location (ie, intensive care units [ICUs]) and all other costs associated with an inpatient stay. A cost analysis was performed using this average cost per bed day and the mean LOS between enoxaparin and UFH for each treating specialty. The major outcome of the cost analysis was the total cost per average inpatient stay. The national average cost per bed day for each treating specialty was multiplied by the average LOS found for each treating specialty in this study; the sum of all the average costs per inpatient stay for the treating specialties resulted in the total cost per average inpatient stay. Permission to use these data was granted by the Pharmacy and Critical Care Services at MEDVAMC.

Patient Demographics and Characteristics

Data were collected on patient demographics (Table 1). Nosocomial infections, stroke/transient ischemic attack, MI, VTE, major and minor bleeding, and death are defined in Table 2.

The primary outcome of the study was hospital LOS. The study was powered at 90% for α = .05, which gives a required study population of 114 (1:1 enrollment ratio) patients to determine a statistically significant difference in hospital stay. This sample size was calculated using the mean hospital LOS (the primary objective) in the REGIMEN registry for LMWH (4.6 days) and UFH (10.3 days).9 To our knowledge, the incidence of nosocomial infections (a secondary outcome) has not been studied in this patient population; therefore, there was no basis to assess an appropriate sample size to find a difference in this outcome. Furthermore, the goal was to collect as many patients as possible to best assess this variable. Because of an expected high exclusion rate, 504 patients were reviewed to target a sample size of 120 patients. Due to the single-center nature of this review, the secondary outcomes of thromboembolic complications and major and minor bleeding were expected to be underpowered.

The final analysis compared the enoxaparin arm with the UFH arm. Univariate differences between the treatment groups were compared using the Fisher exact test for categorical variables. Demographic data and other continuous variables were analyzed by an unpaired t test to compare means between the 2 arms. Outcomes and characteristics were deemed statistically significant when α (P value) was < .05. All P values reported were 2-tailed with a 95% CI. No statistical analysis was performed for the cost differences (based on LOS per treating specialty) in the 2 treatment arms. Statistical analyses were completed by utilizing GraphPad Software (San Diego, CA).

Results

In total, 50 patients were analyzed in the study. Thirty-six patients were bridged with IV UFH at a concentration of 25,000 U/250 mL with an initial infusion rate of 12 U/kg/h. In the other arm, 14 patients were anticoagulated with renally dosed enoxaparin 1 mg/kg/d, with an average daily dose of 89.3 mg; the mean actual body weight in this group was 90.9 kg, consistent with the weight-based daily dose. Physicians on the primary team decided which parenteral anticoagulant to use. The difference in mean duration of inpatient parenteral anticoagulation between the groups was not statistically significant: 7.1 days with enoxaparin vs 9.6 days with UFH (P = .19). Patients in the enoxaparin arm were off warfarin therapy for an average of 6.0 days vs 7.5 days in the UFH group (P = .29). The duration of outpatient anticoagulation with enoxaparin was not analyzed in this study.
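For context on the renally dosed regimen above, therapeutic enoxaparin dosing per the package insert cited in this article (reference 16) reduces from 1 mg/kg every 12 hours to 1 mg/kg once daily when CrCl < 30 mL/min. A minimal sketch, illustrative only and not clinical guidance:

```python
# Therapeutic enoxaparin dosing per the package insert cited in this
# article (reference 16): 1 mg/kg every 12 h, reduced to 1 mg/kg once
# daily when CrCl < 30 mL/min. Illustrative sketch, not clinical advice.

def enoxaparin_daily_dose_mg(weight_kg: float, crcl_ml_min: float) -> float:
    """Total daily therapeutic enoxaparin dose in mg."""
    if crcl_ml_min < 30:
        return 1.0 * weight_kg       # renally dosed: 1 mg/kg once daily
    return 2 * 1.0 * weight_kg       # 1 mg/kg every 12 hours

# Patient at this cohort's mean actual body weight with severe CKD:
print(enoxaparin_daily_dose_mg(90.9, crcl_ml_min=25))  # 90.9
```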

Patient and Procedure Characteristics

All patients had AF or atrial flutter, with 86% of patients (n = 43) having a CHADS2 score > 2 and 48% (n = 29) having a CHA2DS2VASc score > 4. Overall, the mean age was 71.3 years, with similar ethnicity distributions between arms. Patients had multiple comorbidities, as shown by a mean Charlson Comorbidity Index (CCI) of 7.7, and an increased risk of bleeding, as evidenced by 98% (n = 48) of patients having a HAS-BLED score ≥ 3. A greater percentage of patients bridged with enoxaparin had DM, a history of stroke and MI, and a heart valve, whereas UFH patients were more likely to be in stage 5 CKD (eGFR < 15 mL/min/1.73 m2) with a significantly lower mean eGFR (16.76 vs 22.64 mL/min/1.73 m2, P = .03). Furthermore, there were more patients on hemodialysis in the UFH arm (50%) vs the enoxaparin arm (21%), and mean CrCl was lower with UFH (20.1 mL/min) than with enoxaparin (24.9 mL/min); however, these differences were not statistically significant. No patients in this review were on peritoneal dialysis.
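CrCl values such as those above are conventionally estimated with the Cockcroft-Gault equation, the usual basis for renal enoxaparin dosing thresholds. A minimal sketch with illustrative inputs (the patient below is hypothetical, not a cohort member):

```python
# Standard Cockcroft-Gault estimate of creatinine clearance. The inputs
# below are illustrative, not data from this cohort.

def cockcroft_gault_crcl(age_y: float, weight_kg: float,
                         scr_mg_dl: float, female: bool = False) -> float:
    """Estimated creatinine clearance in mL/min."""
    crcl = (140 - age_y) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical 71-year-old, 90-kg man with SCr 3.5 mg/dL:
print(round(cockcroft_gault_crcl(71, 90.0, 3.5), 1))  # 24.6
```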

Procedure Characteristics

The average Revised Cardiac Risk Index (RCRI) score was about 3, placing these patients in Class IV, an estimated 11% risk of a perioperative cardiac event (Table 3). Nineteen patients (38%) underwent major surgery, and all but 1 of the surgeries (major or minor) were invasive. The average length of surgery was 1.2 hours, and cardiothoracic procedures were the most common (38%). Two of 14 (14%) patients on enoxaparin were able to have surgery as outpatients, whereas this did not occur in patients on UFH; the procedures completed for these 2 patients were a colostomy (minor surgery) and an arteriovenous graft repair (major surgery). There were no statistically significant differences in types of procedures between the 2 arms.

Outcomes

The primary outcome of this study, hospital LOS, differed significantly in the enoxaparin arm vs UFH: 10.2 days vs 17.5 days, P = .04 (Table 4). The time-to-discharge from initiation of parenteral anticoagulation was significantly reduced with enoxaparin (7.1 days) compared with UFH (11.9 days); P = .04. Although also reduced in the enoxaparin arm, ICU LOS did not show statistical significance (1.1 days vs 4.0 days, P = .09).

About 36% (n = 18) of patients in this study acquired an infection during hospitalization for elective surgery. The most common microorganism and site of infection were Enterococcus species and urinary tract, respectively (Table 5). Nearly half (44%, n = 16) of the patients in the UFH group had a nosocomial infection vs 14% (n = 2) of enoxaparin-bridged patients with a difference approaching significance; P = .056. Both patients in the enoxaparin group had the urinary tract as the primary source of infection; 1 of these patients had a urologic procedure.

Major bleeding occurred in 7% (n = 1) of enoxaparin patients vs 22% (n = 8) in the UFH arm, a difference that was not statistically significant (P = .41). Minor bleeding was similar between the enoxaparin and UFH arms (14% vs 19%, P = .99). Regarding thromboembolic complications, none occurred in the enoxaparin group vs 11% in the UFH group, with VTE (n = 4) being the only component of the composite outcome observed (P = .57). There were 4 deaths within 30 days posthospitalization, all in the UFH group (P = .57). Due to the small sample size, the study was not powered to detect statistically significant differences in these outcomes (bleeding and thrombotic events).

Economic Analysis

The average cost differences (Table 6) of hospitalization between enoxaparin and UFH were calculated using the average LOS per treating specialty multiplied by the national average Managerial Cost Accounting cost for an inpatient bed day in 2018.12 The treating specialty with the longest average LOS in the enoxaparin arm was thoracic (4.7 days). In the UFH arm, the thoracic specialty also had a long average LOS (6.4 days), but the vascular specialty had the longest (6.7 days). With the enoxaparin arm’s mean LOS of 10.2 days stratified by treating specialty, the total cost per average inpatient stay was calculated as $51,710; patients in the UFH arm had a total cost per average inpatient stay of $92,848.

Monitoring

Anti-factor Xa levels for LMWH monitoring were not analyzed in this study because few values were collected; only 1 patient had an anti-factor Xa level checked during the study time frame. Infusion rates of UFH were adjusted based on aPTT levels collected per the MEDVAMC inpatient anticoagulation protocol. The average percentage of aPTT values in the therapeutic range was 46.3%, and the mean (SD) time-to-therapeutic range was about 2.4 (1.3) days. Due to this study’s retrospective nature, there were inconsistencies in the availability of documented UFH infusion rates; for this reason, these values were not analyzed further.

Discussion

In 2017, the American College of Cardiology published the Periprocedural Anticoagulation Expert Consensus Pathway, which recommends that patients with AF at low risk of thromboembolism (CHA2DS2VASc 1-4) not be bridged unless they have had a prior VTE or stroke/TIA.13 Nearly half the patients in this study were classified as moderate-to-high thrombotic risk, as evidenced by a CHA2DS2VASc > 4 with a mean score of 4.8. Given this study’s retrospective design spanning 2008 to 2017, many clinicians may have referenced the 2008 CHEST antithrombotic guidelines when deciding to bridge patients; these guidelines and the previous MEDVAMC anticoagulation protocol recommend bridging patients with AF and a CHADS2 > 2 (moderate-to-high thrombotic risk), criteria that all but 1 patient in this study met.1,14 In contrast to the landmark BRIDGE trial, the mean CHADS2 score in this study was 3.6, indicating a patient population at increased risk of stroke and embolism.
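For reference, the CHA2DS2-VASc score discussed above is computed from standard components (CHF 1, hypertension 1, age ≥ 75 years 2, diabetes 1, prior stroke/TIA 2, vascular disease 1, age 65-74 years 1, female sex 1). A sketch with a hypothetical patient resembling this cohort:

```python
# Standard CHA2DS2-VASc components: CHF 1, hypertension 1, age >= 75 y 2,
# diabetes 1, prior stroke/TIA 2, vascular disease 1, age 65-74 y 1,
# female sex 1. The example patient is hypothetical.

def cha2ds2_vasc(age: int, female: bool, chf: bool, htn: bool,
                 diabetes: bool, stroke_tia: bool, vascular: bool) -> int:
    """CHA2DS2-VASc stroke-risk score for atrial fibrillation."""
    score = chf + htn + diabetes + vascular + 2 * stroke_tia + female
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    return score

# 71-year-old man with hypertension, DM, prior stroke, vascular disease:
print(cha2ds2_vasc(71, False, chf=False, htn=True, diabetes=True,
                   stroke_tia=True, vascular=True))  # 6
```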

In addition to thromboembolic complications, patients in the current study also were at increased risk of clinically relevant bleeding, with a mean HAS-BLED score of 4.1 and nearly all patients having a score ≥ 3. The complexity of the veteran population also was reflected in this study’s mean CCI (7.7) and RCRI (3.0), indicating a 0% estimated 10-year survival and an 11% risk of a perioperative cardiac event, respectively. A mean CCI of 7.7 is associated with a 13.3 relative risk of death within 6 years postoperation.15 All patients had a diagnosis of hypertension, and in > 75% this diagnosis was complicated by DM. This population’s extensive cardiovascular disease and elevated risk make it clinically representative of patients who would require periprocedural bridging.

Another strength of this study is that all baseline characteristics, apart from renal function, were similar between arms, strengthening the comparison of the 2 bridging modalities. We assume that more stage 5 CKD and dialysis patients were anticoagulated with UFH than enoxaparin because of concern for an increased risk of bleeding with a medication whose clearance is reduced by about 30% when CrCl is < 30 mL/min.16 Although enoxaparin 1 mg/kg/d is FDA approved as a therapeutic anticoagulant option, clinicians at MEDVAMC likely had reservations about its use in patients with end-stage CKD. Unlike many studies, including the BRIDGE trial, this review did not exclude patients with ACKD, so the outcomes with enoxaparin in this population are available for interpretation.

Unsurprisingly, for patients included in this study, enoxaparin use led to a shorter hospital LOS, a reduced ICU LOS, and a quicker time-to-discharge from initiation. This can be credited to the 100% bioavailability of SC enoxaparin and its viability as an outpatient therapeutic option.16 Unlike with IV UFH, patients requiring bridging can be discharged on SC enoxaparin injections until a therapeutic INR is maintained with warfarin. Hospital LOS in both arms was longer in this study than in other studies,9 which may reflect clinicians’ greater caution with renally insufficient patients and this cohort’s multiple comorbidities. According to an economic analysis performed by Amorosi and colleagues in 2004, bridging with enoxaparin instead of UFH can save up to $3,733 per patient and reduce bridging costs by 63% to 85%, driven primarily by decreased hospital LOS.10

Economic Outcome

In our study, we conducted a cost analysis using national VA data that indicated a $41,138 or 44% reduction in total cost per average inpatient stay when bridging 1 patient with enoxaparin vs UFH. The benefit of this cost analysis is that it reflects direct costs at VA institutions nationally; this will allow these data to be useful for practitioners at MEDVAMC and other VA hospitals. Stratifying the costs by treating specialty instead of treatment location minimized skewing of the data as there were some patients with long LOS in the ICU. No patients in the enoxaparin arm were treated in otolaryngology, which may have skewed the data. The data included direct costs for beds as well as costs for multiple services, such as procedures, pharmacy, nursing, laboratory tests, and imaging. Unlike the Amorosi study, our review did not include acquisition costs for enoxaparin syringes and bags of UFH or laboratory costs for aPTT and anti-factor Xa levels in part because of the data source and the difficulty calculating costs over a 10-year span.

Patients in the enoxaparin arm had a trend toward fewer hospital-acquired infections than those in the UFH arm, which we believe is due to a shorter LOS (in both total hospital and ICU days) and fewer blood draws needed for monitoring. It also may be attributable to a longer mean duration of surgery in the UFH arm (1.3 hours) vs the enoxaparin arm (0.9 hours); the percentage of patients with procedures ≥ 45 minutes and the types of procedures were similar between arms. However, these outcomes were not statistically significant. In addition, elderly males who are hospitalized may require a catheter (due to urinary retention), and catheter-associated urinary tract infection (CAUTI) is among the most frequently reported infections in acute care hospitals in the US. This is consistent with our patient population and may be a supplementary reason for the higher incidence of infection with UFH; however, whether urinary catheters were used in these patients was not evaluated in this study.

Despite being at increased risk of a major adverse cardiovascular event (MACE), no patients in either arm had a stroke/TIA or MI within 30 days postprocedure. The only thromboembolic events documented were VTEs, which occurred in 4 patients on UFH. All 4 deaths in this study occurred in the UFH arm. The incidence of thromboembolic complications and death, along with major and minor bleeding, cannot be considered conclusive because this study was underpowered for these outcomes. Despite anti-factor Xa monitoring being recommended for patients with ACKD on enoxaparin, this monitoring was not routinely performed in this study. Another limitation was the inability to adequately assess the appropriateness of nurse-adjusted UFH infusion rates, largely due to the retrospective nature of this study. The variability in the percentage of aPTT values in the therapeutic range and in time-to-therapeutic range was indicative of the difficulties of monitoring the safety and efficacy of UFH.

In 1991, Cruickshank and colleagues conducted a study in which a standard nomogram (similar to the MEDVAMC nomogram) for the adjustment of IV heparin was implemented at a single hospital.17 The success rate (percentage of aPTT values in the therapeutic range) was 59.4%, and the average time-to-therapeutic range was about 1 day. The success rate (46.3%) and time-to-therapeutic range (2.4 days) in our study were lower and longer, respectively, than expected. One potential reason for this discrepancy is the difference in indication: patients in the Cruickshank and colleagues study were treated for VTE, whereas patients in our study had AF or atrial flutter. Also, there were inconsistencies in the availability of documented heparin monitoring parameters due to the study time frame and retrospective design. Patients on UFH who do not reach the therapeutic range in a timely manner are at greater risk of MACE and major/minor bleeding; our study was not powered to detect these outcomes.

Strengths and Limitations

A significant limitation of this study was its small sample size: the study did not meet power for the primary outcome, and it is unknown whether it met power for nosocomial infections. The study also was not powered for other adverse events, such as thromboembolic complications, bleeding, and death. The arms were unevenly sized, which made it more difficult to compare the 2 patient populations appropriately, and medians were not reported for patient characteristics and outcomes.

Due to this study’s time frame, the clinical pharmacy services at MEDVAMC were not as robust as they are now, which is why decisions on which anticoagulant to use were primarily physician based. The use of TheraDoc to identify patients posed the risk of missing patients who may not have had the appropriate laboratory tests performed (ie, SCr). Patients on UFH had a reduced eGFR compared with those on enoxaparin, which may limit extrapolation of enoxaparin’s use to end-stage renal disease. The reduced eGFR and higher number of dialysis patients in the UFH arm may have contributed to more labile INRs and bleeding outcomes. Patients on hemodialysis typically have more comorbidities and an increased risk of infection due to the frequent use of catheters and needles to access the bloodstream. In addition, potential differences in catheter use and duration between groups were not identified; had these parameters been studied, the data may have helped explain the increased incidence of infection in the UFH arm.

Strengths of this study include a complex patient population with similar characteristics, distribution of ethnicities representative of the US population, patients at moderate-to-high thrombotic risk, the analysis of nosocomial infections, and the exclusion of patients with normal renal function or moderate CKD.

Conclusion

To our knowledge, this is the first study to compare periprocedural bridging outcomes and incidence of nosocomial infections in patients with AF and ACKD. This review provides new evidence that, in this patient population, enoxaparin is a potential anticoagulant for reducing hospital LOS and hospital-acquired infections. Compared with UFH, bridging with enoxaparin reduced hospital LOS and anticoagulation time-to-discharge by about 7 and 5 days, respectively, and decreased the incidence of nosocomial infections by 30 percentage points (44% vs 14%). Using the mean LOS per treating specialty for both arms, bridging 1 patient with AF with enoxaparin vs UFH can potentially lead to an estimated $40,000 (44%) reduction in the total cost of hospitalization. Enoxaparin also showed no excess in mortality or adverse events (stroke/TIA, MI, VTE) vs UFH, but it is important to note that this study was not powered to find a significant difference in these outcomes. Because the mean eGFR of patients on enoxaparin was 22.6 mL/min/1.73 m2 and only 1 in 5 were in stage 5 CKD, at this time we do not recommend enoxaparin for periprocedural use in stage 5 CKD or in patients on hemodialysis. Larger studies, including randomized trials, are needed in this patient population to further evaluate these outcomes and assess the use of enoxaparin in patients with ACKD.

References

1. Douketis JD, Spyropoulos AC, Spencer FA, et al. Perioperative management of antithrombotic therapy: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2)(suppl):e326S-350S.

2. Douketis JD, Spyropoulos AC, Kaatz S, et al; BRIDGE Investigators. Perioperative bridging anticoagulation in patients with atrial fibrillation. N Engl J Med. 2015;373(9):823-833.

3. Hammerstingl C, Schmitz A, Fimmers R, Omran H. Bridging of chronic oral anticoagulation with enoxaparin in patients with atrial fibrillation: results from the prospective BRAVE registry. Cardiovasc Ther. 2009;27(4):230-238.

4. Dad T, Weiner DE. Stroke and chronic kidney disease: epidemiology, pathogenesis, and management across kidney disease stages. Semin Nephrol. 2015;35(4):311-322.

5. Wattanakit K, Cushman M. Chronic kidney disease and venous thromboembolism: epidemiology and mechanisms. Curr Opin Pulm Med. 2009;15(5):408-412.

6. Saltiel M. Dosing low molecular weight heparins in kidney disease. J Pharm Pract. 2010;23(3):205-209.

7. Spinler SA, Inverso SM, Cohen M, Goodman SG, Stringer KA, Antman EM; ESSENCE and TIMI 11B Investigators. Safety and efficacy of unfractionated heparin versus enoxaparin in patients who are obese and patients with severe renal impairment: analysis from the ESSENCE and TIMI 11B studies. Am Heart J. 2003;146(1):33-41.

8. Fox KA, Antman EM, Montalescot G, et al. The impact of renal dysfunction on outcomes in the ExTRACT-TIMI 25 trial. J Am Coll Cardiol. 2007;49(23):2249-2255.

9. Spyropoulos AC, Turpie AG, Dunn AS, et al; REGIMEN Investigators. Clinical outcomes with unfractionated heparin or low-molecular-weight heparin as bridging therapy in patients on long-term oral anticoagulants: the REGIMEN registry. J Thromb Haemost. 2006;4(6):1246-1252.

10. Amorosi SL, Tsilimingras K, Thompson D, Fanikos J, Weinstein MC, Goldhaber SZ. Cost analysis of “bridging therapy” with low-molecular-weight heparin versus unfractionated heparin during temporary interruption of chronic anticoagulation. Am J Cardiol. 2004;93(4):509-511.

11. Inker LA, Astor BC, Fox CH, et al. KDOQI US commentary on the 2012 KDIGO clinical practice guideline for the evaluation and management of CKD. Am J Kidney Dis. 2014;63(5):713-735.

12. US Department of Veterans Affairs. Managerial Cost Accounting Financial User Support Reports: fiscal year 2018. https://www.herc.research.va.gov/include/page.asp?id=managerial-cost-accounting.

13. Doherty JU, Gluckman TJ, Hucker WJ, et al. 2017 ACC Expert Consensus Decision Pathway for Periprocedural Management of Anticoagulation in Patients With Nonvalvular Atrial Fibrillation: a report of the American College of Cardiology Clinical Expert Consensus Document Task Force. J Am Coll Cardiol. 2017;69(7):871-898.

14. Kearon C, Kahn SR, Agnelli G, et al. Antithrombotic therapy for venous thromboembolic disease: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines (8th Edition). Chest. 2008;133(6 suppl):454S-545S.

15. Charlson M, Szatrowski TP, Peterson J, Gold J. Validation of a combined comorbidity index. J Clin Epidemiol. 1994;47(11):1245-1251. 

16. Lovenox [package insert]. Bridgewater, NJ: Sanofi-Aventis; December 2017.

17. Cruickshank MK, Levine MN, Hirsh J, Roberts R, Siguenza M. A standard heparin nomogram for the management of heparin therapy. Arch Intern Med. 1991;151(2):333-337.

18. Steinberg BA, Peterson ED, Kim S, et al; Outcomes Registry for Better Informed Treatment of Atrial Fibrillation Investigators and Patients. Use and outcomes associated with bridging during anticoagulation interruptions in patients with atrial fibrillation: findings from the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF). Circulation. 2015;131(5):488-494.

19. Verheugt FW, Steinhubl SR, Hamon M, et al. Incidence, prognostic impact, and influence of antithrombotic therapy on access and nonaccess site bleeding in percutaneous coronary intervention. JACC Cardiovasc Interv. 2011;4(2):191-197.

20. Bijsterveld NR, Peters RJ, Murphy SA, Bernink PJ, Tijssen JG, Cohen M. Recurrent cardiac ischemic events early after discontinuation of short-term heparin treatment in acute coronary syndromes: results from the Thrombolysis in Myocardial Infarction (TIMI) 11B and Efficacy and Safety of Subcutaneous Enoxaparin in Non-Q-Wave Coronary Events (ESSENCE) studies. J Am Coll Cardiol. 2003;42(12):2083-2089.


Issue
Federal Practitioner - 36(7)a
Page Number
306-315

Fluoroscopically Guided Lateral Approach Hip Injection

Article Type
Changed
Mon, 07/15/2019 - 14:25
A retrospective comparison study of the anterior-oblique and lateral approach to hip injection procedures suggests that the lateral approach may be a valuable interventional skill for those performing hip injections.

Hip injections are performed as diagnostic and therapeutic interventions across a variety of medical subspecialties, including but not limited to those practicing physical medicine and rehabilitation, pain medicine, sports medicine, orthopedic surgery, and radiology. Traditional image-guided intra-articular hip injection commonly uses an anterior-oblique approach from a starting point on the anterior groin traversing soft tissue anterior to the femoral neck to the target needle placement at the femoral head-neck junction.

In fluoroscopic procedures, a coaxial technique is used for safe and precise needle placement. The X-ray beam is angled in line with the projected path of the needle from skin entry point to injection target. With the coaxial, en face technique (also called EF, parallel, hub view, down the barrel, or barrel view), the needle appears as a single radiopaque dot over the target injection site.1 This technique minimizes needle redirection and disturbance of surrounding tissue on the approach to the intended target.

The noncoaxial technique, as used in the anterior-oblique approach, intentionally directs the needle from the skin entry point along a path that crosses the X-ray beam on the way to the injection target. Clinical challenges of the anterior-oblique (also referred to as anterior) approach include the noncoaxial technique itself, body habitus and pannus, proximity to neurovascular structures, and patient positioning. By understanding the risks and benefits of varied technical approaches to accomplishing a clinical goal and outcome, trainees are better able to select the technique most appropriate for a varied patient population.

Common risks to patients for all intra-articular interventions include bleeding, infection, and pain. Risk of damage to nearby structures is often mentioned as part of a standard informed consent process, as the femoral vein, artery, and nerve lie in close anatomical proximity to the target injection site. Prior studies examining complications of intra-articular hip injections commonly conclude that, despite a relatively low-risk profile for skilled interventionalists, needle placement in the medial 50% of the femoral head on anteroposterior imaging should be avoided.2

The anterior technique is a commonly described approach that can be used for both ultrasound-guided and fluoroscopically guided hip injections.3 Using ultrasound guidance, the anterior technique can be performed with in-plane direct visualization of the needle throughout the procedure. With fluoroscopic guidance, the anterior approach is performed out-of-plane, using the noncoaxial technique, which requires the interventionalist to rely on tactile and anatomic guidance to the target injection site. The anterior approach for hip injection is one of few interventions in which the coaxial technique is not used, making instruction less concrete and potentially more challenging for learners because the needle path is not visualized in plane with the X-ray beam.

Technical guidance and detailed instruction for the lateral approach are infrequently described in fluoroscopic interventional texts. Reference to a lateral approach hip injection was made as early as the 1970s, without detail on the technique, noting the advantage of visualizing the hip joint for needle placement when hardware is in place.4 A more recent article described a lateral approach technique with the patient in the lateral decubitus position, which presents limitations in consistent fluoroscopic imaging and can be a challenging static position for the patient to maintain.5

The retrospective review of anterior-oblique and lateral approach procedures in this study aims to demonstrate that there is no significant difference in radiation exposure, rate of successful intra-articular injection, or complication rate between the approaches. If proven noninferior, the lateral approach may be a valuable interventional skill for those performing hip injections; potential benefits include giving the provider the option to access the joint using either technique. Additionally, the approach can be added to the instructional plan of practitioners providing technical instruction to trainees within their health care system.

Methods

The institutional review board at the VA Ann Arbor Healthcare System reviewed and granted approval for this study. Fluoroscopically guided hip injections were performed by 1 of 5 interventional pain physician staff members at the VA Ann Arbor Healthcare System; for the study cases, interventional pain fellows performed the procedures under the direct supervision of board-certified physicians. Supervising physicians included both physiatrists and anesthesiologists. Images were reviewed and evaluated without corresponding patient biographic data.

For cases using the lateral approach, the patient is positioned supine on the fluoroscopy table. In anterior-posterior and lateral views, trajectory lines are drawn using a long metal marking rod held adjacent to the patient. With pulsed low-dose fluoroscopy, transverse lines are drawn to identify the midpoint of the femoral head in lateral view (Figure 1A, x-axis) and the most direct line from skin to the lateral femoral head-neck junction joint target (Figure 1B, z-axis). Also in lateral view, the z-axis line marked on the skin is used to confirm that this transverse plane crosses the overlapping femoral heads (Figure 1A, y-axis).



The intersection of these transverse and coronal plane lines identifies the starting point for the most direct approach from skin to the injection target at the femoral head-neck junction. Using the coaxial technique in the lateral view, the needle is introduced and advanced to the lateral joint target using intermittent fluoroscopic images. Continuing in this view, the interventionalist can ensure that advancing the needle to the osseous endpoint will place the tip at the midpoint of the femoral head at the target on the lateral surface, avoiding inadvertent advancement of the needle anterior or posterior to the femoral head. Final needle placement is then confirmed in the anterior-posterior view (Figure 2A), and contrast enhancement is used to confirm intra-articular spread (Figure 2B).



Cases included in the study were performed over an 8-month period in 2017. A list of all cases performed and documented under the major joint injection procedure code was created from case images recorded in IntelliSpace PACS radiology software (Andover, MA), and cases were reviewed beginning with the most recent. Two research team members (1 radiologist and 1 interventional pain physician) reviewed the series of saved images for each patient and the associated procedure report, and recorded de-identified study data in Microsoft Excel (Redmond, WA).

Using the saved images and the associated procedure report, each case was classified by technical approach (anterior, lateral, or inconclusive); success of joint injection, as evidenced by appropriate contrast enhancement within the joint space (successful, unsuccessful, or incomplete images); documented use of sedation (yes, no); patient positioning (supine, prone); radiation exposure dose; and radiation exposure time. Additional comments, such as "notable pannus" or "hardware present," were recorded to annotate significant findings on imaging review.

Statistical Analysis

The distributions of the 2 continuous outcomes used to compare the approaches, radiation dose and exposure time, were checked using the Shapiro-Wilk test. Power analysis determined that inclusion of 30 anterior and 30 lateral cases provides adequate power to detect a 1-point mean difference, assuming a standard deviation of 1.5 in each group. Both radiation dose and exposure time were nonnormally distributed (W = 0.65, P < .001; W = 0.86, P < .001, respectively). Median and interquartile range (IQR) of dose and time in seconds were computed for the anterior and lateral approaches. Median differences in radiation dose and exposure time between the approaches were assessed with the k-sample test of equality of medians. All analyses were conducted using Stata Version 14.1 (StataCorp; College Station, TX).
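For readers replicating this workflow outside Stata, the same steps (Shapiro-Wilk normality check, median and IQR summary, and the k-sample test of equality of medians, ie, Mood's median test) can be sketched in Python with SciPy. The dose values below are simulated placeholders, not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical right-skewed radiation doses in mGy for 30 cases per
# approach; illustrative values only, NOT the study dataset.
anterior_dose = rng.lognormal(mean=0.0, sigma=0.8, size=30)
lateral_dose = rng.lognormal(mean=1.0, sigma=0.8, size=30)

# Shapiro-Wilk test: a low W with small P indicates nonnormality,
# motivating median-based comparison as in the study.
w_ant, p_ant = stats.shapiro(anterior_dose)
w_lat, p_lat = stats.shapiro(lateral_dose)

def median_iqr(x):
    """Return the median and (Q1, Q3) interquartile bounds."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return med, (q1, q3)

# Mood's k-sample test of equality of medians across the 2 groups
chi2, p_med, grand_median, table = stats.median_test(anterior_dose,
                                                     lateral_dose)

print("anterior median (IQR):", median_iqr(anterior_dose))
print("lateral median (IQR):", median_iqr(lateral_dose))
print("median test P =", round(p_med, 4))
```

The same pattern applies to exposure time: substitute the per-case fluoroscopy times for the dose arrays.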


Results

Between June 2017 and January 2018, 88 cases were performed and reviewed, with 30 anterior and 30 lateral approach cases included in this retrospective comparison study. A total of 28 cases were excluded for an inconclusive approach, multiple or bilateral procedures, missing dose and time data, or saved images inadequate to provide meaningful data (Figure 3).

The rate of successful intervention, with needle placement confirmed within the articular space on contrast enhancement, was not significantly different between the study groups: 96.7% (29 of 30) of anterior approach cases and 100% (30 of 30) of lateral approach cases were reported as successful. Overhanging pannus in the viewing area was reported in 5 anterior approach cases and 4 lateral approach cases. Hardware was noted in 2 lateral approach cases and no anterior approach cases. Sedation was used for 3 of the anterior approach cases and none of the lateral approach cases.
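The article does not name the test used to compare success rates. As an illustration only, a Fisher exact test, a common choice for a 2 × 2 table with small counts, applied to the reported counts confirms no significant difference:

```python
from scipy import stats

# 2 x 2 table of success/failure counts reported in the study
table = [[29, 1],   # anterior: 29 successful, 1 unsuccessful
         [30, 0]]   # lateral: 30 successful, 0 unsuccessful

# Two-sided Fisher exact test
odds_ratio, p = stats.fisher_exact(table)
print("Fisher exact P =", p)  # P = 1.0 -> no significant difference
```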



Patients undergoing the lateral approach received a higher median radiation dose than did those undergoing the anterior approach, but this was not statistically significant (P = .07) (Table). Those undergoing the lateral approach also had a longer median exposure time than did those undergoing the anterior approach, but this also was not statistically significant (P = .3). With no immediate complications reported in any of the studied interventions, there was no difference in complication rates between anterior and lateral approach cases.


Discussion

Pain medicine fellows, who have completed residency in a variety of disciplines (often anesthesiology or physical medicine and rehabilitation), perform fluoroscopically guided procedures and benefit from increased experience with the coaxial technique, as it improves awareness of needle depth and location. Once mastered, this skill set can be applied to multiple interventional pain procedures. Similar technical instruction with an emphasis on coaxial technique, as performed in the anterior or anterolateral approach to hip injections, can be used in both fluoroscopic and ultrasound-guided procedures, including facet injection, transforaminal epidural steroid injection, and myriad other procedures performed to ameliorate pain; there are advantages to pursuing a similar approach across all image-guided procedures. This comparison study evaluates an alternative technique whose reduced proximity to neurovascular structures offers the potential for risk reduction and a safer procedure profile.

Using a lateral approach, the interventionalist selects a starting point that enters the skin at a greater distance from any overlying pannus and from the elevated concentration of gram-negative and gram-positive bacteria contained within the inguinal skin.6 A previous study demonstrated improved success of intra-articular needle tip placement without image guidance in patients with body mass index (BMI) < 30.7 A prior study comparing the landmark-guided anterior approach with the lateral approach demonstrated that the anterior approach pierced or contacted the femoral nerve in 27% of anterior cases and came within 5 mm of the nerve in 60% of anterior cases.2 Image guidance, whether ultrasound, fluoroscopy, or computed tomography, is preferred because it reduces the risk of contact with adjacent neurovascular structures. Anatomic surface landmarks have been described as an alternative injection technique, without the use of fluoroscopy for initial, intraprocedure, and final placement confirmation.8 This nonimage-guided technique requires palpation of anatomic structures; although similar to the technique described in this study, its starting point is more lateral than that of the anterior approach but not in the most lateral position in the transverse plane that is used in this fluoroscopically guided lateral approach study.

Both physiologic characteristics of subjects and technical aspects of fluoroscopy can be factors in radiation dose and exposure time for hip injections. Patient BMI was not included in the data collection; further study would seek to determine whether BMI is a significant factor in any increased radiation dose and exposure time with lateral approach injections. Lateral fluoroscopic images require the X-ray beam to penetrate more tissue than do anterior-posterior images. Further study of these techniques would benefit from examining the pulse rate of fluoroscopic images and collimation (focusing of the radiation beam over a smaller area of tissue) as factors in any observed increase in total radiation dose and exposure time.

Improving the safety profile of this procedure could have a positive impact on the patient population receiving fluoroscopic hip injections, both within the VA Ann Arbor Healthcare System and elsewhere. While the study population was limited to VA patients seeking subspecialty nonsurgical joint care at a single tertiary care center, the technique is generalizable and can be used in most patients, as hip pain is a common condition necessitating nonoperative evaluation and treatment.

Radiation Exposures

As our analysis demonstrates, radiation dose exposure for each group was consistent with low (≤ 3 mSv) to moderate (> 3-20 mSv) annual effective doses in the general population.7 The anterior and lateral median radiation doses of 1 mGy and 3 mGy, respectively, are on the order of the standard exposure for a radiograph of the pelvis (1.31 mGy).9 It is therefore reasonable to consider a lateral approach for hip injection, given the benefits of a direct coaxial approach and of avoiding needle entry through skin with a higher concentration of bacteria.

The lateral approach did have increased radiation dose and exposure time, although neither was statistically significantly greater than with the anterior approach, and the differences in dose and time between techniques were not clinically significant. One potential explanation is that the lateral technique has more tissue to penetrate, which can be mitigated with collimation and other fluoroscopic image adjustments. Additionally, as trainees progress in competency, fewer images should need to be obtained.7 We hypothesize that as familiarity and comfort with this technique increase, the number of images necessary for successful injection would decrease, leading to decreased radiation dose and exposure time. We would expect that in the hands of a board-certified interventionalist, radiation dose and exposure time would be significantly decreased compared with our current dataset; this is an area of planned further study. In our existing dataset, the majority of procedures were performed with trainees, and inadequate information was documented to compare dose against procedural experience over time under individual physicians.

A notable strength of this study is the direct comparison of the anterior and lateral approaches with regard to radiation dose and exposure time, which we have not seen described in the literature. A detailed description of the technique may result in increased utilization by other providers. Data were collected from multiple providers, as board-certified pain physicians and board-eligible interventional pain fellows performed the procedures. This variability increases the generalizability of the findings, with a variety of providers, disciplines, years of experience, and types of training represented.

Limitations

Limitations include the retrospective nature of the study and the relatively small sample size. Even with this limitation, it is notable that no statistically significant differences were observed in radiation dose or fluoroscopy exposure time, suggesting the lateral approach is, at minimum, a noninferior technique. Combined with the improved safety profile, this technique is a viable alternative to the traditional anterior-oblique approach. Further study should be performed, such as a prospective, randomized controlled trial investigating the 2 techniques and following pain scores and functional ability after the procedure.

Conclusion

Given the decreased procedural risk related to proximity of neurovascular structures and the coaxial technique for needle advancement, the lateral approach for hip injection should be considered by those in any discipline performing fluoroscopically guided procedures. The lateral technique may be particularly useful, as a noninferior alternative to the traditional anterior method, in technically challenging cases and when skin entry at the anterior groin is suboptimal.

References

1. Cianfoni A, Boulter DJ, Rumboldt Z, Sapton T, Bonaldi G. Guidelines to imaging landmarks for interventional spine procedures: fluoroscopy and CT anatomy. Neurographics. 2011;1(1):39-48.

2. Leopold SS, Battista V, Oliverio JA. Safety and efficacy of intraarticular hip injection using anatomic landmarks. Clin Orthop Relat Res. 2001;(391):192-197.

3. Dodré E, Lefebvre G, Cockenpot E, Chastanet P, Cotten A. Interventional MSK procedures: the hip. Br J Radiol. 2016;89(1057):20150408.

4. Hankey S, McCall IW, Park WM, O’Connor BT. Technical problems in arthrography of the painful hip arthroplasty. Clin Radiol. 1979;30(6):653-656.

5. Yasar E, Singh JR, Hill J, Akuthota V. Image-guided injections of the hip. J Nov Physiother Phys Rehabil. 2014;1(2):39-48. 

6. Aly R, Maibach HI. Aerobic microbial flora of intertrigenous skin. Appl Environ Microbiol. 1977;33(1):97-100.

7. Fazel R, Krumholz HM, Wang W, et al. Exposure to low-dose ionizing radiation from medical imaging procedures. N Engl J Med. 2009;361(9):849-857.

8. Masoud MA, Said HG. Intra-articular hip injection using anatomic surface landmarks. Arthrosc Tech. 2013;2(2):e147-e149.

9. Ofori K, Gordon SW, Akrobortu E, Ampene AA, Darko EO. Estimation of adult patient doses for selected x-ray diagnostic examinations. J Radiat Res Appl Sci. 2014;7(4):459-462.

Author and Disclosure Information

Devon Shuchman is a Clinical Instructor in the Department of Physical Medicine and Rehabilitation; Stephanie Moser is a Research Area Specialty Lead, and Matthew Wixson is a Clinical Instructor, both in the Department of Anesthesiology; David Jamadar is a Professor in the Department of Radiology; all at Michigan Medicine in Ann Arbor. Devon Shuchman is a Pain Physician, and David Jamadar is a Physician in the Department of Radiology, both at the VA Ann Arbor Healthcare System.
Correspondence: Devon Shuchman ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue: Federal Practitioner - 36(7)a, pages 300-305

A retrospective comparison study of the anterior-oblique and lateral approach to hip injection procedures suggests that the lateral approach may be a valuable interventional skill for those performing hip injections.

Hip injections are performed as diagnostic and therapeutic interventions across a variety of medical subspecialties, including but not limited to physical medicine and rehabilitation, pain medicine, sports medicine, orthopedic surgery, and radiology. Traditional image-guided intra-articular hip injection commonly uses an anterior-oblique approach from a starting point on the anterior groin, traversing soft tissue anterior to the femoral neck, to the target needle placement at the femoral head-neck junction.

In fluoroscopic procedures, a coaxial technique is used for safe and precise needle placement: the X-ray beam is angled in line with the projected path of the needle from skin entry point to injection target. The coaxial, en face technique (also called EF, parallel, hub view, down the barrel, or barrel view) appears as a single radiopaque dot over the target injection site.1 This technique minimizes needle redirection to correct the injection path and minimizes disturbance of surrounding tissue on the approach to the intended target.

The noncoaxial technique, as used in the anterior-oblique approach, intentionally directs the needle away from the skin entry point, with the needle barrel traversing the X-ray beam toward the injection target. Clinical challenges to injection with the anterior-oblique (also referred to as anterior) approach include the noncoaxial technique itself, body habitus and pannus, proximity to neurovascular structures, and patient positioning. By understanding the risks and benefits of varied technical approaches to accomplish a clinical goal and outcome, trainees are better able to select the technique most appropriate for a varied patient population.

Common risks to patients for all intra-articular interventions include bleeding, infection, and pain. The risk of damage to nearby structures is often discussed as part of a standard informed consent process, as the femoral vein, artery, and nerve lie in close anatomical proximity to the target injection site. Prior studies examining the risk of complications from intra-articular hip injections commonly conclude that, despite a relatively low-risk profile for skilled interventionalists, avoiding needle placement in the medial 50% of the femoral head on anterior-posterior imaging is recommended.2

The anterior technique is a commonly described approach and can be used for both ultrasound-guided and fluoroscopically guided hip injections.3 Using ultrasound guidance, the anterior technique can be performed with in-plane direct visualization of the needle throughout the procedure. With fluoroscopic guidance, the anterior approach is performed out-of-plane using the noncoaxial technique, requiring the interventionalist to rely on tactile and anatomic guidance to the target injection site. The anterior approach for hip injection is one of few interventions in which the coaxial technique is not used, making instruction less concrete for a learner and potentially more challenging because the needle path is not under direct visualization in plane with the X-ray beam.

Technical guidance and detailed instruction for the lateral approach is infrequently described in fluoroscopic interventional texts. Reference to a lateral approach hip injection was made as early as the 1970s, without detail provided on the technique, with respect to the advantage of visualization of the hip joint for needle placement when hardware is in place.4 A more recent article described a lateral approach technique involving the patient in a decubitus (lateral) supine position, which presents limitations in consistent fluoroscopic imaging and can be a challenging static position for the patient to maintain.5

The retrospective review of anterior-oblique and lateral approach procedures in this study aims to demonstrate that there is no significant difference in radiation exposure, rate of successful intra-articular injection, or complication rate. If proven as a noninferior technique, the lateral approach may be a valuable interventional skill to those performing hip injections. Potential benefits to the patient and provider include options for the provider to access the joint using either technique. Additionally, the approach can be added to the instructional plan for those practitioners providing technical instruction to trainees within their health care system.

 

 

Methods

The institutional review board at the VA Ann Arbor Healthcare System reviewed and granted approval for this study. One of 5 interventional pain physician staff members at the VA Ann Arbor Healthcare System performed fluoroscopically guided hip injections. Interventional pain fellows under the direct supervision of board-certified physicians performed the procedures for the study cases. Supervising physicians included both physiatrists and anesthesiologists. Images were reviewed and evaluated without corresponding patient biographic data.

For cases using the lateral approach, the patients were positioned supine on the fluoroscopy table. In anterior-posterior and lateral views, trajectory lines are drawn using a long metal marking rod held adjacent to the patient. With pulsed low-dose fluoroscopy, transverse lines are drawn to identify midpoint of the femoral head in lateral view (Figure 1A, x-axis) and the most direct line from skin to lateral femoral head neck junction joint target (Figure 1B, z-axis). Also confirmed in lateral view, the z-axis marked line drawn on the skin is used to confirm that this transverse plane crosses the overlapping femoral heads (Figure 1A, y-axis).



The cross-section of these transverse and coronal plane lines identifies the starting point for the most direct approach from skin to injection target at femoral head-neck junction. Using the coaxial technique in the lateral view, the needle is introduced and advanced using intermittent fluoroscopic images to the lateral joint target. Continuing in this view, the interventionalist can ensure that advancing the needle to the osseous endpoint will place the tip at the midpoint of the femoral head at the target on the lateral surface, avoiding inadvertent advance of the needle anterior or posterior the femoral head. Final needle placement confirmation is then completed in antero-posterior view (Figure 2A). Contrast enhancement is used to confirm intra-articular spread (Figure 2B).



Cases included in the study were performed over an 8-month period in 2017. Case images recorded in IntelliSpace PACS Radiology software (Andover, MA) were included by creating a list of all cases performed and documented using the major joint injection procedure code. The cases reviewed began with the most recent cases. Two research team members (1 radiologist and 1 interventional pain physician) reviewed the series of saved images for each patient and the associated procedure report. The research team members documented and recorded de-identified study data in Microsoft Excel (Redmond, WA).

Imaging reports, using the saved images and the associated procedure report, were classified for technical approach (anterior, lateral, or inconclusive), success of joint injection as evidenced by appropriate contrast enhancement within the joint space (successful, unsuccessful, or incomplete images), documented use of sedation (yes, no), patient positioning (supine, prone), radiation exposure dose, radiation exposure time, and additional comments, such as “notable pannus” or “hardware present” to annotate significant findings on imaging review.

Statistical Analysis

The distribution of 2 outcomes used to compare rates of complication, radiation dose, and exposure time was checked using the Shapiro-Wilk test. Power analysis determined that inclusion of 30 anterior and 30 lateral cases results in adequate power to detect a 1-point mean difference, assuming a standard deviation of 1.5 in each group. Both radiation dose and exposure time were found to be nonnormally distributed (W = 0.65, P < .001; W = 0.86, P < .001; respectively). Median and interquartile range (IQR) of dose and time in seconds for anterior and lateral approaches were computed. Median differences in radiation dose and exposure time between anterior and lateral approaches were assessed with the k-sample test of equality of medians. All analyses were conducted using Stata Version 14.1 (College Station, TX).

 

 

Results

Between June 2017 and January 2018, 88 cases were reviewed as performed, with 30 anterior and 30 lateral approach cases included in this retrospective comparison study. A total of 28 cases were excluded from the study for using an inconclusive approach, multiple or bilateral procedures, cases without recorded dose and time data, and inadequately saved images to provide meaningful data (Figure 3).

Rate of successful intervention with needle placement confirmed within the articular space on contrast enhancement was not significantly different in the study groups with 96.7% (29 of 30) anterior approach cases reported as successful, 100% (30 of 30) lateral approach cases reported as successful. Overhanging pannus in the viewing area was reported in 5 anterior approach cases and 4 lateral cases. Hardware was noted in 2 lateral approach cases, none in anterior approach cases. Sedation was used for 3 of the anterior approach cases and none of the lateral approach cases.



Patients undergoing the lateral approach received a higher median radiation dose than did those undergoing the anterior approach, but this was not statistically significant (P = .07) (Table). Those undergoing the lateral approach also had a longer median exposure time than did those undergoing the anterior approach, but this also was not statistically significant (P = .3). With no immediate complications reported in any of the studied interventions, there was no difference in complication rates between anterior and lateral approach cases.

 

Discussion

Pain medicine fellows who have previously completed residency in a variety of disciplines, often either anesthesiology or physical medicine and rehabilitation, perform fluoroscopically guided procedures and benefit from increased experience with coaxial technique as this improves needle depth and location awareness. Once mastered, this skill set can be applied to and useful for multiple interventional pain procedures. Similar technical instruction with an emphasis on coaxial technique for hip injections as performed in the anterior or anterolateral approach can be used in both fluoroscopic and ultrasound-guided procedures, including facet injection, transforaminal epidural steroid injection, and myriad other procedures performed to ameliorate pain. There are advantages to pursuing a similar approach with all image-guided procedures. Evaluated in this comparison study is an alternative technique that has potential for risk reduction benefit with reduced proximity to neurovascular structures, which ultimately leads to a safer procedure profile.

Using a lateral approach, the interventionalist determines a starting point, entering the skin at a greater distance from any overlying pannus and the elevated concentration of gram-negative and gram-positive bacteria contained within the inguinal skin.6 A previous study demonstrated improved success of intra-articular needle tip placement without image guidance in patients with body mass index (BMI) < 30.7 A prior study of anterior approach using anatomic landmarks as compared to lateral approach demonstrated the anterior approach pierced or contacted the femoral nerve in 27% of anterior cases and came within 5 mm of 60% of anterior cases.2 Use of image guidance, whether ultrasound, fluoroscopy, or computed tomography (CT) is preferred related to reduced risk of contact with adjacent neurovascular structures. Anatomic surface landmarks have been described as an alternative injection technique, without the use of fluoroscopy for confirmatory initial, intraprocedure, and final placement.8 Palpation of anatomic structures is required for this nonimage-guided technique, and although similar to the described technique in this study, the anatomically guided injection starting point is more lateral than the anterior approach but not in the most lateral position in the transverse plane that is used for this fluoroscopically guided lateral approach study.

Physiologic characteristics of subjects and technical aspects of fluoroscopy both can be factors in radiation dose and exposure times for hip injections. Patient BMI was not included in the data collection, but further study would seek to determine whether BMI is a significant risk for any increased radiation dose and exposure times using lateral approach injections. Use of lateral images for fluoroscopy requires penetration of X-ray beam through more tissue compared with that of anterior-posterior images. Further study of these techniques would benefit from comparing the pulse rate of fluoroscopic images and collimation (or focusing of the radiation beam over a smaller area of tissue) as factors in any observed increase in total radiation dose and exposure times.

Improving the safety profile of this procedure could have a positive impact on the patient population receiving fluoroscopic hip injections, both within the VA Ann Arbor Health System and elsewhere. While the study population was limited to the VA patient population seeking subspecialty nonsurgical joint care at a single tertiary care center, this technique is generalizable and can be used in most patients, as hip pain is a common condition necessitating nonoperative evaluation and treatment.

 

 

Radiation Exposures

Hip injections are performed as diagnostic and therapeutic interventions across a variety of medical subspecialties, including but not limited to those practicing physical medicine and rehabilitation, pain medicine, sports medicine, orthopedic surgery, and radiology. Traditional image-guided intra-articular hip injection commonly uses an anterior-oblique approach from a starting point on the anterior groin traversing soft tissue anterior to the femoral neck to the target needle placement at the femoral head-neck junction.

In fluoroscopic procedures, a coaxial technique for needle placement is used for safe and precise insertion of needles. The X-ray beam is angled in line with the projected path of the needle from the skin entry point to the injection target. With the coaxial, en face technique (also called EF, parallel, hub view, down the barrel, or barrel view), the needle appears as a single radiopaque dot over the target injection site.1 This technique minimizes needle redirection along the injection path and disturbance of the surrounding tissue on the approach to the intended target.

The noncoaxial technique, as used in the anterior-oblique approach, intentionally directs the needle away from the skin entry point, with the needle barrel traversing the X-ray beam toward the injection target. The noncoaxial technique itself is one clinical challenge of the anterior-oblique (also referred to as anterior) approach; others include body habitus and pannus, proximity to neurovascular structures, and patient positioning. By understanding the risks and benefits of varied technical approaches to a clinical goal, trainees are better able to select the technique most appropriate for a varied patient population.

Common risks of all intra-articular interventions include bleeding, infection, and pain. Risk of damage to nearby structures is often discussed as part of a standard informed consent process, as the femoral vein, artery, and nerve lie in close anatomical proximity to the target injection site. Prior studies examining complications of intra-articular hip injections have concluded that, despite a relatively low-risk profile for skilled interventionalists, needle placement in the medial 50% of the femoral head on antero-posterior imaging should be avoided.2

The anterior technique is a commonly described approach and can be used for both ultrasound-guided and fluoroscopically guided hip injections.3 Under ultrasound guidance, the anterior technique can be performed with in-plane direct visualization of the needle throughout the procedure. Under fluoroscopic guidance, the anterior approach is performed out-of-plane using the noncoaxial technique, requiring the interventionalist to rely on tactile and anatomic guidance to the target injection site. The anterior approach for hip injection is one of the few interventions in which the coaxial technique is not used, making instruction less concrete and potentially more challenging for a learner because the needle path is not under direct visualization in plane with the X-ray beam.

Technical guidance and detailed instruction for the lateral approach are infrequently described in fluoroscopic interventional texts. A lateral approach to hip injection was referenced as early as the 1970s, without details of the technique, for its advantage in visualizing the hip joint for needle placement when hardware is in place.4 A more recent article described a lateral approach with the patient in the lateral decubitus position, which limits consistent fluoroscopic imaging and can be a challenging static position for the patient to maintain.5

This retrospective review of anterior-oblique and lateral approach procedures aims to demonstrate that there is no significant difference in radiation exposure, rate of successful intra-articular injection, or complication rate between the techniques. If shown to be noninferior, the lateral approach may be a valuable interventional skill for those performing hip injections, giving the provider the option to access the joint by either route. The approach can also be added to the instructional plan of practitioners providing technical instruction to trainees within their health care system.

Methods

The institutional review board at the VA Ann Arbor Healthcare System reviewed and approved this study. Each fluoroscopically guided hip injection was performed by 1 of 5 interventional pain physician staff members at the VA Ann Arbor Healthcare System or by an interventional pain fellow under the direct supervision of a board-certified physician. Supervising physicians included both physiatrists and anesthesiologists. Images were reviewed and evaluated without corresponding identifying patient data.

For cases using the lateral approach, the patient was positioned supine on the fluoroscopy table. In anterior-posterior and lateral views, trajectory lines were drawn using a long metal marking rod held adjacent to the patient. With pulsed low-dose fluoroscopy, transverse lines were drawn to identify the midpoint of the femoral head in lateral view (Figure 1A, x-axis) and the most direct line from skin to the lateral femoral head-neck junction target (Figure 1B, z-axis). The z-axis line drawn on the skin was then confirmed in lateral view to cross the overlapping femoral heads in the transverse plane (Figure 1A, y-axis).



The intersection of these transverse and coronal plane lines identifies the starting point for the most direct approach from skin to the injection target at the femoral head-neck junction. Using the coaxial technique in the lateral view, the needle is introduced and advanced to the lateral joint target using intermittent fluoroscopic images. Continuing in this view, the interventionalist can ensure that advancing the needle to the osseous endpoint will place the tip at the midpoint of the femoral head on the lateral surface, avoiding inadvertent advancement of the needle anterior or posterior to the femoral head. Final needle placement is then confirmed in the antero-posterior view (Figure 2A), and contrast enhancement is used to confirm intra-articular spread (Figure 2B).



Cases included in the study were performed over an 8-month period in 2017. Cases were identified from a list of all procedures documented in IntelliSpace PACS Radiology software (Andover, MA) under the major joint injection procedure code and were reviewed beginning with the most recent. Two research team members (1 radiologist and 1 interventional pain physician) reviewed the series of saved images and the associated procedure report for each patient. De-identified study data were documented and recorded in Microsoft Excel (Redmond, WA).

Using the saved images and the associated procedure report, each case was classified by technical approach (anterior, lateral, or inconclusive); success of joint injection, as evidenced by appropriate contrast enhancement within the joint space (successful, unsuccessful, or incomplete images); documented use of sedation (yes, no); patient positioning (supine, prone); radiation exposure dose; and radiation exposure time. Additional comments, such as "notable pannus" or "hardware present," annotated significant findings on imaging review.

Statistical Analysis

A power analysis determined that inclusion of 30 anterior and 30 lateral cases would provide adequate power to detect a 1-point mean difference, assuming a standard deviation of 1.5 in each group. The distributions of the 2 continuous outcomes, radiation dose and exposure time, were checked using the Shapiro-Wilk test; both were found to be nonnormally distributed (W = 0.65, P < .001 and W = 0.86, P < .001, respectively). Medians and interquartile ranges (IQRs) of dose and time in seconds were computed for the anterior and lateral approaches, and median differences between approaches were assessed with the k-sample test of equality of medians. All analyses were conducted using Stata Version 14.1 (College Station, TX).
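As a sketch of this statistical workflow, the same checks can be reproduced in Python with SciPy. The dose values below are simulated stand-ins for the study's unpublished case-level data, so the W statistics and P values will not match those reported; Mood's median test is used here as the two-sample case of the test of equality of medians.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical right-skewed dose data (mGy), 30 cases per approach, standing
# in for the study's measurements (which are not published case by case).
anterior = rng.gamma(shape=1.5, scale=1.0, size=30)
lateral = rng.gamma(shape=1.5, scale=2.0, size=30)

# Shapiro-Wilk normality check, as in the paper (small P -> nonnormal).
for name, dose in [("anterior", anterior), ("lateral", lateral)]:
    w, p_norm = stats.shapiro(dose)
    print(f"{name}: W = {w:.2f}, P = {p_norm:.3f}")

# Mood's median test compares the group medians against the grand median.
stat, p, grand_median, table = stats.median_test(anterior, lateral)
print(f"median test: chi2 = {stat:.2f}, P = {p:.3f}")
print(f"median dose: anterior = {np.median(anterior):.1f} mGy, "
      f"lateral = {np.median(lateral):.1f} mGy")
```

Because the test statistic depends only on counts above and below the grand median, it is robust to the skewed dose distributions that made a t test inappropriate here.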

Results

Between June 2017 and January 2018, 88 cases were reviewed, and 30 anterior and 30 lateral approach cases were included in this retrospective comparison. A total of 28 cases were excluded for an inconclusive approach, multiple or bilateral procedures, absence of recorded dose and time data, or inadequately saved images (Figure 3).

The rate of successful intervention, defined as needle placement confirmed within the articular space on contrast enhancement, did not differ significantly between the study groups: 96.7% (29 of 30) of anterior approach cases and 100% (30 of 30) of lateral approach cases were successful. Overhanging pannus in the viewing area was reported in 5 anterior and 4 lateral approach cases. Hardware was noted in 2 lateral approach cases and no anterior approach cases. Sedation was used in 3 anterior approach cases and no lateral approach cases.



Patients undergoing the lateral approach received a higher median radiation dose than those undergoing the anterior approach, but the difference was not statistically significant (P = .07) (Table). Those undergoing the lateral approach also had a longer median exposure time, which likewise was not statistically significant (P = .3). With no immediate complications reported in any of the studied interventions, there was no difference in complication rates between the anterior and lateral approach cases.

Discussion

Pain medicine fellows, who have previously completed residency in a variety of disciplines (often anesthesiology or physical medicine and rehabilitation), perform fluoroscopically guided procedures and benefit from increased experience with the coaxial technique, which improves awareness of needle depth and location. Once mastered, this skill set applies to multiple interventional pain procedures, including facet injection, transforaminal epidural steroid injection, and myriad other procedures performed to ameliorate pain, and there are advantages to pursuing a similar coaxial approach across image-guided procedures, whether fluoroscopic or ultrasound guided. This comparison study evaluates an alternative technique with potential for risk reduction through reduced proximity to neurovascular structures, ultimately offering a safer procedure profile.

Using a lateral approach, the interventionalist selects a starting point that enters the skin at a greater distance from any overlying pannus and from the elevated concentration of gram-negative and gram-positive bacteria in the inguinal skin.6 A previous study demonstrated improved success of intra-articular needle tip placement without image guidance in patients with body mass index (BMI) < 30.7 A prior comparison of the landmark-based anterior approach with the lateral approach found that the anterior approach pierced or contacted the femoral nerve in 27% of cases and came within 5 mm of the nerve in 60% of cases.2 Image guidance, whether ultrasound, fluoroscopy, or computed tomography (CT), is preferred because it reduces the risk of contact with adjacent neurovascular structures. Anatomic surface landmarks have been described as an alternative injection technique that does not use fluoroscopy for initial, intraprocedure, or final placement confirmation.8 Although that palpation-guided technique is similar to the one described in this study, its starting point is lateral to the anterior approach but not at the most lateral position in the transverse plane used for this fluoroscopically guided lateral approach.

Physiologic characteristics of subjects and technical aspects of fluoroscopy can both influence radiation dose and exposure time for hip injections. Patient BMI was not included in the data collection; further study would seek to determine whether BMI significantly increases radiation dose and exposure time with lateral approach injections. Lateral fluoroscopic images require the X-ray beam to penetrate more tissue than anterior-posterior images. Further study of these techniques would benefit from comparing the pulse rate of fluoroscopic images and collimation (focusing of the radiation beam over a smaller area of tissue) as factors in any observed increase in total radiation dose and exposure time.

Improving the safety profile of this procedure could have a positive impact on the patient population receiving fluoroscopic hip injections, both within the VA Ann Arbor Healthcare System and elsewhere. While the study population was limited to VA patients seeking subspecialty nonsurgical joint care at a single tertiary care center, the technique is generalizable and can be used in most patients, as hip pain is a common condition necessitating nonoperative evaluation and treatment.

Radiation Exposures

As our analysis demonstrates, the radiation dose for each group was consistent with low (≤ 3 mSv) to moderate (> 3-20 mSv) annual effective doses in the general population.7 The anterior and lateral median radiation doses of 1 mGy and 3 mGy, respectively, are comparable to the standard exposure for radiographs of the pelvis (1.31 mGy).9 It is therefore reasonable to consider a lateral approach for hip injection, given the benefits of a direct coaxial approach and avoidance of needle entry through skin with higher bacterial concentration.

The lateral approach did have a higher radiation dose and longer exposure time, although neither was statistically significantly greater than with the anterior approach, and the differences were not clinically significant. One potential explanation is the increased tissue the lateral technique must penetrate, which can be mitigated with collimation and other fluoroscopic image adjustments. Additionally, as trainees progress in competency, fewer images should need to be obtained.7 We hypothesize that as familiarity and comfort with this technique increase, the number of images necessary for successful injection will decrease, reducing radiation dose and exposure time. We would expect that in the hands of a board-certified interventionalist, radiation dose and exposure time would be significantly decreased compared with our current dataset; this is an area of planned further study. In the existing dataset, the majority of procedures were performed with trainees, and the documentation was inadequate to compare dose over time and procedural experience for individual physicians.

A notable strength of this study is the direct comparison of the anterior and lateral approaches with regard to radiation dose and exposure time, which we have not seen described in the literature. A detailed description of the technique may encourage utilization by other providers. Data were collected from multiple providers, as both board-certified pain physicians and board-eligible interventional pain fellows performed the procedures. This variability increases the generalizability of the findings, with a range of providers, disciplines, years of experience, and types of training represented.

Limitations

Limitations include the retrospective nature of the study and the relatively small sample size. Even so, no statistically significant differences were observed in radiation dose or fluoroscopy exposure time, suggesting that the lateral approach is, at minimum, a noninferior technique. Combined with the improved safety profile, this technique is a viable alternative to the traditional anterior-oblique approach. Further study should be performed, such as a prospective, randomized controlled trial comparing the 2 techniques and following pain scores and functional ability after the procedure.

Conclusion

Given the decreased procedural risk related to the proximity of neurovascular structures and the coaxial technique for needle advancement, the lateral approach to hip injection should be considered by those in any discipline performing fluoroscopically guided procedures. The lateral technique may be particularly useful as a noninferior alternative to the traditional anterior method in technically challenging cases and when skin entry at the anterior groin is suboptimal.

References

1. Cianfoni A, Boulter DJ, Rumboldt Z, Sapton T, Bonaldi G. Guidelines to imaging landmarks for interventional spine procedures: fluoroscopy and CT anatomy. Neurographics. 2011;1(1):39-48.

2. Leopold SS, Battista V, Oliverio JA. Safety and efficacy of intraarticular hip injection using anatomic landmarks. Clin Orthop Relat Res. 2001;(391):192-197.

3. Dodré E, Lefebvre G, Cockenpot E, Chastanet P, Cotten A. Interventional MSK procedures: the hip. Br J Radiol. 2016;89(1057):20150408.

4. Hankey S, McCall IW, Park WM, O’Connor BT. Technical problems in arthrography of the painful hip arthroplasty. Clin Radiol. 1979;30(6):653-656.

5. Yasar E, Singh JR, Hill J, Akuthota V. Image-guided injections of the hip. J Nov Physiother Phys Rehabil. 2014;1(2):39-48. 

6. Aly R, Maibach HI. Aerobic microbial flora of intertrigenous skin. Appl Environ Microbiol. 1977;33(1):97-100.

7. Fazel R, Krumholz HM, Wang W, et al. Exposure to low-dose ionizing radiation from medical imaging procedures. N Engl J Med. 2009;361(9):849-857.

8. Masoud MA, Said HG. Intra-articular hip injection using anatomic surface landmarks. Arthrosc Tech. 2013;2(2):e147-e149.

9. Ofori K, Gordon SW, Akrobortu E, Ampene AA, Darko EO. Estimation of adult patient doses for selected x-ray diagnostic examinations. J Radiat Res Appl Sci. 2014;7(4):459-462.

Issue
Federal Practitioner - 36(7)a
Page Number
300-305

Nurse Responses to Physiologic Monitor Alarms on a General Pediatric Unit

Alarms from bedside continuous physiologic monitors (CPMs) occur frequently in children’s hospitals and can lead to harm. Recent studies conducted in children’s hospitals have identified alarm rates of up to 152 alarms per patient per day outside of the intensive care unit,1-3 with as few as 1% of alarms being considered clinically important.4 Excessive alarms have been linked to alarm fatigue, when providers become desensitized to and may miss alarms indicating impending patient deterioration. Alarm fatigue has been identified by national patient safety organizations as a patient safety concern given the risk of patient harm.5-7 Despite these concerns, CPMs are routinely used: up to 48% of pediatric patients in nonintensive care units at children’s hospitals are monitored.2

Although the low number of alarms that receive responses has been well-described,8,9 the reasons why clinicians do or do not respond to alarms are unclear. A study conducted in an adult perioperative unit noted prolonged nurse response times for patients with high alarm rates.10 A second study conducted in the pediatric inpatient setting demonstrated a dose-response effect and noted progressively prolonged nurse response times with increased rates of nonactionable alarms.4,11 Findings from another study suggested that underlying factors are highly complex and may be a result of excessive alarms, clinician characteristics, and working conditions (eg, workload and unit noise level).12 Evidence also suggests that humans have difficulty distinguishing the importance of alarms in situations where multiple alarm tones are used, a common scenario in hospitals.13,14 Understanding the factors that contribute to clinicians responding or not responding to CPM alarms will be crucial for addressing this serious patient safety issue.

An enhanced understanding of why nurses respond to alarms in daily practice will inform intervention development and improvement work. In the long term, this information could help improve systems for monitoring pediatric inpatients that are less prone to issues with alarm fatigue. The objective of this qualitative study, which employed structured observation, was to describe how bedside nurses think about and act upon bedside monitor alarms in a general pediatric inpatient unit.

METHODS

Study Design and Setting

This prospective observational study took place on a 48-bed hospital medicine unit at a large, freestanding children’s hospital with >650 beds and >19,000 annual admissions. General Electric (Little Chalfont, United Kingdom) physiologic monitors (models Dash 3000, 4000, and 5000) were used at the time of the study, and nurses could be notified of monitor alarms in four ways: First, an in-room auditory alarm sounds. Second, a light positioned above the door outside of each patient room blinks for alarms at a “warning” or “critical” level (eg, ventricular tachycardia or low oxygen saturation). Third, audible alarms occur at the unit’s central monitoring station. Lastly, another staff member can notify the patient’s nurse via in-person conversation or secure smartphone communication. On the study unit, CPMs are initiated and discontinued through a physician order.

This study was reviewed and approved by the hospital’s institutional review board.

Study Population

We used a purposive recruitment strategy to enroll bedside nurses working on general hospital medicine units, stratified to ensure varying levels of experience and primary shifts (eg, day vs night). We planned to conduct approximately two observations with each participating nurse and to continue collecting data until we could no longer identify new insights in terms of responses to alarms (ie, thematic saturation15). Observations were targeted to cover times of day that coincided with increased rates of distraction. These times included just prior to and after the morning and evening change of shifts (7:00 am and 7:00 pm), during morning rounds (8:00 am-12:00 pm), and heavy admission times (12:00 pm-10:00 pm). After written informed consent, a nurse was eligible for observation during his/her shift if he/she was caring for at least one monitored patient. Enrolled nurses were made aware of the general study topic but were blinded to the study team’s hypotheses.

Data Sources

Prior to data collection, the research team, which consisted of physicians, bedside nurses, research coordinators, and a human factors expert, created a system for categorizing alarm responses. Categories for observed responses were based on the location and corresponding action taken. Initial categories were developed a priori from existing literature and expanded through input from the multidisciplinary study team, then vetted with bedside staff, and finally pilot tested through >4 hours of observations, thus producing the final categories. These categories were entered into a work-sampling program (WorkStudy by Quetech Ltd., Waterloo, Ontario, Canada) to facilitate quick data recording during observations.

The hospital uses a central alarm collection software (BedMasterEx by Anandic Medical Systems, Feuerthalen, Switzerland), which permitted the collection of date, time, trigger (eg, high heart rate), and level (eg, crisis, warning) of the generated CPM alarms. Alarms collected are based on thresholds preset at the bedside monitor. The central collection software does not differentiate between accurate (eg, correctly representing the physiologic state of the patient) and inaccurate alarms.

Observation Procedure

At the time of observation, nurse demographic information (eg, primary shift worked and years working as a nurse) was obtained. A brief preobservation questionnaire was administered to collect patient information (eg, age and diagnosis) and the nurses’ perspectives on the necessity of monitors for each monitored patient in his/her care.

The observer shadowed the nurse for a two-hour block of his/her shift. During this time, nurses were instructed to “think aloud” as they responded to alarms (eg, “I notice the oxygen saturation monitor alarming off, but the probe has fallen off”). A trained observer (AML or KMT) recorded responses verbalized by the nurse and his/her reaction by selecting the appropriate category using the work-sampling software. Data were also collected on the vital sign associated with the alarm (eg, heart rate). Moreover, the observer kept written notes to provide context for electronically recorded data. Alarms that were not verbalized by the nurse were not counted. Similarly, alarms that were noted outside of the room by the nurse were not classified by vital sign unless the nurse confirmed with the bedside monitor. Observers did not adjudicate the accuracy of the alarms. The session was stopped if monitors were discontinued during the observation period. Alarm data generated by the bedside monitor were pulled for each patient room after observations were completed.

Analysis

Descriptive statistics were used to assess the percentage of each nurse response category and each alarm type (eg, heart rate and respiratory rate). The observed alarm rate was calculated by taking the total number of observed alarms (ie, alarms noted by the nurse) divided by the total number of patient-hours observed. The monitor-generated alarm rate was calculated by taking the total number of alarms from the bedside-alarm generated data divided by the number of patient-hours observed.
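The two rate definitions above reduce to simple divisions. A minimal sketch, using the totals reported later in the Results, illustrates the arithmetic:

```python
# Observed response rate: alarms noted by nurses per patient per hour.
observed_responses = 207   # total observed nurse responses (from Results)
patient_hours = 61.3       # total patient-hours observed

observed_rate = observed_responses / patient_hours
print(f"observed response rate: {observed_rate:.1f} per patient-hour")  # 3.4

# The monitor-generated rate uses the same arithmetic, with the alarm count
# pulled from the central alarm software divided by the patient-hours of the
# matching sessions; the reported 8.8 alarms per patient-hour is equivalent to:
monitor_rate_per_hour = 8.8
print(f"{monitor_rate_per_hour * 24:.1f} alarms per patient-day")  # 211.2
```

This confirms that the per-hour and per-day figures quoted in the Results are internally consistent.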

Electronically recorded observations using the work-sampling program were cross-referenced with hand-written field notes to assess for any discrepancies or identify relevant events not captured by the program. Three study team members (AML, KMT, and ACS) reviewed each observation independently and compared field notes to ensure accurate categorization. Discrepancies were referred to the larger study group in cases of uncertainty.

RESULTS

Nine nurses had monitored patients during the available observations and participated in 19 observation sessions, which included 35 monitored patients for a total of 61.3 patient-hours of observation. Nurses were observed for a median of two times each (range 1-4). The median number of monitored patients during a single observation session was two (range 1-3). Observed nurses were female with a median of eight years of experience (range 0.5-26 years). Patients represented a broad range of age categories and were hospitalized with a variety of diagnoses (Table). Nurses, when queried at the start of the observation, felt that monitors were necessary for 29 (82.9%) of the observed patients given either patient condition or unit policy.

A total of 207 observed nurse responses to alarms occurred during the study period, for a rate of 3.4 responses per patient per hour. Of these, 45 (21.7%) were noted outside a patient room, and in 15 (33.3%) of those the nurse chose to go to the room. The other 162 responses occurred while the nurse was present in the room when the alarm activated. Of the 177 in-person nurse responses, 50 were related to a pulse oximetry alarm, 66 to a heart rate alarm, and 61 to a respiratory rate alarm. The most common in-person response to an alarm involved the nurse judging that no intervention was necessary (n = 152, 73.1%). Only 14 in-person responses (7% of total responses) involved a clinical intervention, such as suctioning or titrating supplemental oxygen. Findings are summarized in the Figure, which describes nurse-verbalized reasons to further assess (or not) and whether the nurse then chose to take action (or not) after an alarm.



Alarm data were available for 17 of the 19 observation periods during the study. Technical issues with the central alarm collection software precluded alarm data collection for two of the observation sessions. A total of 483 alarms were recorded on bedside monitors during those 17 observation periods or 8.8 alarms per patient per hour, which was equivalent to 211.2 alarms per patient-day. A total of 175 observed responses were collected during these 17 observation periods. This number of responses was 36% of the number we would have expected on the basis of the alarm count from the central alarm software.
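The 36% figure in this paragraph is the ratio of responses observed during the 17 sessions with alarm data to the alarms those monitors recorded; a one-line check using the reported counts is:

```python
# Fraction of monitor-recorded alarms that had an observed nurse response.
observed_responses = 175   # responses logged during the 17 sessions with alarm data
recorded_alarms = 483      # alarms captured by the central alarm software

fraction = observed_responses / recorded_alarms
print(f"responses observed for {fraction:.0%} of recorded alarms")  # 36%
```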

There were no patients transferred to the intensive care unit during the observation period. Nurses who chose not to respond to alarms outside the room most often cited the brevity of the alarm or other reassuring contextual details: a family member was in the room to notify them if anything was truly wrong, another member of the medical team was with the patient, or they had recently assessed the patient and thought it likely that the alarm did not require any action. During three observations, the observed nurse cited the presence of family in the patient’s room in the decision not to conduct further assessment in response to the alarm, noting that the parent would be able to notify the nurse if something required attention. On two occasions in which a nurse had multiple monitored patients, the observed nurse noted that if another monitored patient were alarming while she happened to be in a different patient’s room, she would not be able to hear the alarm. Four nurses cited policy as the reason a patient was on monitors (eg, the patient was on respiratory support at night for obstructive sleep apnea).

 

 

DISCUSSION

We characterized responses to physiologic monitor alarms by a group of nurses with a range of experience levels. We found that most nurse responses to alarms in continuously monitored general pediatric patients involved no intervention, and further assessment was often not conducted for alarms that occurred outside of the room if the nurse noted otherwise reassuring clinical context. Observed responses occurred for 36% of alarms during the study period when compared with bedside monitor-alarm generated data. Overall, only 14 clinical interventions were noted among the observed responses. Nurses noted that they felt the monitors were necessary for 82.9% of monitored patients because of the clinical context or because of unit policy.

Our study findings highlight some potential contradictions in the current widespread use of CPMs in general pediatric units and how clinicians respond to them in practice.2 First, while nurses reported that monitors were necessary for most of their patients, participating nurses deemed few alarms clinically actionable and often chose not to further assess when they noted alarms outside of the room. This is in line with findings from prior studies suggesting that clinicians overvalue the contribution of monitoring systems to patient safety.16,17 Second, while this finding occurred in a minority of the observations, the presence of family members at the patient’s bedside was cited by nurses as a rationale for whether they responded to alarms. While family members are capable of identifying safety issues,18 formal systems to engage them in patient safety and physiologic monitoring are lacking. Finally, clinical interventions or responses to the alerts of deteriorating patients, which best represented the original intent of CPMs, were rare and accounted for just 7% of the responses. Further work elucidating why physicians and nurses choose to use CPMs may be helpful to identify interventions to reduce inappropriate monitor use and highlight gaps in frontline staff knowledge about the benefits and risks of CPM use.

Our findings provide a novel understanding of previously observed phenomena, such as long response times or nonresponses in settings with high alarm rates.4,10 As in a prior study conducted in the pediatric setting,11 alarms with an observed response constituted a minority of the total alarms that occurred in our study. This finding has previously been attributed to mental fatigue, caregiver apathy, and desensitization.8 However, even though only a minority of observed responses in our study included an intervention, the nurse had a rationale for why the alarm did or did not need a response. This behavior and the verbalized rationale indicate that, in the nurse’s opinion, not responding to the alarm was clinically appropriate. Study participants also reflected on the difficulties of responding to alarms given the monitor system setup, in which they may not always be capable of hearing alarms for their patients. Without data from nurses regarding the alarms that had no observed response, we can only speculate; however, based on our findings, each of these factors could contribute to nonresponse. Finally, while high numbers of false alarms have been posited as an underlying cause of alarm fatigue, we noted that most nonresponses were reportedly related to other clinical factors. This relationship suggests that, from the nurse’s perspective, a more applicable framework for understanding alarms would be based on clinical actionability4 rather than physiologic accuracy.

In total, our findings suggest that a multifaceted approach will be necessary to improve alarm response rates. Such an approach should couple adjustments to alarm parameters, so that alarms are highly likely to indicate a need for intervention, with educational interventions addressing clinician knowledge of the alarm system and biases about the actionability of alarms. Changes in the monitoring system setup so that nurses can easily be notified when alarms occur may also be indicated, in addition to formally engaging patients and families around response to alarms. Although secondary notification systems (eg, alarms transmitted to individual clinicians’ devices) are one solution, their utilization needs to be balanced against the risk of contributing to existing alarm fatigue and the need to appropriately tailor monitoring thresholds and strategies to patients.

Our study has several limitations. First, nurses may have responded in ways they perceived to be socially desirable, and studies using in-person observers are also prone to a Hawthorne-like effect,19-21 whereby the nurse may have tried to respond more frequently to alarms than usual during observations. However, given that the majority of bedside alarms did not receive a response and a substantial number of responses involved no action, these effects were likely weak. Second, we were unable to assess which alarms accurately reflected the patient’s physiologic status and which did not; we were also unable to link observed alarm responses to monitor-recorded alarms. Third, despite the use of silent observers and an actual, rather than a simulated, clinical setting, by virtue of the data collection method we likely captured a more deliberate thought process (so-called System 2 thinking)22 rather than the subconscious processes that may predominate when nurses respond to alarms in the course of clinical care (System 1 thinking).22 Despite this limitation, our study findings, which reflect a nurse’s in-the-moment thinking, remain relevant to guiding the improvement of monitoring systems and the development of nurse-facing interventions and education. Finally, we studied a small, purposive sample of nurses at a single hospital. Our study sample limits the generalizability of our results and precluded a detailed analysis of the effect of nurse- and patient-level variables.

 

 

CONCLUSION

We found that nurses often deemed that no response was necessary for CPM alarms. Nurses cited contextual factors, including the duration of alarms and the presence of other providers or parents in their decision-making. Few (7%) of the alarm responses in our study included a clinical intervention. The number of observed alarm responses constituted roughly a third of the alarms recorded by bedside CPMs during the study. This result supports concerns about the nurse’s capacity to hear and process all CPM alarms given system limitations and a heavy clinical workload. Subsequent steps should include staff education, reducing overall alarm rates with appropriate monitor use and actionable alarm thresholds, and ensuring that patient alarms are easily recognizable for frontline staff.

Disclosures

The authors have no conflicts of interest to disclose.

Funding

This work was supported by the Place Outcomes Research Award from the Cincinnati Children’s Research Foundation. Dr. Brady is supported by the Agency for Healthcare Research and Quality under Award Number K08HS23827. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

References

1. Schondelmeyer AC, Bonafide CP, Goel VV, et al. The frequency of physiologic monitor alarms in a children’s hospital. J Hosp Med. 2016;11(11):796-798. https://doi.org/10.1002/jhm.2612.
2. Schondelmeyer AC, Brady PW, Goel VV, et al. Physiologic monitor alarm rates at 5 children’s hospitals. J Hosp Med. 2018;13(6):396-398. https://doi.org/10.12788/jhm.2918.
3. Schondelmeyer AC, Brady PW, Sucharew H, et al. The impact of reduced pulse oximetry use on alarm frequency. Hosp Pediatr. In press.
4. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children’s hospital. J Hosp Med. 2015;10(6):345-351. https://doi.org/10.1002/jhm.2331.
5. Siebig S, Kuhls S, Imhoff M, et al. Intensive care unit alarms--how many do we need? Crit Care Med. 2010;38(2):451-456. https://doi.org/10.1097/CCM.0b013e3181cb0888.
6. Sendelbach S, Funk M. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378-386. https://doi.org/10.1097/NCI.0b013e3182a903f9.
7. Sendelbach S. Alarm fatigue. Nurs Clin North Am. 2012;47(3):375-382. https://doi.org/10.1016/j.cnur.2012.05.009.
8. Cvach M. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268-277. https://doi.org/10.2345/0899-8205-46.4.268.
9. Paine CW, Goel VV, Ely E, et al. Systematic review of physiologic monitor alarm characteristics and pragmatic interventions to reduce alarm frequency. J Hosp Med. 2016;11(2):136-144. https://doi.org/10.1002/jhm.2520.
10. Voepel-Lewis T, Parker ML, Burke CN, et al. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351-1358. https://doi.org/10.1016/j.ijnurstu.2013.02.006.
11. Bonafide CP, Localio AR, Holmes JH, et al. Video analysis of factors associated with response time to physiologic monitor alarms in a children’s hospital. JAMA Pediatr. 2017;171(6):524-531. https://doi.org/10.1001/jamapediatrics.2016.5123.
12. Deb S, Claudio D. Alarm fatigue and its influence on staff performance. IIE Trans Healthc Syst Eng. 2015;5(3):183-196. https://doi.org/10.1080/19488300.2015.1062065.
13. Mondor TA, Hurlburt J, Thorne L. Categorizing sounds by pitch: effects of stimulus similarity and response repetition. Percept Psychophys. 2003;65(1):107-114. https://doi.org/10.3758/BF03194787.
14. Mondor TA, Finley GA. The perceived urgency of auditory warning alarms used in the hospital operating room is inappropriate. Can J Anaesth. 2003;50(3):221-228. https://doi.org/10.1007/BF03017788.
15. Fusch PI, Ness LR. Are we there yet? Data saturation in qualitative research. Qual Rep. 2015;20(9):1408-1416.
16. Najafi N, Auerbach A. Use and outcomes of telemetry monitoring on a medicine service. Arch Intern Med. 2012;172(17):1349-1350. https://doi.org/10.1001/archinternmed.2012.3163.
17. Estrada CA, Rosman HS, Prasad NK, et al. Role of telemetry monitoring in the non-intensive care unit. Am J Cardiol. 1995;76(12):960-965. https://doi.org/10.1016/S0002-9149(99)80270-7.
18. Khan A, Furtak SL, Melvin P, et al. Parent-reported errors and adverse events in hospitalized children. JAMA Pediatr. 2016;170(4):e154608. https://doi.org/10.1001/jamapediatrics.2015.4608.
19. Adair JG. The Hawthorne effect: a reconsideration of the methodological artifact. J Appl Psychol. 1984;69(2):334-345. https://doi.org/10.1037/0021-9010.69.2.334.
20. Kovacs-Litman A, Wong K, Shojania KG, et al. Do physicians clean their hands? Insights from a covert observational study. J Hosp Med. 2016;11(12):862-864. https://doi.org/10.1002/jhm.2632.
21. Wolfe F, Michaud K. The Hawthorne effect, sponsored trials, and the overestimation of treatment effectiveness. J Rheumatol. 2010;37(11):2216-2220. https://doi.org/10.3899/jrheum.100497.
22. Kahneman D. Thinking, Fast and Slow. 1st Pbk. ed. New York: Farrar, Straus and Giroux; 2013.

Journal of Hospital Medicine. 2019;14(10):602-606. Published online first June 11, 2019.

Alarms from bedside continuous physiologic monitors (CPMs) occur frequently in children’s hospitals and can lead to harm. Recent studies conducted in children’s hospitals have identified alarm rates of up to 152 alarms per patient per day outside of the intensive care unit,1-3 with as few as 1% of alarms being considered clinically important.4 Excessive alarms have been linked to alarm fatigue, when providers become desensitized to and may miss alarms indicating impending patient deterioration. Alarm fatigue has been identified by national patient safety organizations as a patient safety concern given the risk of patient harm.5-7 Despite these concerns, CPMs are routinely used: up to 48% of pediatric patients in nonintensive care units at children’s hospitals are monitored.2

Although the low number of alarms that receive responses has been well-described,8,9 the reasons why clinicians do or do not respond to alarms are unclear. A study conducted in an adult perioperative unit noted prolonged nurse response times for patients with high alarm rates.10 A second study conducted in the pediatric inpatient setting demonstrated a dose-response effect and noted progressively prolonged nurse response times with increased rates of nonactionable alarms.4,11 Findings from another study suggested that underlying factors are highly complex and may be a result of excessive alarms, clinician characteristics, and working conditions (eg, workload and unit noise level).12 Evidence also suggests that humans have difficulty distinguishing the importance of alarms in situations where multiple alarm tones are used, a common scenario in hospitals.13,14 Understanding the factors that contribute to clinicians responding or not responding to CPM alarms will be crucial for addressing this serious patient safety issue.

An enhanced understanding of why nurses respond to alarms in daily practice will inform intervention development and improvement work. In the long term, this information could help improve systems for monitoring pediatric inpatients that are less prone to issues with alarm fatigue. The objective of this qualitative study, which employed structured observation, was to describe how bedside nurses think about and act upon bedside monitor alarms in a general pediatric inpatient unit.

METHODS

Study Design and Setting

This prospective observational study took place on a 48-bed hospital medicine unit at a large, freestanding children’s hospital with >650 beds and >19,000 annual admissions. General Electric (Little Chalfont, United Kingdom) physiologic monitors (models Dash 3000, 4000, and 5000) were used at the time of the study, and nurses could be notified of monitor alarms in four ways. First, an in-room auditory alarm sounds. Second, a light positioned above the door outside of each patient room blinks for alarms at the “warning” or “critical” level (eg, ventricular tachycardia or low oxygen saturation). Third, audible alarms occur at the unit’s central monitoring station. Lastly, another staff member can notify the patient’s nurse via in-person conversation or secure smartphone communication. On the study unit, CPMs are initiated and discontinued through a physician order.

 

 

This study was reviewed and approved by the hospital’s institutional review board.

Study Population

We used a purposive recruitment strategy to enroll bedside nurses working on general hospital medicine units, stratified to ensure varying levels of experience and primary shifts (eg, day vs night). We planned to conduct approximately two observations with each participating nurse and to continue collecting data until we could no longer identify new insights in terms of responses to alarms (ie, thematic saturation15). Observations were targeted to cover times of day that coincided with increased rates of distraction. These times included just prior to and after the morning and evening change of shifts (7:00 am and 7:00 pm), during morning rounds (8:00 am-12:00 pm), and heavy admission times (12:00 pm-10:00 pm). After written informed consent, a nurse was eligible for observation during his/her shift if he/she was caring for at least one monitored patient. Enrolled nurses were made aware of the general study topic but were blinded to the study team’s hypotheses.

Data Sources

Prior to data collection, the research team, which consisted of physicians, bedside nurses, research coordinators, and a human factors expert, created a system for categorizing alarm responses. Categories for observed responses were based on the location and corresponding action taken. Initial categories were developed a priori from existing literature and expanded through input from the multidisciplinary study team, then vetted with bedside staff, and finally pilot tested through >4 hours of observations, thus producing the final categories. These categories were entered into a work-sampling program (WorkStudy by Quetech Ltd., Waterloo, Ontario, Canada) to facilitate quick data recording during observations.

The hospital uses a central alarm collection software (BedMasterEx by Anandic Medical Systems, Feuerthalen, Switzerland), which permitted the collection of date, time, trigger (eg, high heart rate), and level (eg, crisis, warning) of the generated CPM alarms. Alarms collected are based on thresholds preset at the bedside monitor. The central collection software does not differentiate between accurate (eg, correctly representing the physiologic state of the patient) and inaccurate alarms.
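An export of this kind (date, time, trigger, and level for each alarm) might be represented as records like the following; the field names and example values are illustrative assumptions, not the actual BedMasterEx schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AlarmRecord:
    timestamp: datetime  # date and time the alarm fired
    trigger: str         # eg, "high heart rate" (hypothetical label)
    level: str           # eg, "crisis", "warning"

def count_by_level(records):
    """Tally alarms by severity level for one observation window."""
    counts = {}
    for r in records:
        counts[r.level] = counts.get(r.level, 0) + 1
    return counts

# Hypothetical records for a single session.
records = [
    AlarmRecord(datetime(2019, 1, 1, 8, 0), "high heart rate", "warning"),
    AlarmRecord(datetime(2019, 1, 1, 8, 5), "low SpO2", "crisis"),
    AlarmRecord(datetime(2019, 1, 1, 8, 7), "low SpO2", "warning"),
]
print(count_by_level(records))  # {'warning': 2, 'crisis': 1}
```

Note that, as the text states, records like these reflect only the bedside threshold settings; nothing in the export distinguishes physiologically accurate alarms from inaccurate ones.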

Observation Procedure

At the time of observation, nurse demographic information (eg, primary shift worked and years working as a nurse) was obtained. A brief preobservation questionnaire was administered to collect patient information (eg, age and diagnosis) and the nurses’ perspectives on the necessity of monitors for each monitored patient in his/her care.

The observer shadowed the nurse for a two-hour block of his/her shift. During this time, nurses were instructed to “think aloud” as they responded to alarms (eg, “I notice the oxygen saturation monitor alarming off, but the probe has fallen off”). A trained observer (AML or KMT) recorded responses verbalized by the nurse and his/her reaction by selecting the appropriate category using the work-sampling software. Data were also collected on the vital sign associated with the alarm (eg, heart rate). Moreover, the observer kept written notes to provide context for electronically recorded data. Alarms that were not verbalized by the nurse were not counted. Similarly, alarms that were noted outside of the room by the nurse were not classified by vital sign unless the nurse confirmed with the bedside monitor. Observers did not adjudicate the accuracy of the alarms. The session was stopped if monitors were discontinued during the observation period. Alarm data generated by the bedside monitor were pulled for each patient room after observations were completed.

 

 

Analysis

Descriptive statistics were used to assess the percentage of each nurse response category and each alarm type (eg, heart rate and respiratory rate). The observed alarm rate was calculated by taking the total number of observed alarms (ie, alarms noted by the nurse) divided by the total number of patient-hours observed. The monitor-generated alarm rate was calculated by taking the total number of alarms from the bedside-alarm generated data divided by the number of patient-hours observed.
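Both rates described above reduce to the same calculation, events divided by patient-hours observed; a minimal sketch, using the observed-response figures reported in the Results (207 responses over 61.3 patient-hours):

```python
def rate_per_patient_hour(event_count, patient_hours):
    """Events (observed responses or monitor-generated alarms) per patient-hour."""
    return event_count / patient_hours

# Observed alarm rate: 207 nurse-noted responses over 61.3 patient-hours.
observed_rate = rate_per_patient_hour(207, 61.3)
print(round(observed_rate, 1))  # 3.4
```

The monitor-generated rate uses the same function with the bedside-alarm count from the central software as the numerator.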

Electronically recorded observations using the work-sampling program were cross-referenced with hand-written field notes to assess for any discrepancies or identify relevant events not captured by the program. Three study team members (AML, KMT, and ACS) reviewed each observation independently and compared field notes to ensure accurate categorization. Discrepancies were referred to the larger study group in cases of uncertainty.

RESULTS

Nine nurses had monitored patients during the available observations and participated in 19 observation sessions, which included 35 monitored patients for a total of 61.3 patient-hours of observation. Nurses were observed a median of two times each (range 1-4). The median number of monitored patients during a single observation session was two (range 1-3). All observed nurses were female, with a median of eight years of experience (range 0.5-26 years). Patients represented a broad range of age categories and were hospitalized with a variety of diagnoses (Table). Nurses, when queried at the start of the observation, felt that monitors were necessary for 29 (82.9%) of the observed patients given either patient condition or unit policy.

A total of 207 observed nurse responses to alarms occurred during the study period for a rate of 3.4 responses per patient per hour. Of the total number of responses, 45 (21.7%) were noted outside of a patient room, and in 15 (33.3%) the nurse chose to go to the room. The other 162 were recorded when the nurse was present in the room when the alarm activated. Of the 177 in-person nurse responses, 50 were related to a pulse oximetry alarm, 66 were related to a heart rate alarm, and 61 were related to a respiratory rate alarm. The most common observed in-person response to an alarm involved the nurse judging that no intervention was necessary (n = 152, 73.1%). Only 14 (7% of total responses) observed in-person responses involved a clinical intervention, such as suctioning or titrating supplemental oxygen. Findings are summarized in the Figure and describe nurse-verbalized reasons to further assess (or not) and then whether the nurse chose to take action (or not) after an alarm.



Alarm data were available for 17 of the 19 observation periods during the study. Technical issues with the central alarm collection software precluded alarm data collection for two of the observation sessions. A total of 483 alarms were recorded on bedside monitors during those 17 observation periods or 8.8 alarms per patient per hour, which was equivalent to 211.2 alarms per patient-day. A total of 175 observed responses were collected during these 17 observation periods. This number of responses was 36% of the number we would have expected on the basis of the alarm count from the central alarm software.

There were no patients transferred to the intensive care unit during the observation period. Nurses who chose not to respond to alarms outside the room most often cited the brevity of the alarm or other reassuring contextual details, such as that a family member was in the room to notify them if anything was truly wrong, that another member of the medical team was with the patient, or that they had recently assessed the patient and thought likely the alarm did not require any action. During three observations, the observed nurse cited the presence of family in the patient’s room in their decision not to conduct further assessment in response to the alarm, noting that the parent would be able to notify the nurse if something required attention. On two occasions in which a nurse had multiple monitored patients, the observed nurse noted that if the other monitored patients were alarming and she happened to be in another patient’s room, she would not be able to hear them. Four nurses cited policy as the reason a patient was on monitors (eg, patient was on respiratory support at night for obstructive sleep apnea).

 

 

DISCUSSION

We characterized responses to physiologic monitor alarms by a group of nurses with a range of experience levels. We found that most nurse responses to alarms in continuously monitored general pediatric patients involved no intervention, and further assessment was often not conducted for alarms that occurred outside of the room if the nurse noted otherwise reassuring clinical context. Observed responses occurred for 36% of alarms during the study period when compared with bedside monitor-alarm generated data. Overall, only 14 clinical interventions were noted among the observed responses. Nurses noted that they felt the monitors were necessary for 82.9% of monitored patients because of the clinical context or because of unit policy.

Our study findings highlight some potential contradictions in the current widespread use of CPMs in general pediatric units and how clinicians respond to them in practice.2 First, while nurses reported that monitors were necessary for most of their patients, participating nurses deemed few alarms clinically actionable and often chose not to further assess when they noted alarms outside of the room. This is in line with findings from prior studies suggesting that clinicians overvalue the contribution of monitoring systems to patient safety.16,17 Second, while this finding occurred in a minority of the observations, the presence of family members at the patient’s bedside was cited by nurses as a rationale for whether they responded to alarms. While family members are capable of identifying safety issues,18 formal systems to engage them in patient safety and physiologic monitoring are lacking. Finally, clinical interventions or responses to the alerts of deteriorating patients, which best represented the original intent of CPMs, were rare and accounted for just 7% of the responses. Further work elucidating why physicians and nurses choose to use CPMs may be helpful to identify interventions to reduce inappropriate monitor use and highlight gaps in frontline staff knowledge about the benefits and risks of CPM use.

Our findings provide a novel understanding of previously observed phenomena, such as long response times or nonresponses in settings with high alarm rates.4,10 Similar to that in a prior study conducted in the pediatric setting,11 alarms with an observed response constituted a minority of the total alarms that occurred in our study. This finding has previously been attributed to mental fatigue, caregiver apathy, and desensitization.8 However, even though a minority of observed responses in our study included an intervention, the nurse had a rationale for why the alarm did or did not need a response. This behavior and the verbalized rationale indicate that in his/her opinion, not responding to the alarm was clinically appropriate. Study participants also reflected on the difficulties of responding to alarms given the monitor system setup, in which they may not always be capable of hearing alarms for their patients. Without data from nurses regarding the alarms that had no observed response, we can only speculate; however, based on our findings, each of these factors could contribute to nonresponse. Finally, while high numbers of false alarms have been posited as an underlying cause of alarm fatigue, we noted that a majority of nonresponse was reported to be related to other clinical factors. This relationship suggests that from the nurse’s perspective, a more applicable framework for understanding alarms would be based on clinical actionability4 over physiologic accuracy.

In total, our findings suggest that a multifaceted approach will be necessary to improve alarm response rates. These interventions should include adjusting parameters such that alarms are highly likely to indicate a need for intervention coupled with educational interventions addressing clinician knowledge of the alarm system and bias about the actionability of alarms may improve response rates. Changes in the monitoring system setup such that nurses can easily be notified when alarms occur may also be indicated, in addition to formally engaging patients and families around response to alarms. Although secondary notification systems (eg, alarms transmitted to individual clinician’s devices) are one solution, the utilization of these systems needs to be balanced with the risks of contributing to existing alarm fatigue and the need to appropriately tailor monitoring thresholds and strategies to patients.

Our study has several limitations. First, nurses may have responded in a way they perceive to be socially desirable, and studies using in-person observers are also prone to a Hawthorne-like effect,19-21 where the nurse may have tried to respond more frequently to alarms than usual during observations. However, given that the majority of bedside alarms did not receive a response and a substantial number of responses involved no action, these effects were likely weak. Second, we were unable to assess which alarms were accurately reflecting the patient’s physiologic status and which were not; we were also unable to link observed alarm response to monitor-recorded alarms. Third, despite the use of silent observers and an actual, rather than a simulated, clinical setting, by virtue of the data collection method we likely captured a more deliberate thought process (so-called System 2 thinking)22 rather than the subconscious processes that may predominate when nurses respond to alarms in the course of clinical care (System 1 thinking).22 Despite this limitation, our study findings, which reflect a nurse’s in-the-moment thinking, remain relevant to guiding the improvement of monitoring systems, and the development of nurse-facing interventions and education. Finally, we studied a small, purposive sample of nurses at a single hospital. Our study sample impacts the generalizability of our results and precluded a detailed analysis of the effect of nurse- and patient-level variables.

 

 

CONCLUSION

We found that nurses often deemed that no response was necessary for CPM alarms. Nurses cited contextual factors, including the duration of alarms and the presence of other providers or parents in their decision-making. Few (7%) of the alarm responses in our study included a clinical intervention. The number of observed alarm responses constituted roughly a third of the alarms recorded by bedside CPMs during the study. This result supports concerns about the nurse’s capacity to hear and process all CPM alarms given system limitations and a heavy clinical workload. Subsequent steps should include staff education, reducing overall alarm rates with appropriate monitor use and actionable alarm thresholds, and ensuring that patient alarms are easily recognizable for frontline staff.

Disclosures

The authors have no conflicts of interest to disclose.

Funding

This work was supported by the Place Outcomes Research Award from the Cincinnati Children’s Research Foundation. Dr. Brady is supported by the Agency for Healthcare Research and Quality under Award Number K08HS23827. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

Alarms from bedside continuous physiologic monitors (CPMs) occur frequently in children’s hospitals and can lead to harm. Recent studies conducted in children’s hospitals have identified alarm rates of up to 152 alarms per patient per day outside of the intensive care unit,1-3 with as few as 1% of alarms being considered clinically important.4 Excessive alarms have been linked to alarm fatigue, when providers become desensitized to and may miss alarms indicating impending patient deterioration. Alarm fatigue has been identified by national patient safety organizations as a patient safety concern given the risk of patient harm.5-7 Despite these concerns, CPMs are routinely used: up to 48% of pediatric patients in nonintensive care units at children’s hospitals are monitored.2

Although the low number of alarms that receive responses has been well-described,8,9 the reasons why clinicians do or do not respond to alarms are unclear. A study conducted in an adult perioperative unit noted prolonged nurse response times for patients with high alarm rates.10 A second study conducted in the pediatric inpatient setting demonstrated a dose-response effect and noted progressively prolonged nurse response times with increased rates of nonactionable alarms.4,11 Findings from another study suggested that underlying factors are highly complex and may be a result of excessive alarms, clinician characteristics, and working conditions (eg, workload and unit noise level).12 Evidence also suggests that humans have difficulty distinguishing the importance of alarms in situations where multiple alarm tones are used, a common scenario in hospitals.13,14 Understanding the factors that contribute to clinicians responding or not responding to CPM alarms will be crucial for addressing this serious patient safety issue.

An enhanced understanding of why nurses respond to alarms in daily practice will inform intervention development and improvement work. In the long term, this information could help improve systems for monitoring pediatric inpatients that are less prone to issues with alarm fatigue. The objective of this qualitative study, which employed structured observation, was to describe how bedside nurses think about and act upon bedside monitor alarms in a general pediatric inpatient unit.

METHODS

Study Design and Setting

This prospective observational study took place on a 48-bed hospital medicine unit at a large, freestanding children’s hospital with >650 beds and >19,000 annual admissions. General Electric (Little Chalfont, United Kingdom) physiologic monitors (models Dash 3000, 4000, and 5000) were used at the time of the study, and nurses could be notified of monitor alarms in four ways: first, an in-room auditory alarm sounds; second, a light positioned above the door outside each patient room blinks for alarms at a “warning” or “critical” level (eg, ventricular tachycardia or low oxygen saturation); third, audible alarms occur at the unit’s central monitoring station; and fourth, another staff member can notify the patient’s nurse via in-person conversation or secure smartphone communication. On the study unit, CPMs are initiated and discontinued through a physician order.


This study was reviewed and approved by the hospital’s institutional review board.

Study Population

We used a purposive recruitment strategy to enroll bedside nurses working on general hospital medicine units, stratified to ensure varying levels of experience and primary shifts (eg, day vs night). We planned to conduct approximately two observations with each participating nurse and to continue collecting data until we could no longer identify new insights in terms of responses to alarms (ie, thematic saturation15). Observations were targeted to cover times of day that coincided with increased rates of distraction. These times included just prior to and after the morning and evening change of shifts (7:00 am and 7:00 pm), during morning rounds (8:00 am-12:00 pm), and heavy admission times (12:00 pm-10:00 pm). After written informed consent, a nurse was eligible for observation during his/her shift if he/she was caring for at least one monitored patient. Enrolled nurses were made aware of the general study topic but were blinded to the study team’s hypotheses.

Data Sources

Prior to data collection, the research team, which consisted of physicians, bedside nurses, research coordinators, and a human factors expert, created a system for categorizing alarm responses. Categories for observed responses were based on the location and corresponding action taken. Initial categories were developed a priori from existing literature and expanded through input from the multidisciplinary study team, then vetted with bedside staff, and finally pilot tested through >4 hours of observations, thus producing the final categories. These categories were entered into a work-sampling program (WorkStudy by Quetech Ltd., Waterloo, Ontario, Canada) to facilitate quick data recording during observations.

The hospital uses a central alarm collection software (BedMasterEx by Anandic Medical Systems, Feuerthalen, Switzerland), which permitted the collection of date, time, trigger (eg, high heart rate), and level (eg, crisis, warning) of the generated CPM alarms. Alarms collected are based on thresholds preset at the bedside monitor. The central collection software does not differentiate between accurate (eg, correctly representing the physiologic state of the patient) and inaccurate alarms.

Observation Procedure

At the time of observation, nurse demographic information (eg, primary shift worked and years working as a nurse) was obtained. A brief preobservation questionnaire was administered to collect patient information (eg, age and diagnosis) and the nurses’ perspectives on the necessity of monitors for each monitored patient in his/her care.

The observer shadowed the nurse for a two-hour block of his/her shift. During this time, nurses were instructed to “think aloud” as they responded to alarms (eg, “I notice the oxygen saturation monitor alarming off, but the probe has fallen off”). A trained observer (AML or KMT) recorded responses verbalized by the nurse and his/her reaction by selecting the appropriate category using the work-sampling software. Data were also collected on the vital sign associated with the alarm (eg, heart rate). Moreover, the observer kept written notes to provide context for electronically recorded data. Alarms that were not verbalized by the nurse were not counted. Similarly, alarms that were noted outside of the room by the nurse were not classified by vital sign unless the nurse confirmed with the bedside monitor. Observers did not adjudicate the accuracy of the alarms. The session was stopped if monitors were discontinued during the observation period. Alarm data generated by the bedside monitor were pulled for each patient room after observations were completed.


Analysis

Descriptive statistics were used to assess the percentage of each nurse response category and each alarm type (eg, heart rate and respiratory rate). The observed alarm rate was calculated by dividing the total number of observed alarms (ie, alarms noted by the nurse) by the total number of patient-hours observed. The monitor-generated alarm rate was calculated by dividing the total number of alarms in the bedside monitor-generated data by the number of patient-hours observed.
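Both rates reduce to the same events-per-patient-hour calculation. A minimal sketch, using the observed-response count reported in our Results (207 responses over 61.3 patient-hours):

```python
def rate_per_patient_hour(event_count: float, patient_hours: float) -> float:
    """Events per patient per hour: total events divided by patient-hours observed."""
    return event_count / patient_hours

# Counts reported in this study: 207 observed responses over 61.3 patient-hours.
observed_response_rate = rate_per_patient_hour(207, 61.3)  # ~3.4 responses per patient per hour
```

The same function applies to the monitor-generated rate by substituting the bedside monitor alarm count.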

Electronically recorded observations using the work-sampling program were cross-referenced with hand-written field notes to assess for any discrepancies or identify relevant events not captured by the program. Three study team members (AML, KMT, and ACS) reviewed each observation independently and compared field notes to ensure accurate categorization. Discrepancies were referred to the larger study group in cases of uncertainty.

RESULTS

Nine nurses had monitored patients during the available observation windows and participated in 19 observation sessions, which included 35 monitored patients for a total of 61.3 patient-hours of observation. Nurses were observed a median of two times each (range 1-4). The median number of monitored patients during a single observation session was two (range 1-3). All observed nurses were female, with a median of eight years of experience (range 0.5-26 years). Patients represented a broad range of age categories and were hospitalized with a variety of diagnoses (Table). When queried at the start of the observation, nurses felt that monitors were necessary for 29 (82.9%) of the observed patients given either patient condition or unit policy.

A total of 207 observed nurse responses to alarms occurred during the study period, for a rate of 3.4 responses per patient per hour. Of these, 45 (21.7%) were noted while the nurse was outside a patient room, and in 15 of those 45 (33.3%) the nurse chose to go to the room. The remaining 162 responses were recorded when the nurse was present in the room when the alarm activated. Of the 177 in-person nurse responses (the 162 in-room responses plus the 15 outside-room alarms the nurse went in to assess), 50 were related to a pulse oximetry alarm, 66 to a heart rate alarm, and 61 to a respiratory rate alarm. The most common observed in-person response to an alarm involved the nurse judging that no intervention was necessary (n = 152, 73.1%). Only 14 (7% of total responses) observed in-person responses involved a clinical intervention, such as suctioning or titrating supplemental oxygen. Findings are summarized in the Figure, which describes nurse-verbalized reasons to further assess (or not) and whether the nurse then chose to take action (or not) after an alarm.



Alarm data were available for 17 of the 19 observation periods; technical issues with the central alarm collection software precluded alarm data collection for the remaining two sessions. A total of 483 alarms were recorded on bedside monitors during those 17 observation periods, or 8.8 alarms per patient per hour, equivalent to 211.2 alarms per patient-day. A total of 175 observed responses were collected during these 17 observation periods, 36% of the number we would have expected based on the alarm count from the central alarm software.
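These figures are internally consistent, as a quick arithmetic cross-check shows:

```python
# Monitor-generated alarm rate reported for the 17 periods with available data.
alarms_per_patient_hour = 8.8
alarms_per_patient_day = alarms_per_patient_hour * 24  # 211.2 alarms per patient-day

# Observed responses (175) as a share of monitor-recorded alarms (483).
response_share = 175 / 483  # ~0.36, ie, roughly a third of recorded alarms
```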

There were no patients transferred to the intensive care unit during the observation period. Nurses who chose not to respond to alarms outside the room most often cited the brevity of the alarm or other reassuring contextual details, such as a family member being in the room to notify them if anything was truly wrong, another member of the medical team being with the patient, or a recent assessment that made the alarm unlikely to require action. During three observations, the observed nurse cited the presence of family in the patient’s room in the decision not to conduct further assessment in response to the alarm, noting that the parent would be able to notify the nurse if something required attention. On two occasions in which a nurse had multiple monitored patients, the observed nurse noted that if another monitored patient were alarming while she happened to be in a different patient’s room, she would not be able to hear the alarm. Four nurses cited policy as the reason a patient was on monitors (eg, the patient was on respiratory support at night for obstructive sleep apnea).


DISCUSSION

We characterized responses to physiologic monitor alarms by a group of nurses with a range of experience levels. We found that most nurse responses to alarms in continuously monitored general pediatric patients involved no intervention, and further assessment was often not conducted for alarms that occurred outside of the room if the nurse noted otherwise reassuring clinical context. Observed responses occurred for 36% of alarms during the study period when compared with bedside monitor-alarm generated data. Overall, only 14 clinical interventions were noted among the observed responses. Nurses noted that they felt the monitors were necessary for 82.9% of monitored patients because of the clinical context or because of unit policy.

Our study findings highlight some potential contradictions in the current widespread use of CPMs in general pediatric units and how clinicians respond to them in practice.2 First, while nurses reported that monitors were necessary for most of their patients, participating nurses deemed few alarms clinically actionable and often chose not to further assess when they noted alarms outside of the room. This is in line with findings from prior studies suggesting that clinicians overvalue the contribution of monitoring systems to patient safety.16,17 Second, while this finding occurred in a minority of the observations, the presence of family members at the patient’s bedside was cited by nurses as a rationale for whether they responded to alarms. While family members are capable of identifying safety issues,18 formal systems to engage them in patient safety and physiologic monitoring are lacking. Finally, clinical interventions or responses to the alerts of deteriorating patients, which best represented the original intent of CPMs, were rare and accounted for just 7% of the responses. Further work elucidating why physicians and nurses choose to use CPMs may be helpful to identify interventions to reduce inappropriate monitor use and highlight gaps in frontline staff knowledge about the benefits and risks of CPM use.

Our findings provide a novel understanding of previously observed phenomena, such as long response times or nonresponses in settings with high alarm rates.4,10 Similar to a prior study conducted in the pediatric setting,11 alarms with an observed response constituted a minority of the total alarms that occurred in our study. This finding has previously been attributed to mental fatigue, caregiver apathy, and desensitization.8 However, even though a minority of observed responses in our study included an intervention, the nurse had a rationale for why the alarm did or did not need a response, and this verbalized rationale indicates that, in the nurse’s opinion, not responding to the alarm was clinically appropriate. Study participants also reflected on the difficulties of responding to alarms given the monitor system setup, in which they may not always be able to hear alarms for their patients. Without data from nurses regarding the alarms that had no observed response, we can only speculate; however, based on our findings, each of these factors could contribute to nonresponse. Finally, while high numbers of false alarms have been posited as an underlying cause of alarm fatigue, we noted that a majority of nonresponses were reportedly related to other clinical factors. This suggests that, from the nurse’s perspective, a more applicable framework for understanding alarms would be based on clinical actionability4 rather than physiologic accuracy.

In total, our findings suggest that a multifaceted approach will be necessary to improve alarm response rates. Interventions should include adjusting alarm parameters so that alarms are highly likely to indicate a need for intervention, coupled with educational interventions addressing clinician knowledge of the alarm system and biases about the actionability of alarms. Changes to the monitoring system setup so that nurses can easily be notified when alarms occur may also be indicated, in addition to formally engaging patients and families around response to alarms. Although secondary notification systems (eg, alarms transmitted to individual clinicians’ devices) are one solution, the use of these systems needs to be balanced against the risk of compounding existing alarm fatigue and the need to appropriately tailor monitoring thresholds and strategies to patients.

Our study has several limitations. First, nurses may have responded in a way they perceive to be socially desirable, and studies using in-person observers are also prone to a Hawthorne-like effect,19-21 where the nurse may have tried to respond more frequently to alarms than usual during observations. However, given that the majority of bedside alarms did not receive a response and a substantial number of responses involved no action, these effects were likely weak. Second, we were unable to assess which alarms were accurately reflecting the patient’s physiologic status and which were not; we were also unable to link observed alarm response to monitor-recorded alarms. Third, despite the use of silent observers and an actual, rather than a simulated, clinical setting, by virtue of the data collection method we likely captured a more deliberate thought process (so-called System 2 thinking)22 rather than the subconscious processes that may predominate when nurses respond to alarms in the course of clinical care (System 1 thinking).22 Despite this limitation, our study findings, which reflect a nurse’s in-the-moment thinking, remain relevant to guiding the improvement of monitoring systems, and the development of nurse-facing interventions and education. Finally, we studied a small, purposive sample of nurses at a single hospital. Our study sample impacts the generalizability of our results and precluded a detailed analysis of the effect of nurse- and patient-level variables.


CONCLUSION

We found that nurses often deemed that no response was necessary for CPM alarms. Nurses cited contextual factors, including the duration of alarms and the presence of other providers or parents in their decision-making. Few (7%) of the alarm responses in our study included a clinical intervention. The number of observed alarm responses constituted roughly a third of the alarms recorded by bedside CPMs during the study. This result supports concerns about the nurse’s capacity to hear and process all CPM alarms given system limitations and a heavy clinical workload. Subsequent steps should include staff education, reducing overall alarm rates with appropriate monitor use and actionable alarm thresholds, and ensuring that patient alarms are easily recognizable for frontline staff.

Disclosures

The authors have no conflicts of interest to disclose.

Funding

This work was supported by the Place Outcomes Research Award from the Cincinnati Children’s Research Foundation. Dr. Brady is supported by the Agency for Healthcare Research and Quality under Award Number K08HS23827. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

References

1. Schondelmeyer AC, Bonafide CP, Goel VV, et al. The frequency of physiologic monitor alarms in a children’s hospital. J Hosp Med. 2016;11(11):796-798. https://doi.org/10.1002/jhm.2612.
2. Schondelmeyer AC, Brady PW, Goel VV, et al. Physiologic monitor alarm rates at 5 children’s hospitals. J Hosp Med. 2018;13(6):396-398. https://doi.org/10.12788/jhm.2918.
3. Schondelmeyer AC, Brady PW, Sucharew H, et al. The impact of reduced pulse oximetry use on alarm frequency. Hosp Pediatr. In press.
4. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children’s hospital. J Hosp Med. 2015;10(6):345-351. https://doi.org/10.1002/jhm.2331.
5. Siebig S, Kuhls S, Imhoff M, et al. Intensive care unit alarms--how many do we need? Crit Care Med. 2010;38(2):451-456. https://doi.org/10.1097/CCM.0b013e3181cb0888.
6. Sendelbach S, Funk M. Alarm fatigue: a patient safety concern. AACN Adv Crit Care. 2013;24(4):378-386. https://doi.org/10.1097/NCI.0b013e3182a903f9.
7. Sendelbach S. Alarm fatigue. Nurs Clin North Am. 2012;47(3):375-382. https://doi.org/10.1016/j.cnur.2012.05.009.
8. Cvach M. Monitor alarm fatigue: an integrative review. Biomed Instrum Technol. 2012;46(4):268-277. https://doi.org/10.2345/0899-8205-46.4.268.
9. Paine CW, Goel VV, Ely E, et al. Systematic review of physiologic monitor alarm characteristics and pragmatic interventions to reduce alarm frequency. J Hosp Med. 2016;11(2):136-144. https://doi.org/10.1002/jhm.2520.
10. Voepel-Lewis T, Parker ML, Burke CN, et al. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351-1358. https://doi.org/10.1016/j.ijnurstu.2013.02.006.
11. Bonafide CP, Localio AR, Holmes JH, et al. Video analysis of factors associated with response time to physiologic monitor alarms in a children’s hospital. JAMA Pediatr. 2017;171(6):524-531. https://doi.org/10.1001/jamapediatrics.2016.5123.
12. Deb S, Claudio D. Alarm fatigue and its influence on staff performance. IIE Trans Healthc Syst Eng. 2015;5(3):183-196. https://doi.org/10.1080/19488300.2015.1062065.
13. Mondor TA, Hurlburt J, Thorne L. Categorizing sounds by pitch: effects of stimulus similarity and response repetition. Percept Psychophys. 2003;65(1):107-114. https://doi.org/10.3758/BF03194787.
14. Mondor TA, Finley GA. The perceived urgency of auditory warning alarms used in the hospital operating room is inappropriate. Can J Anaesth. 2003;50(3):221-228. https://doi.org/10.1007/BF03017788.
15. Fusch PI, Ness LR. Are we there yet? Data saturation in qualitative research. Qual Rep. 2015;20(9):1408-1416.
16. Najafi N, Auerbach A. Use and outcomes of telemetry monitoring on a medicine service. Arch Intern Med. 2012;172(17):1349-1350. https://doi.org/10.1001/archinternmed.2012.3163.
17. Estrada CA, Rosman HS, Prasad NK, et al. Role of telemetry monitoring in the non-intensive care unit. Am J Cardiol. 1995;76(12):960-965. https://doi.org/10.1016/S0002-9149(99)80270-7.
18. Khan A, Furtak SL, Melvin P, et al. Parent-reported errors and adverse events in hospitalized children. JAMA Pediatr. 2016;170(4):e154608. https://doi.org/10.1001/jamapediatrics.2015.4608.
19. Adair JG. The Hawthorne effect: a reconsideration of the methodological artifact. J Appl Psychol. 1984;69(2):334-345. https://doi.org/10.1037/0021-9010.69.2.334.
20. Kovacs-Litman A, Wong K, Shojania KG, et al. Do physicians clean their hands? Insights from a covert observational study. J Hosp Med. 2016;11(12):862-864. https://doi.org/10.1002/jhm.2632.
21. Wolfe F, Michaud K. The Hawthorne effect, sponsored trials, and the overestimation of treatment effectiveness. J Rheumatol. 2010;37(11):2216-2220. https://doi.org/10.3899/jrheum.100497.
22. Kahneman D. Thinking, Fast and Slow. 1st Pbk. ed. New York: Farrar, Straus and Giroux; 2013.


Issue
Journal of Hospital Medicine 14(10)
Page Number
602-606. Published online first June 11, 2019

© 2019 Society of Hospital Medicine

Correspondence Location
Amanda Schondelmeyer, MD, MSc; E-mail: [email protected]; Telephone: 513-803-9158

Reducing Unneeded Clinical Variation in Sepsis and Heart Failure Care to Improve Outcomes and Reduce Cost: A Collaborative Engagement with Hospitalists in a Multi-State System


Sepsis and heart failure are two common, costly, and deadly conditions. Among hospitalized Medicare patients, these conditions rank as the first and second most frequent principal diagnoses, accounting for over $33 billion in spending across all payers.1 One-third to one-half of all hospital deaths are estimated to occur in patients with sepsis,2 and heart failure is listed as a contributing factor in over 10% of deaths in the United States.3

Previous research shows that evidence-based care decisions can impact the outcomes for these patients. For example, sepsis patients receiving intravenous fluids, blood cultures, broad-spectrum antibiotics, and lactate measurement within three hours of presentation have lower mortality rates.4 In heart failure, key interventions such as the appropriate use of ACE inhibitors, beta blockers, and referral to disease management programs reduce morbidity and mortality.5

However, rapid dissemination and adoption of evidence-based guidelines remain a challenge.6,7 Policy makers have introduced incentives and penalties to support adoption, with varying levels of success. After four years of Centers for Medicare and Medicaid Services (CMS) penalties for hospitals with excess heart failure readmissions, only 21% performed well enough to avoid a penalty in 2017.8 CMS has been tracking sepsis bundle adherence as a core measure, but the adherence rate in 2018 was just 54%.9 It is clear that new solutions are needed.10

AdventHealth (formerly Adventist Health System) is a growing, faith-based health system with hospitals across nine states. AdventHealth is a national leader in quality, safety, and patient satisfaction but is not immune to the challenges of delivering consistent, evidence-based care across an extensive network. To accelerate system-wide practice change, AdventHealth’s Office of Clinical Excellence (OCE) partnered with QURE Healthcare and Premier, Inc., to implement a physician engagement and care standardization collaboration involving nearly 100 hospitalists at eight facilities across five states.

This paper describes the results of the Adventist QURE Quality Project (AQQP), which used QURE’s validated, simulation-based measurement and feedback approach to engage hospitalists and standardize evidence-based practices for patients with sepsis and heart failure. We documented specific areas of variation identified in the simulations, how those practices changed through serial feedback, and the impact of those changes on real-world outcomes and costs.

METHODS

Setting

AdventHealth is headquartered in Altamonte Springs, Florida, and operates 48 hospitals across nine states. The OCE comprises physician leaders, project managers, and data analysts and sponsored the project from July 2016 through July 2018.

Study Participants

AdventHealth hospitals were invited to enroll their hospitalists in AQQP; eight AdventHealth hospitals across five states, representing 91 physicians and 16 nurse practitioners/physician assistants (APPs), agreed to participate. Participants included both AdventHealth-employed providers and contracted hospitalist groups. Provider participation was voluntary and not tied to financial incentives; however, participants received Continuing Medical Education credit and, if applicable, Maintenance of Certification points through the American Board of Internal Medicine.


Quasi-experimental Design

We used AdventHealth hospitals not participating in AQQP as a quasi-experimental control group, which allowed us to account for concurrent secular effects, such as order sets and other system-wide training, that could also have improved practice and outcomes during the study.

Study Objectives and Approach

The explicit goals of AQQP were to (1) measure how sepsis and heart failure patients are cared for across AdventHealth using Clinical Performance and Value (CPV) case simulations, (2) provide a forum for hospitalists to discuss clinical variation, and (3) reduce unneeded variation to improve quality and reduce cost. QURE developed 12 CPV simulated patient cases (six sepsis and six heart failure cases) with case-specific evidence-based scoring criteria tied to national and AdventHealth evidence-based guidelines. AdventHealth order sets were embedded in the cases and accessible by participants as they cared for their patients.

CPV vignettes are simulated patient cases administered online and have been validated as an accurate and responsive measure of clinical decision-making in both ambulatory11-13 and inpatient settings.14,15 Each case takes 20-30 minutes to complete and simulates a typical clinical encounter: taking the medical history, performing a physical examination, ordering tests, making the diagnosis, implementing initial treatment, and outlining a follow-up plan. Each case has predefined, evidence-based scoring criteria for each care domain. Cases and scoring criteria were reviewed by AdventHealth hospitalist program leaders and physician leaders in the OCE. Provider responses were double-scored by trained physician abstractors. Scores range from 0% to 100%, with higher scores reflecting greater alignment with best-practice recommendations.

In each round of the project, AQQP participants completed two CPV cases, received personalized online feedback reports on their care decisions, and met (at the various sites and via web conference) for a facilitated group discussion on areas of high group variation. The personal feedback reports included the participant’s case score compared to the group average, a list of high-priority personalized improvement opportunities, a summary of the cost of unneeded care items, and links to relevant references. The group discussions focused on six items of high variation. Six total rounds of CPV measurement and feedback were held, one every four months.

At the study’s conclusion, we administered a brief satisfaction survey, asking providers to rate various aspects of the project on a five-point Likert scale.

Data

The study used two primary data sources: (1) care decisions made in the CPV simulated cases and (2) patient-level utilization data from Premier Inc.’s QualityAdvisor™ (QA) data system. QA integrates quality, safety, and financial data from AdventHealth’s electronic medical record, claims data, charge master, and other resources. QA also calculates expected performance for critical measures, including cost per case and length of stay (LOS), based on a proprietary algorithm that uses DRG classification, severity of illness, risk of mortality, and other patient risk factors. We pulled patient-level observed and expected data for AQQP-qualifying physicians, defined as physicians participating in a majority of CPV measurement rounds. Of the 107 hospitalists who participated, six did not participate in enough CPV rounds, and 22 left AdventHealth and could not be included in the patient-level impact analysis. These providers were replaced with 21 new hospitalists who were enrolled in the study and included in the CPV analysis but who did not have patient-level data before AQQP enrollment. Overall, 58 providers met the qualifying criteria for inclusion in the impact analysis. We compared their performance with that of 96 hospitalists at facilities not participating in the project. Comparator facilities were selected based on quantitative measures of size and demographics matching the AQQP facilities, ensuring that both sets of hospitals exhibited similar levels of engagement with AdventHealth quality activities such as quality dashboard performance and order set usage. Baseline patient-level cost and LOS data covered October 2015 to June 2016 and were remeasured annually throughout the project, from July 2016 to June 2018.

 

 

Statistical Analyses

We analyzed three primary outcomes: (1) general CPV-measured improvements in each round (scored against evidence-based scoring criteria); (2) disease-specific CPV improvements over each round; and (3) changes in patient-level outcomes and economic savings among AdventHealth pneumonia/sepsis and heart failure patients from the aforementioned improvements. We used Student’s t-test to analyze continuous outcome variables (including CPV, cost of care, and length of stay data) and Fisher’s exact test for binary outcome data. All statistical analyses were performed using Stata 14.2 (StataCorp LLC, College Station, Texas).
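As an illustrative sketch of the two tests named above, applied with entirely hypothetical numbers (not study data), SciPy implements both directly:

```python
# Hedged illustration with made-up numbers, not the study's results.
from scipy.stats import ttest_ind, fisher_exact

# Continuous outcome (eg, CPV quality scores, in %) compared with Student's t-test.
baseline_scores = [58.1, 62.4, 55.0, 67.3, 60.2, 63.8]
final_scores = [66.2, 70.0, 63.5, 72.1, 68.4, 71.0]
t_stat, p_continuous = ttest_ind(final_scores, baseline_scores)

# Binary outcome (eg, bundle adherence yes/no) compared with Fisher's exact test,
# arranged as a 2x2 table of [adherent, nonadherent] counts per round.
table = [[20, 30],   # baseline: 20 of 50 providers adherent
         [35, 15]]   # final round: 35 of 50 providers adherent
odds_ratio, p_binary = fisher_exact(table)

print(p_continuous < .05, p_binary < .05)
```

The same pattern extends to the cost and LOS comparisons; only the input vectors change.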

RESULTS

Baseline Characteristics and Assessment

A total of 107 AdventHealth hospitalists participated in this study (Appendix Table 1). Of these providers, 78.1% rated the organization’s focus on quality and lowering unnecessary costs as “good” or “excellent,” but 78.8% also reported that variation in care provided by the group was “moderate” to “very high.”

At baseline, we observed high variability in the care of pneumonia patients with sepsis (pneumonia/sepsis) and heart failure patients as measured by the care decisions obtained in the CPV cases. The overall quality score, which is a weighted average across all domains, averaged 61.9% ± 10.5% for the group (Table 1). Disaggregating scores by condition, we found an average overall score of 59.4% ± 10.9% for pneumonia/sepsis and 64.4% ± 9.4% for heart failure. The diagnosis and treatment domains, which require the most clinical judgment, had the lowest average domain scores of 53.4% ± 20.9% and 51.6% ± 15.1%, respectively.
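The overall score is a weighted average across the CPV domains. As a minimal sketch of that formula, with hypothetical domain weights and per-domain scores (the actual weights are not given in the text), the calculation is:

```python
# Hypothetical domain weights and scores; only the weighted-average
# formula itself reflects the scoring approach described in the text.
domain_scores = {"history": 70.0, "physical": 70.0, "workup": 60.0,
                 "diagnosis": 53.4, "treatment": 51.6}
weights = {"history": 0.1, "physical": 0.1, "workup": 0.2,
           "diagnosis": 0.3, "treatment": 0.3}   # weights sum to 1.0

# Overall CPV score: sum of each domain score times its weight.
overall = sum(domain_scores[d] * weights[d] for d in domain_scores)
print(round(overall, 1))
```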

Changes in CPV Scores

To determine the impact of serial measurement and feedback, we compared performance in the first two rounds of the project with the last two rounds. Overall CPV quality scores showed a 4.8%-point absolute improvement (P < .001; Table 1). We saw improvements in all care domains, and the increases were significant in all but workup (P = .470); the largest increase was in diagnostic accuracy (+19.1%; P < .001).

By condition, scores showed similar, statistically significant overall improvements: +4.4%-points for pneumonia/sepsis (P = .001) and +5.5%-points for heart failure (P < .001), driven by increases in the diagnosis and treatment domains. For example, providers increased appropriate identification of heart failure (HF) severity by 21.5%-points (P < .001) and primary diagnosis of pneumonia/sepsis by 3.6%-points (P = .385).

In the treatment domain, which included clinical decisions related to initial management and follow-up care, there were several specific improvements. For HF, performing all the essential treatment elements—prescribing diuretics, ACE inhibitors, and beta blockers for appropriate patients—improved by 13.9%-points (P = .038); ordering VTE prophylaxis increased more than threefold, from 16.6% to 51.0% (P < .001; Table 2). For pneumonia/sepsis patients, absolute adherence to all four elements of the 3-hour sepsis bundle improved by 11.7%-points (P = .034). We also saw decreases in low-value diagnostic workup items ordered for patient cases in which guidelines suggest they are not needed: urinary antigen testing declined by 14.6%-points (P = .001) and sputum cultures by 26.4%-points (P = .004). In addition, outlining an evidence-based discharge plan (including a follow-up visit, patient education, and medication reconciliation) improved, most notably by 24.3%-points for pneumonia/sepsis patients (P < .001).



Adherence to AdventHealth-preferred, evidence-based empiric antibiotic regimens was only 41.1% at baseline, but by the third round, adherence to preferred antibiotics had increased by 37% (P = .047). In the summer of 2017, after the third round, we updated the case scoring criteria to align with new AdventHealth-preferred antibiotic regimens. Not surprisingly, when the new regimens were introduced, CPV-measured adherence regressed to nearly baseline levels (42.4%) as providers adjusted to the new recommendations. By the end of the final round, however, orders for AdventHealth-preferred antibiotics had improved by 12%.

Next, we explored whether the improvements were due to the best performers getting better; they were not. At baseline, the bottom-half performers scored 10.7%-points lower than the top-half performers, but over the course of the study the bottom-half performers showed an absolute improvement nearly twice that of the top half (+5.7%-points vs +2.9%-points; P = .006), indicating that they closed the gap in quality of care provided. In particular, these bottom performers improved the accuracy of their primary diagnosis by 16.7%-points, compared with a 2.0%-point improvement for the top-half performers.

 

 

Patient-Level Impact on LOS and Cost Per Case

We took advantage of the quasi-experimental design, in which only a portion of AdventHealth facilities participated in the project, to compare patient-level results from AQQP-participating physicians against the engagement-matched cohort of hospitalists at nonparticipating AdventHealth facilities. We adjusted for potential differences in patient-level case mix between the two groups by comparing the observed/expected (O/E) LOS and cost per case ratios for pneumonia/sepsis and heart failure patients.
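The case-mix adjustment reduces to a simple ratio. A minimal sketch with hypothetical patient records (in the study, the expected values come from QualityAdvisor’s proprietary risk model, not from anything computable here):

```python
# Hypothetical patient-level records: observed vs. risk-model expected LOS.
patients = [
    {"observed_los": 5.0, "expected_los": 4.2},
    {"observed_los": 3.0, "expected_los": 3.5},
    {"observed_los": 7.0, "expected_los": 5.6},
]

# Group-level O/E ratio: total observed over total expected. A value
# above 1.0 means the group used more days than case mix predicts.
oe_los = (sum(p["observed_los"] for p in patients)
          / sum(p["expected_los"] for p in patients))
print(round(oe_los, 2))
```

The study reports geometric means; this arithmetic version is a simplification that keeps the O/E interpretation intact.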

At baseline, AQQP-hospitalists performed better than the comparator group on geometric LOS (O/E of 1.13 vs 1.22; P = .006) but about the same on cost per case (O/E of 1.16 vs 1.14; P = .390). Throughout the project, as patient volumes and expected per-patient costs rose for both groups, O/E ratios improved among both AQQP and non-AQQP providers.

To separate the contribution of system-wide improvements from AQQP project-specific impacts, we applied the O/E improvement rates seen in the comparator group to the AQQP group’s baseline performance. We then compared that projection to the actual changes seen in the AQQP group throughout the project to determine whether there was any additional benefit from the simulation-based measurement and feedback (Figure).
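Using the O/E LOS ratios reported in this section (1.13 to 1.04 for AQQP; 1.22 to 1.19 for the comparator), the attribution arithmetic can be sketched as follows; the multiplicative secular-trend projection is one plausible reading of the method described above:

```python
# Attribution sketch using the O/E LOS ratios reported in the text.
aqqp_baseline, aqqp_year1 = 1.13, 1.04
comp_baseline, comp_year1 = 1.22, 1.19

# Secular trend: the comparator group's relative year-one improvement.
secular_rate = comp_year1 / comp_baseline        # ~0.975, ie a 2.5% drop

# Where AQQP would have landed on the secular trend alone.
expected_secular = aqqp_baseline * secular_rate  # ~1.10

# Gap between that projection and the observed ratio: the portion of
# the improvement attributable to the AQQP intervention itself.
aqqp_specific = expected_secular - aqqp_year1    # ~0.06
print(round(expected_secular, 2), round(aqqp_specific, 2))
```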



From baseline through year one of the project, the O/E LOS ratio decreased by 8.0% in the AQQP group (1.13 to 1.04; P = .004) but only 2.5% in the comparator group (1.22 to 1.19; P = .480), an absolute difference-in-difference of 0.06 LOS O/E. In year one, these improvements represent a reduction of 892 patient days among patients cared for by AQQP-hospitalists, of which 570 appear to be driven by the AQQP intervention and 322 attributable to secular system-wide improvements (Table 3). In year two, both groups continued to improve, with the comparator group catching up to the AQQP group.

Geometric mean O/E cost per case also decreased for both AQQP (baseline 1.16 vs 0.98 in year two; P < .001) and comparator physicians (baseline 1.14 vs 1.01 in year two; P = .002), an absolute difference-in-difference of 0.05 cost O/E, with the AQQP-hospitalists showing the greater improvement (15% vs 12%; P = .346; Table 3). As in the LOS analysis, the AQQP-specific impact on cost was markedly accelerated in year one, accounting for $1.6 million of the estimated $2.6 million total savings that year. Over the two-year project, these combined improvements drove an estimated $6.2 million in total savings among AQQP-hospitalists: $3.8 million appears to be driven by secular system effects and, based on our quasi-experimental design, an additional $2.4 million is attributable to participation in AQQP.


A Levene’s test for equality of variances on the log-transformed costs and LOS shows that the AQQP reductions in costs and LOS came from reduced variation among providers. Over the project, the AQQP standard deviation in LOS fell by 4.3%, from 3.8 days to 3.6 days (P = .046), and in costs by 27.7%, from $9,391 to $6,793 (P < .001). The non-AQQP group saw a smaller, but still significant, 14.6% reduction in cost variation (from $9,928 to $8,482), but its variation in LOS increased significantly by 20.6%, from 4.1 days to 5.0 days (P < .001).
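A minimal sketch of this variance comparison, using simulated lognormal costs (not study data) and SciPy's Levene test on the log-transformed values:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
# Simulated per-case costs: same typical cost, tighter spread "after".
costs_before = rng.lognormal(mean=9.0, sigma=0.6, size=500)
costs_after = rng.lognormal(mean=9.0, sigma=0.4, size=500)

# Levene's test for equality of variances on the log-transformed costs;
# a small p-value indicates the spread genuinely narrowed.
stat, p = levene(np.log(costs_before), np.log(costs_after))
print(p < .05)
```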

 

 

Provider Satisfaction

At the project’s conclusion, we administered a brief survey asking participants to rate aspects of the project on a five-point Likert scale (with five being the highest); 24 responded. The mean ratings of the project’s relevance to their practice and the overall quality of the material were 4.5 and 4.2, respectively. Providers found the individual feedback reports (3.9) slightly more helpful than the webcast group discussions (3.7; Appendix Table 2).

DISCUSSION

As health systems expand, standardizing clinical practice within a system has the potential to enhance patient care and lower costs. However, achieving these goals is challenging when providers are dispersed across geographically separated sites and clinical decision-making is difficult to measure in a standardized way.16,17 We brought together over 100 physicians and APPs from eight hospitals of varying size in five states to prospectively determine whether we could improve care using a standardized measurement and feedback system. At baseline, care varied dramatically among providers, particularly in diagnostic accuracy and treatment, which directly relate to care quality and outcomes.4 After serial measurement and feedback, we saw reductions in unnecessary testing, more guideline-based treatment decisions, and better discharge planning in the clinical vignettes.

We confirmed that changes in CPV-measured practice translated into lower costs and shorter LOS at the patient level. We further validated the improvements through a quasi-experimental design that compared these changes with those at nonparticipating AdventHealth facilities. We saw greater cost reductions and decreases in LOS in the simulation-based measurement and feedback cohort, with the biggest impact early on. The overall savings to the system attributable specifically to the AQQP approach are estimated at $2.4 million.

One advantage of the online case simulation approach is the ability to bring geographically remote sites together in a shared quality-of-care discussion. The interventions specifically sought to remove barriers between facilities: individual feedback reports allowed providers to see how they compared with providers at other AdventHealth facilities, and webcast results discussions enabled providers across facilities to discuss specific care decisions.

There were several limitations to the study. While the quasi-experimental design allowed us to make informative comparisons between AQQP-participating and nonparticipating facilities, the assignments were not random, and participants were generally from higher performing hospital medicine groups. The determination of secular versus CPV-related improvement is confounded by other system improvement initiatives that may have affected cost and LOS results. This concern is mitigated by the observation that facilities that opted to participate performed better at baseline in risk-adjusted LOS but slightly worse in cost per case, indicating that baseline differences were not dramatic. While both groups improved over time, the QURE measurement and feedback approach led to larger and more rapid gains than those seen in the comparator group. However, we could not exclude the possibility that project participation at the site level was biased toward groups disposed to performance improvement. In addition, our patient-level analysis was limited to the metrics available and did not allow us to directly compare patient-level performance across the many clinically relevant CPV measures that showed improvement. Our inpatient cost per case analysis showed significant savings for the system but did not include all potentially favorable economic impacts, such as lower follow-up care costs for patients, more accurate reimbursement through better coding, or fewer lost days of productivity.

With continued consolidation in healthcare and broader health systems spanning multiple geographies, new tools are needed to support standardized, evidence-based care across sites. This standardization is especially important, both clinically and financially, for high-volume, high-cost diseases such as sepsis and heart failure. However, changing practice cannot happen without collaborative engagement with providers. Standardized patient vignettes offer a systematic way to measure practice and provide feedback that engages providers and is particularly well suited to large systems and common clinical conditions. This real-world analysis shows that an approach that standardizes care and lowers costs may be especially helpful for large systems that need to bring disparate sites together as they move toward value-based payment.

 

 

Disclosures

QURE, LLC, whose intellectual property was used to prepare the cases and collect the data, was contracted by AdventHealth. Otherwise, the study authors report no potential conflicts to disclose.

Funding

This work was funded by a contract between AdventHealth (formerly Adventist Health System) and QURE, LLC.

References

1. Torio C, Moore B. National inpatient hospital costs: the most expensive conditions by payer, 2013. HCUP Statistical Brief #204. Published May 2016. http://www.hcup-us.ahrq.gov/reports/statbriefs/sb204-Most-Expensive-Hospital-Conditions.pdf. Accessed December 2018.
2. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90-92. https://doi.org/10.1001/jama.2014.5804.
3. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics—2016 update: a report from the American Heart Association. Circulation. 2016;133(4):e38-e360. https://doi.org/10.1161/CIR.0000000000000350.
4. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. https://doi.org/10.1056/NEJMoa1703058.
5. Yancy CW, Jessup M, Bozkurt B, et al. 2016 ACC/AHA/HFSA focused update on new pharmacological therapy for heart failure: an update of the 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. Circulation. 2016;134(13):e282-e293. https://doi.org/10.1161/CIR.0000000000000460.
6. Warren JI, McLaughlin M, Bardsley J, et al. The strengths and challenges of implementing EBP in healthcare systems. Worldviews Evid Based Nurs. 2016;13(1):15-24. https://doi.org/10.1111/wvn.12149.
7. Hisham R, Ng CJ, Liew SM, Hamzah N, Ho GJ. Why is there variation in the practice of evidence-based medicine in primary care? A qualitative study. BMJ Open. 2016;6(3):e010565. https://doi.org/10.1136/bmjopen-2015-010565.
8. Boccuti C, Casillas G. Aiming for fewer hospital U-turns: the Medicare Hospital Readmission Reduction Program. The Henry J. Kaiser Family Foundation. https://www.kff.org/medicare/issue-brief/aiming-for-fewer-hospital-u-turns-the-medicare-hospital-readmission-reduction-program/. Accessed March 10, 2017.
9. Venkatesh AK, Slesinger T, Whittle J, et al. Preliminary performance on the new CMS SEP-1 national quality measure: early insights from the emergency quality network (E-QUAL). Ann Emerg Med. 2018;71(1):10-15. https://doi.org/10.1016/j.annemergmed.2017.06.032.
10. Braithwaite J. Changing how we think about healthcare improvement. BMJ. 2018;361:k2014. https://doi.org/10.1136/bmj.k2014.
11. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA. 2000;283(13):1715-1722.
12. Peabody JW, Luck J, Glassman P, et al. Measuring the quality of physician practice by using clinical vignettes: a prospective validation study. Ann Intern Med. 2004;141(10):771-780. https://doi.org/10.7326/0003-4819-141-10-200411160-00008.
13. Peabody JW, Shimkhada R, Quimbo S, Solon O, Javier X, McCulloch C. The impact of performance incentives on health outcomes: results from a cluster randomized controlled trial in the Philippines. Health Policy Plan. 2014;29(5):615-621. https://doi.org/10.1093/heapol/czt047.
14. Weems L, Strong J, Plummer D, et al. A quality collaboration in heart failure and pneumonia inpatient care at Novant Health: standardizing hospitalist practices to improve patient care and system performance. Jt Comm J Qual Patient Saf. 2019;45(3):199-206. https://doi.org/10.1016/j.jcjq.2018.09.005.
15. Bergmann S, Tran M, Robison K, et al. Standardizing hospitalist practice in sepsis and COPD care. BMJ Qual Saf. 2019. https://doi.org/10.1136/bmjqs-2018-008829.
16. Chassin MR, Galvin RW, and the National Roundtable on Health Care Quality. The urgent need to improve health care quality: Institute of Medicine National Roundtable on Health Care Quality. JAMA. 1998;280(11):1000-1005. https://doi.org/10.1001/jama.280.11.1000.
17. Gupta DM, Boland RJ, Aron DC. The physician’s experience of changing clinical practice: a struggle to unlearn. Implement Sci. 2017;12(1):28. https://doi.org/10.1186/s13012-017-0555-2.

Journal of Hospital Medicine. 2019;14(9):541-546. Published online first June 11, 2019.

Sepsis and heart failure are two common, costly, and deadly conditions. Among hospitalized Medicare patients, these conditions rank as the first and second most frequent principal diagnoses accounting for over $33 billion in spending across all payers.1 One-third to one-half of all hospital deaths are estimated to occur in patients with sepsis,2 and heart failure is listed as a contributing factor in over 10% of deaths in the United States.3

Previous research shows that evidence-based care decisions can impact the outcomes for these patients. For example, sepsis patients receiving intravenous fluids, blood cultures, broad-spectrum antibiotics, and lactate measurement within three hours of presentation have lower mortality rates.4 In heart failure, key interventions such as the appropriate use of ACE inhibitors, beta blockers, and referral to disease management programs reduce morbidity and mortality.5

However, rapid dissemination and adoption of evidence-based guidelines remain a challenge.6,7 Policy makers have introduced incentives and penalties to support adoption, with varying levels of success. After four years of Centers for Medicare and Medicaid Services (CMS) penalties for hospitals with excess heart failure readmissions, only 21% performed well enough to avoid a penalty in 2017.8 CMS has been tracking sepsis bundle adherence as a core measure, but the rate in 2018 sat at just 54%.9 It is clear that new solutions are needed.10

AdventHealth (formerly Adventist Health System) is a growing, faith-based health system with hospitals across nine states. AdventHealth is a national leader in quality, safety, and patient satisfaction but is not immune to the challenges of delivering consistent, evidence-based care across an extensive network. To accelerate system-wide practice change, AdventHealth’s Office of Clinical Excellence (OCE) partnered with QURE Healthcare and Premier, Inc., to implement a physician engagement and care standardization collaboration involving nearly 100 hospitalists at eight facilities across five states.

This paper describes the results of the Adventist QURE Quality Project (AQQP), which used QURE’s validated, simulation-based measurement and feedback approach to engage hospitalists and standardize evidence-based practices for patients with sepsis and heart failure. We documented specific areas of variation identified in the simulations, how those practices changed through serial feedback, and the impact of those changes on real-world outcomes and costs.

METHODS

Setting

AdventHealth is headquartered in Altamonte Springs, Florida, and operates 48 hospitals across nine states. The OCE comprises physician leaders, project managers, and data analysts, who sponsored the project from July 2016 through July 2018.

Study Participants

AdventHealth hospitals were invited to enroll their hospitalists in AQQP; eight AdventHealth hospitals across five states, representing 91 physicians and 16 nurse practitioners/physician assistants (APPs), agreed to participate. Participants included both AdventHealth-employed providers and contracted hospitalist groups. Provider participation was voluntary and not tied to financial incentives; however, participants received Continuing Medical Education credit and, if applicable, Maintenance of Certification points through the American Board of Internal Medicine.

 

 

Quasi-experimental Design

We used AdventHealth hospitals not participating in AQQP as a quasi-experimental control group, which allowed us to measure the impact of concurrent secular effects, such as order sets and other system-wide training, that could also improve practice and outcomes in our study.

Study Objectives and Approach

The explicit goals of AQQP were to (1) measure how sepsis and heart failure patients are cared for across AdventHealth using Clinical Performance and Value (CPV) case simulations, (2) provide a forum for hospitalists to discuss clinical variation, and (3) reduce unneeded variation to improve quality and reduce cost. QURE developed 12 CPV simulated patient cases (six sepsis and six heart failure cases) with case-specific, evidence-based scoring criteria tied to national and AdventHealth evidence-based guidelines. AdventHealth order sets were embedded in the cases and accessible to participants as they worked through each simulated encounter.

CPV vignettes are simulated patient cases administered online, and have been validated as an accurate and responsive measure of clinical decision-making in both ambulatory11-13 and inpatient settings.14,15 Cases take 20-30 minutes each to complete and simulate a typical clinical encounter: taking the medical history, performing a physical examination, ordering tests, making the diagnosis, implementing initial treatment, and outlining a follow-up plan. Each case has predefined, evidence-based scoring criteria for each care domain. Cases and scoring criteria were reviewed by AdventHealth hospitalist program leaders and physician leaders in OCE. Provider responses were double-scored by trained physician abstractors. Scores range from 0%-100%, with higher scores reflecting greater alignment with best practice recommendations.
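As a minimal illustration of how a predefined, evidence-based scoring rubric of this kind works (the criteria and orders below are hypothetical, not actual CPV items), a domain score can be computed as the share of required care items the provider ordered:

```python
# Hypothetical scoring sketch: a case's evidence-based criteria checked
# against a provider's recorded orders; score = fraction met, 0%-100%.
criteria = {"blood_cultures", "lactate", "broad_spectrum_abx", "iv_fluids"}
provider_orders = {"blood_cultures", "iv_fluids", "cbc", "broad_spectrum_abx"}

# Set intersection finds which required items were actually ordered;
# extra orders (eg, "cbc") neither help nor hurt in this simple sketch.
score = 100 * len(criteria & provider_orders) / len(criteria)
print(score)  # 3 of 4 criteria met
```

A higher score reflects greater alignment with the best-practice recommendations, mirroring the 0%-100% scale described above.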


We confirmed that changes in CPV-measured practice translated into lower costs and shorter LOS at the patient level. We further validated the improvements through a quasi-experimental design that compared these changes to those at nonparticipating AdventHealth facilities. We saw more significant cost reductions and decreases in LOS in the simulation-based measurement and feedback cohort with the biggest impact early on. The overall savings to the system, attributable specifically to the AQQP approach, is estimated to be $2.4 million.

One advantage of the online case simulation approach is the ability to bring geographically remote sites together in a shared quality-of-care discussion. The interventions specifically sought to remove barriers between facilities. For example, individual feedback reports allowed providers to see how they compare with providers at other AdventHealth facilities and webcast results discussions enable providers across facilities to discuss specific care decisions.

There were several limitations to the study. While the quasi-experimental design allowed us to make informative comparisons between AQQP-participating facilities and nonparticipating facilities, the assignments were not random, and participants were generally from higher performing hospital medicine groups. The determination of secular versus CPV-related improvement is confounded by other system improvement initiatives that may have impacted cost and LOS results. This is mitigated by the observation that facilities that opted to participate performed better at baseline in risk-adjusted LOS but slightly worse in cost per case, indicating that baseline differences were not dramatic. While both groups improved over time, the QURE measurement and feedback approach led to larger and more rapid gains than those seen in the comparator group. However, we could not exclude the potential that project participation at the site level was biased to those groups disposed to performance improvement. In addition, our patient-level data analysis was limited to the metrics available and did not allow us to directly compare patient-level performance across the plethora of clinically relevant CPV data that showed improvement. Our inpatient cost per case analysis showed significant savings for the system but did not include all potentially favorable economic impacts such as lower follow-up care costs for patients, more accurate reimbursement through better coding or fewer lost days of productivity.

With continued consolidation in healthcare and broader health systems spanning multiple geographies, new tools are needed to support standardized, evidence-based care across sites. This standardization is especially important, both clinically and financially, for high-volume, high-cost diseases such as sepsis and heart failure. However, changing practice cannot happen without collaborative engagement with providers. Standardized patient vignettes are an opportunity to measure and provide feedback in a systematic way that engages providers and is particularly well-suited to large systems and common clinical conditions. This analysis, from a real-world study, shows that an approach that standardizes care and lowers costs may be particularly helpful for large systems needing to bring disparate sites together as they concurrently move toward value-based payment.

 

 

Disclosures

QURE, LLC, whose intellectual property was used to prepare the cases and collect the data, was contracted by AdventHealth. Otherwise, any of the study authors report no potential conflicts to disclose.

Funding

This work was funded by a contract between AdventHealth (formerly Adventist Health System) and QURE, LLC.

Sepsis and heart failure are two common, costly, and deadly conditions. Among hospitalized Medicare patients, these conditions rank as the first and second most frequent principal diagnoses, accounting for over $33 billion in spending across all payers.1 One-third to one-half of all hospital deaths are estimated to occur in patients with sepsis,2 and heart failure is listed as a contributing factor in over 10% of deaths in the United States.3

Previous research shows that evidence-based care decisions can impact the outcomes for these patients. For example, sepsis patients receiving intravenous fluids, blood cultures, broad-spectrum antibiotics, and lactate measurement within three hours of presentation have lower mortality rates.4 In heart failure, key interventions such as the appropriate use of ACE inhibitors, beta blockers, and referral to disease management programs reduce morbidity and mortality.5

However, rapid dissemination and adoption of evidence-based guidelines remain a challenge.6,7 Policy makers have introduced incentives and penalties to support adoption, with varying levels of success. After four years of Centers for Medicare and Medicaid Services (CMS) penalties for hospitals with excess heart failure readmissions, only 21% performed well enough to avoid a penalty in 2017.8 CMS has been tracking sepsis bundle adherence as a core measure, but the rate in 2018 sat at just 54%.9 It is clear that new solutions are needed.10

AdventHealth (formerly Adventist Health System) is a growing, faith-based health system with hospitals across nine states. AdventHealth is a national leader in quality, safety, and patient satisfaction but is not immune to the challenges of delivering consistent, evidence-based care across an extensive network. To accelerate system-wide practice change, AdventHealth’s Office of Clinical Excellence (OCE) partnered with QURE Healthcare and Premier, Inc., to implement a physician engagement and care standardization collaboration involving nearly 100 hospitalists at eight facilities across five states.

This paper describes the results of the Adventist QURE Quality Project (AQQP), which used QURE’s validated, simulation-based measurement and feedback approach to engage hospitalists and standardize evidence-based practices for patients with sepsis and heart failure. We documented specific areas of variation identified in the simulations, how those practices changed through serial feedback, and the impact of those changes on real-world outcomes and costs.

METHODS

Setting

AdventHealth is headquartered in Altamonte Springs, Florida, and operates 48 hospitals across nine states. The OCE, which sponsored the project from July 2016 through July 2018, comprises physician leaders, project managers, and data analysts.

Study Participants

AdventHealth hospitals were invited to enroll their hospitalists in AQQP; eight AdventHealth hospitals across five states, representing 91 physicians and 16 nurse practitioners/physician assistants (APPs), agreed to participate. Participants included both AdventHealth-employed providers and contracted hospitalist groups. Provider participation was voluntary and not tied to financial incentives; however, participants received Continuing Medical Education credit and, if applicable, Maintenance of Certification points through the American Board of Internal Medicine.

 

 

Quasi-experimental Design

We used AdventHealth hospitals not participating in AQQP as a quasi-experimental control group, which allowed us to measure the impact of concurrent secular effects, such as order sets and other system-wide training, that could also improve practice and outcomes in our study.

Study Objectives and Approach

The explicit goals of AQQP were to (1) measure how sepsis and heart failure patients are cared for across AdventHealth using Clinical Performance and Value (CPV) case simulations, (2) provide a forum for hospitalists to discuss clinical variation, and (3) reduce unneeded variation to improve quality and reduce cost. QURE developed 12 CPV simulated patient cases (six sepsis and six heart failure cases), each with case-specific, evidence-based scoring criteria tied to national and AdventHealth guidelines. AdventHealth order sets were embedded in the cases and accessible to participants as they worked through each case.

CPV vignettes are simulated patient cases administered online, and have been validated as an accurate and responsive measure of clinical decision-making in both ambulatory11-13 and inpatient settings.14,15 Cases take 20-30 minutes each to complete and simulate a typical clinical encounter: taking the medical history, performing a physical examination, ordering tests, making the diagnosis, implementing initial treatment, and outlining a follow-up plan. Each case has predefined, evidence-based scoring criteria for each care domain. Cases and scoring criteria were reviewed by AdventHealth hospitalist program leaders and physician leaders in OCE. Provider responses were double-scored by trained physician abstractors. Scores range from 0%-100%, with higher scores reflecting greater alignment with best practice recommendations.
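The domain-by-domain scoring described above can be sketched in a few lines. The domain names, weights, and criterion counts below are illustrative assumptions for exposition only, not QURE's actual proprietary algorithm.

```python
# Sketch of CPV-style scoring: each care domain is scored against
# predefined evidence-based criteria, then combined into a weighted
# overall quality score (0%-100%). All weights are hypothetical.
DOMAIN_WEIGHTS = {
    "history": 0.15,
    "physical_exam": 0.15,
    "workup": 0.20,
    "diagnosis": 0.25,
    "treatment": 0.25,
}

def domain_score(criteria_met: int, criteria_total: int) -> float:
    """Fraction of evidence-based criteria satisfied in one domain."""
    return criteria_met / criteria_total

def overall_cpv_score(domain_results: dict) -> float:
    """Weighted average across domains, expressed as a percentage."""
    return 100 * sum(
        DOMAIN_WEIGHTS[d] * domain_score(met, total)
        for d, (met, total) in domain_results.items()
    )

# Example: one provider's (criteria met, criteria total) per domain
case = {
    "history": (8, 10),
    "physical_exam": (7, 10),
    "workup": (6, 10),
    "diagnosis": (1, 2),
    "treatment": (5, 10),
}
score = overall_cpv_score(case)  # a mid-range score, consistent with
                                 # the ~60% baseline averages reported
```

Higher scores reflect greater alignment with the scoring criteria; a weighting scheme like this lets judgment-heavy domains (diagnosis, treatment) drive the overall score.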

In each round of the project, AQQP participants completed two CPV cases, received personalized online feedback reports on their care decisions, and met (at the various sites and via web conference) for a facilitated group discussion on areas of high group variation. The personal feedback reports included the participant’s case score compared to the group average, a list of high-priority personalized improvement opportunities, a summary of the cost of unneeded care items, and links to relevant references. The group discussions focused on six items of high variation. Six total rounds of CPV measurement and feedback were held, one every four months.
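One plausible way to select the six "high variation" discussion items, sketched below under assumptions of our own (the item names and adherence rates are invented): for a binary care decision, group variance is p(1 - p), which peaks when adherence is closest to 50%.

```python
# Hypothetical sketch: rank binary care-decision items by group
# variance p*(1-p) and take the six most variable for discussion.
def item_variance(adherence_rate: float) -> float:
    """Variance of a Bernoulli item; maximal at 50% adherence."""
    return adherence_rate * (1 - adherence_rate)

# Invented adherence rates for illustration only
item_adherence = {
    "3-hour sepsis bundle completed": 0.52,
    "VTE prophylaxis ordered": 0.17,
    "preferred empiric antibiotics": 0.41,
    "sputum culture avoided (low value)": 0.74,
    "ACE inhibitor prescribed": 0.88,
    "discharge plan documented": 0.60,
    "lactate measured": 0.95,
}

top_six = sorted(
    item_adherence,
    key=lambda k: item_variance(item_adherence[k]),
    reverse=True,
)[:6]
```

With these invented numbers, near-universal behaviors (e.g., lactate measurement at 95%) drop out, and decisions where the group is genuinely split float to the top of the discussion agenda.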

At the study’s conclusion, we administered a brief satisfaction survey, asking providers to rate various aspects of the project on a five-point Likert scale.

Data

The study used two primary data sources: (1) care decisions made in the CPV simulated cases and (2) patient-level utilization data from Premier Inc.'s QualityAdvisor™ (QA) data system. QA integrates quality, safety, and financial data from AdventHealth's electronic medical record, claims data, charge master, and other resources. QA also calculates expected performance for critical measures, including cost per case and length of stay (LOS), using a proprietary algorithm based on DRG classification, severity of illness, risk of mortality, and other patient risk factors. We pulled patient-level observed and expected data for AQQP qualifying physicians, defined as physicians participating in a majority of CPV measurement rounds. Of the 107 total hospitalists who participated, six did not complete enough CPV rounds, and 22 left AdventHealth and could not be included in the patient-level impact analysis. The departing providers were replaced by 21 new hospitalists who were enrolled in the study and included in the CPV analysis but who lacked patient-level data before AQQP enrollment. Overall, 58 providers met the qualifying criteria for the impact analysis. We compared their performance to a group of 96 hospitalists at facilities not participating in the project. Comparator facilities were selected to match the AQQP facilities on quantitative measures of size and demographics, ensuring that both sets of hospitals exhibited similar levels of engagement with AdventHealth quality activities, such as quality dashboard performance and order set usage. Baseline patient-level cost and LOS data covered October 2015 through June 2016 and were re-measured annually throughout the project, from July 2016 to June 2018.

 

 

Statistical Analyses

We analyzed three primary outcomes: (1) general CPV-measured improvements in each round (scored against evidence-based scoring criteria); (2) disease-specific CPV improvements over each round; and (3) changes in patient-level outcomes and economic savings among AdventHealth pneumonia/sepsis and heart failure patients from the aforementioned improvements. We used Student’s t-test to analyze continuous outcome variables (including CPV, cost of care, and length of stay data) and Fisher’s exact test for binary outcome data. All statistical analyses were performed using Stata 14.2 (StataCorp LLC, College Station, Texas).
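The two tests named above can be sketched with SciPy; the score values and the 2×2 adherence table below are made-up data for illustration, not study results.

```python
# Sketch of the analyses described: Student's t-test for continuous
# outcomes (e.g., CPV scores) and Fisher's exact test for binary
# outcomes (e.g., bundle adherence). All numbers are hypothetical.
from scipy import stats

# Hypothetical CPV overall scores (%) in early vs late rounds
early = [58.2, 61.0, 55.4, 63.1, 59.8, 60.5]
late = [64.0, 66.2, 61.7, 68.3, 63.9, 65.1]
t_stat, p_value = stats.ttest_ind(early, late)

# Hypothetical 2x2 table: [adherent, non-adherent] in early vs late rounds
table = [[30, 70],
         [45, 55]]
odds_ratio, p_fisher = stats.fisher_exact(table)
```

A negative t statistic here simply reflects that the early-round mean is below the late-round mean; Fisher's exact test is preferred over chi-square for the binary items because some cells in round-level tables can be small.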

RESULTS

Baseline Characteristics and Assessment

A total of 107 AdventHealth hospitalists participated in this study (Appendix Table 1). Of these providers, 78.1% rated the organization's focus on quality and lowering unnecessary costs as either "good" or "excellent," but 78.8% also reported that variation in care provided by the group was "moderate" to "very high."

At baseline, we observed high variability in the care of pneumonia patients with sepsis (pneumonia/sepsis) and heart failure patients as measured by the care decisions obtained in the CPV cases. The overall quality score, which is a weighted average across all domains, averaged 61.9% ± 10.5% for the group (Table 1). Disaggregating scores by condition, we found an average overall score of 59.4% ± 10.9% for pneumonia/sepsis and 64.4% ± 9.4% for heart failure. The diagnosis and treatment domains, which require the most clinical judgment, had the lowest average domain scores of 53.4% ± 20.9% and 51.6% ± 15.1%, respectively.

Changes in CPV Scores

To determine the impact of serial measurement and feedback, we compared performance in the first two rounds of the project with the last two rounds. Overall CPV quality scores showed a 4.8%-point absolute improvement (P < .001; Table 1). We saw improvements in all care domains, and the increases were statistically significant in all but the workup domain (P = .470); the largest increase was in diagnostic accuracy (+19.1%; P < .001).

By condition, scores showed similar, statistically significant overall improvements: +4.4%-points for pneumonia/sepsis (P = .001) and +5.5%-points for heart failure (HF; P < .001), driven by increases in the diagnosis and treatment domains. For example, providers increased appropriate identification of HF severity by 21.5%-points (P < .001) and correct primary diagnosis of pneumonia/sepsis by 3.6%-points (P = .385).

In the treatment domain, which included clinical decisions related to initial management and follow-up care, there were several specific improvements. For HF, we found that performing all the essential treatment elements—prescribing diuretics, ACE inhibitors and beta blockers for appropriate patients—improved by 13.9%-points (P = .038); ordering VTE prophylaxis increased more than threefold, from 16.6% to 51.0% (P < .001; Table 2). For pneumonia/sepsis patients, absolute adherence to all four elements of the 3-hour sepsis bundle improved by 11.7%-points (P = .034). We also saw a decrease in low-value diagnostic workup items for patient cases in which the guidelines suggest they are not needed, such as urinary antigen testing, which declined by 14.6%-points (P = .001) and sputum cultures, which declined 26.4%-points (P = .004). In addition, outlining an evidence-based discharge plan including a follow-up visit, patient education and medication reconciliation improved, especially for pneumonia/sepsis patients by 24.3%-points (P < .001).



Adherence to AdventHealth-preferred, evidence-based empiric antibiotic regimens was only 41.1% at baseline, but by the third round, adherence to preferred antibiotics had increased by 37% (P = .047). In the summer of 2017, after the third round, we updated the scoring criteria for the cases to align with new AdventHealth-preferred antibiotic regimens. Not surprisingly, when the new antibiotic regimens were introduced, CPV-measured adherence to the new guidelines regressed to nearly baseline levels (42.4%) as providers adjusted to the new recommendations. However, by the end of the final round, orders for AdventHealth-preferred antibiotics had improved by 12%.

Next, we explored whether the improvements seen were simply the best performers getting better; this was not the case. At baseline, the bottom-half performers scored 10.7%-points lower than the top-half performers but, over the course of the study, achieved an absolute improvement nearly twice that of the top half (+5.7%-points vs +2.9%-points; P = .006), indicating that these bottom performers closed the gap in the quality of care provided. In particular, the bottom-half performers improved the accuracy of their primary diagnosis by 16.7%-points, compared with a 2.0%-point improvement for the top-half performers.

 

 

Patient-Level Impact on LOS and Cost Per Case

We took advantage of the quasi-experimental design, in which only a portion of AdventHealth facilities participated in the project, to compare patient-level results from AQQP-participating physicians against the engagement-matched cohort of hospitalists at nonparticipating AdventHealth facilities. We adjusted for potential differences in patient-level case mix between the two groups by comparing the observed/expected (O/E) LOS and cost per case ratios for pneumonia/sepsis and heart failure patients.

At baseline, AQQP-hospitalists performed better than the comparator group on geometric mean LOS (O/E of 1.13 vs 1.22; P = .006) but about the same on cost per case (O/E of 1.16 vs 1.14; P = .390). Throughout the project, as patient volumes and expected per-patient costs rose for both groups, O/E ratios improved among both AQQP and non-AQQP providers.
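The case-mix adjustment behind these comparisons can be sketched as follows; the per-patient LOS values are invented, and the real expected values come from QA's proprietary risk model, not from this toy function.

```python
# Sketch of a case-mix-adjusted O/E comparison: each patient's observed
# LOS is divided by a risk-model expectation, and the per-patient ratios
# are combined with a geometric mean (LOS and cost are right-skewed, so
# the geometric mean is less sensitive to outliers). Data are invented.
import math

def geometric_mean_oe(observed, expected):
    """Geometric mean of per-patient observed/expected ratios."""
    ratios = [o / e for o, e in zip(observed, expected)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

observed_los = [5.0, 3.0, 9.0, 4.0]   # days, per patient
expected_los = [4.2, 3.1, 7.5, 4.0]   # hypothetical risk-model predictions

oe = geometric_mean_oe(observed_los, expected_los)
# oe > 1 means stays run longer than the case mix predicts
```

An O/E of 1.0 is the risk-adjusted benchmark, so improvement shows up as the ratio falling toward (or below) 1.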

To separate the contribution of system-wide improvements from AQQP project-specific impacts, we applied the O/E improvement rates seen in the comparator group to the AQQP group's baseline performance. We then compared that counterfactual to the actual changes seen in the AQQP group throughout the project to determine whether there was any additional benefit from the simulation-based measurement and feedback (Figure).



From baseline through year one of the project, the O/E LOS ratio decreased by 8.0% in the AQQP group (1.13 to 1.04; P = .004) but only 2.5% in the comparator group (1.22 to 1.19; P = .480), an absolute difference-in-difference of 0.06 LOS O/E. In year one, these improvements represent a reduction of 892 patient days among patients cared for by AQQP-hospitalists, of which 570 appear to be driven by the AQQP intervention and 322 attributable to secular system-wide improvements (Table 3). In year two, both groups continued to improve, with the comparator group catching up to the AQQP group.
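The counterfactual attribution above reduces to a short calculation using the year-one LOS O/E figures already quoted: apply the comparator group's relative improvement to the AQQP baseline, then subtract the actual AQQP result.

```python
# Difference-in-difference attribution, using the year-one LOS O/E
# figures from the text (AQQP 1.13 -> 1.04; comparator 1.22 -> 1.19).
aqqp_baseline, aqqp_year1 = 1.13, 1.04
comp_baseline, comp_year1 = 1.22, 1.19

# Relative secular improvement observed in the comparator group (~2.5%)
secular_rate = (comp_baseline - comp_year1) / comp_baseline

# What AQQP performance would have been from secular effects alone
expected_aqqp = aqqp_baseline * (1 - secular_rate)

# Extra improvement attributable to the intervention (~0.06 O/E)
aqqp_specific = expected_aqqp - aqqp_year1
```

This reproduces the 0.06 LOS O/E difference-in-difference reported above; the same logic, applied to patient volumes, splits the 892 saved patient days into intervention-driven and secular portions.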

Geometric mean O/E cost per case also decreased for both AQQP (1.16 at baseline vs 0.98 in year two; P < .001) and comparator physicians (1.14 at baseline vs 1.01 in year two; P = .002), an absolute difference-in-difference of 0.05 cost O/E. The AQQP-hospitalists showed greater improvement (15% vs 12%; P = .346; Table 3). As in the LOS analysis, the AQQP-specific impact on cost was markedly accelerated in year one, accounting for $1.6 million of the estimated $2.6 million total savings that year. Over the two-year project, these combined improvements drove an estimated $6.2 million in total savings among AQQP-hospitalists: $3.8 million appears to be driven by secular system effects and, based upon our quasi-experimental design, an additional $2.4 million is attributable to participation in AQQP.


Levene’s test for equality of variances on the log-transformed costs and LOS indicates that the AQQP reductions in cost and LOS came from reduced variation among providers. Throughout the project, the standard deviation in LOS fell by 4.3%, from 3.8 days to 3.6 days (P = .046), and the standard deviation in cost fell by 27.7%, from $9,391 to $6,793 (P < .001). The non-AQQP group saw a smaller, but still significant, 14.6% reduction in cost variation (from $9,928 to $8,482), while its LOS variation increased significantly by 20.6%, from 4.1 days to 5.0 days (P < .001).
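The variance comparison can be sketched with SciPy's Levene test on log-transformed costs; the cost values below are illustrative, not study data.

```python
# Sketch of the variance analysis described: Levene's test on
# log-transformed costs for two periods. All values are invented.
import math
import statistics
from scipy import stats

costs_baseline = [6200, 9800, 15400, 4100, 22000, 8700, 12500, 5600]
costs_final = [7100, 9200, 10400, 6800, 11900, 8300, 9700, 7600]

# Log-transform first: cost distributions are right-skewed, and the
# study's reductions are framed as tighter spread, not just lower means.
log_baseline = [math.log(c) for c in costs_baseline]
log_final = [math.log(c) for c in costs_final]

stat, p = stats.levene(log_baseline, log_final)
# A small p suggests the provider-to-provider spread differs between
# periods; compare the standard deviations to see the direction.
```

Levene's test is robust to non-normality (it works on absolute deviations from group centers), which makes it a reasonable choice for skewed cost data even after the log transform.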

 

 

Provider Satisfaction

At the project's conclusion, we administered a brief survey asking participants to rate aspects of the project on a five-point Likert scale (with five being the highest); 24 responded. The mean ratings of the project's relevance to their practice and the overall quality of the material were 4.5 and 4.2, respectively. Providers found the individual feedback reports (3.9) slightly more helpful than the webcast group discussions (3.7; Appendix Table 2).

DISCUSSION

As health systems expand, standardizing clinical practice within a system has the potential to enhance patient care and lower costs. However, achieving these goals is challenging when providers are dispersed across geographically separated sites and clinical decision-making is difficult to measure in a standardized way.16,17 We brought together over 100 physicians and APPs from eight hospitals of varying sizes across five states to prospectively determine whether we could improve care using a standardized measurement and feedback system. At baseline, we found that care varied dramatically among providers, in terms of both diagnostic accuracy and treatment, which directly relate to care quality and outcomes.4 After serial measurement and feedback, we saw reductions in unnecessary testing, more guideline-based treatment decisions, and better discharge planning in the clinical vignettes.

We confirmed that changes in CPV-measured practice translated into lower costs and shorter LOS at the patient level. We further validated the improvements through a quasi-experimental design that compared these changes with those at nonparticipating AdventHealth facilities. We saw greater reductions in cost and LOS in the simulation-based measurement and feedback cohort, with the biggest impact early on. The overall savings to the system attributable specifically to the AQQP approach are estimated at $2.4 million.

One advantage of the online case simulation approach is the ability to bring geographically remote sites together in a shared quality-of-care discussion. The interventions specifically sought to remove barriers between facilities. For example, individual feedback reports allowed providers to see how they compared with providers at other AdventHealth facilities, and webcast results discussions enabled providers across facilities to discuss specific care decisions.

There were several limitations to the study. While the quasi-experimental design allowed us to make informative comparisons between AQQP-participating and nonparticipating facilities, the assignments were not random, and participants were generally from higher-performing hospital medicine groups. The determination of secular versus CPV-related improvement is confounded by other system improvement initiatives that may have impacted cost and LOS results. This concern is mitigated by the observation that facilities that opted to participate performed better at baseline in risk-adjusted LOS but slightly worse in cost per case, indicating that baseline differences were not dramatic. While both groups improved over time, the QURE measurement and feedback approach led to larger and more rapid gains than those seen in the comparator group. However, we could not exclude the possibility that project participation at the site level was biased toward groups disposed to performance improvement. In addition, our patient-level data analysis was limited to the metrics available and did not allow us to directly compare patient-level performance across the full range of clinically relevant CPV measures that showed improvement. Our inpatient cost-per-case analysis showed significant savings for the system but did not include all potentially favorable economic impacts, such as lower follow-up care costs for patients, more accurate reimbursement through better coding, or fewer lost days of productivity.

With continued consolidation in healthcare and broader health systems spanning multiple geographies, new tools are needed to support standardized, evidence-based care across sites. This standardization is especially important, both clinically and financially, for high-volume, high-cost diseases such as sepsis and heart failure. However, changing practice cannot happen without collaborative engagement with providers. Standardized patient vignettes offer an opportunity to measure and provide feedback in a systematic way that engages providers and is particularly well-suited to large systems and common clinical conditions. This analysis, from a real-world study, shows that an approach that standardizes care and lowers costs may be particularly helpful for large systems needing to bring disparate sites together as they move toward value-based payment.

 

 

Disclosures

QURE, LLC, whose intellectual property was used to prepare the cases and collect the data, was contracted by AdventHealth. The study authors report no other potential conflicts to disclose.

Funding

This work was funded by a contract between AdventHealth (formerly Adventist Health System) and QURE, LLC.

References

1. Torio C, Moore B. National inpatient hospital costs: the most expensive conditions by payer, 2013. HCUP Statistical Brief #204. Published May 2016. http://www.hcup-us.ahrq.gov/reports/statbriefs/sb204-Most-Expensive-Hospital-Conditions.pdf. Accessed December 2018.
2. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90-92. https://doi.org/10.1001/jama.2014.5804.
3. Mozzafarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics—2016 update: a report from the American Heart Association. Circulation. 2016;133(4):e38-e360. https://doi.org/10.1161/CIR.0000000000000350.
4. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. https://doi.org/10.1056/NEJMoa1703058.
5. Yancy CW, Jessup M, Bozkurt B, et al. 2016 ACC/AHA/HFSA focused update on new pharmacological therapy for heart failure: an update of the 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. Circulation. 2016;134(13):e282-e293. https://doi.org/10.1161/CIR.0000000000000460.
6. Warren JI, McLaughlin M, Bardsley J, et al. The strengths and challenges of implementing EBP in healthcare systems. Worldviews Evid Based Nurs. 2016;13(1):15-24. https://doi.org/10.1111/wvn.12149.
7. Hisham R, Ng CJ, Liew SM, Hamzah N, Ho GJ. Why is there variation in the practice of evidence-based medicine in primary care? A qualitative study. BMJ Open. 2016;6(3):e010565. https://doi.org/10.1136/bmjopen-2015-010565.
8. Boccuti C, Casillas G. Aiming for fewer hospital U-turns: the Medicare Hospital Readmission Reduction Program. The Henry J. Kaiser Family Foundation. https://www.kff.org/medicare/issue-brief/aiming-for-fewer-hospital-u-turns-the-medicare-hospital-readmission-reduction-program/. Accessed March 10, 2017.
9. Venkatesh AK, Slesinger T, Whittle J, et al. Preliminary performance on the new CMS sepsis-1 national quality measure: early insights from the emergency quality network (E-QUAL). Ann Emerg Med. 2018;71(1):10-15. https://doi.org/10.1016/j.annemergmed.2017.06.032.
10. Braithwaite J. Changing how we think about healthcare improvement. BMJ. 2018;361:k2014. https://doi.org/10.1136/bmj.k2014.
11. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA. 2000;283(13):1715-1722.
12. Peabody JW, Luck J, Glassman P, et al. Measuring the quality of physician practice by using clinical vignettes: a prospective validation study. Ann Intern Med. 2004;141(10):771-780. https://doi.org/10.7326/0003-4819-141-10-200411160-00008.
13. Peabody JW, Shimkhada S, Quimbo S, Solon O, Javier X, McCulloch C. The impact of performance incentives on health outcomes: results from a cluster randomized controlled trial in the Philippines. Health Policy Plan. 2014;29(5):615-621. https://doi.org/10.1093/heapol/czt047.
14. Weems L, Strong J, Plummer D, et al. A quality collaboration in heart failure and pneumonia inpatient care at Novant Health: standardizing hospitalist practices to improve patient care and system performance. Jt Comm J Qual Patient Saf. 2019;45(3):199-206. https://doi.org/10.1016/j.jcjq.2018.09.005.
15. Bergmann S, Tran M, Robison K, et al. Standardizing hospitalist practice in sepsis and COPD care. BMJ Qual Safety. 2019. https://doi.org/10.1136/bmjqs-2018-008829.
16. Chassin MR, Galvin RM. the National Roundtable on Health Care Quality. The urgent need to improve health care quality: Institute of Medicine National Roundtable on Health Care Quality. JAMA. 1998;280(11):1000-1005. https://doi.org/10.1001/jama.280.11.1000.
17. Gupta DM, Boland RJ, Aron DC. The physician’s experience of changing clinical practice: a struggle to unlearn. Implementation Sci. 2017;12(1):28. https://doi.org/10.1186/s13012-017-0555-2.

References

1. Torio C, Moore B. National inpatient hospital costs: the most expensive conditions by payer, 2013. HCUP Statistical Brief #204. Published May 2016 http://www.hcup-us.ahrq.gov/reports/statbriefs/sb204-Most-Expensive-Hospital-Conditions.pdf. Accessed December 2018. 
2. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90-92. https://doi.org/10.1001/jama.2014.5804.
3. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics—2016 update: a report from the American Heart Association. Circulation. 2016;133(4):e38-e360. https://doi.org/10.1161/CIR.0000000000000350.
4. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. https://doi.org/10.1056/NEJMoa1703058.
5. Yancy CW, Jessup M, Bozkurt B, et al. 2016 ACC/AHA/HFSA focused update on new pharmacological therapy for heart failure: an update of the 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. Circulation. 2016;134(13):e282-e293. https://doi.org/10.1161/CIR.0000000000000460.
6. Warren JI, McLaughlin M, Bardsley J, et al. The strengths and challenges of implementing EBP in healthcare systems. Worldviews Evid Based Nurs. 2016;13(1):15-24. https://doi.org/10.1111/wvn.12149.
7. Hisham R, Ng CJ, Liew SM, Hamzah N, Ho GJ. Why is there variation in the practice of evidence-based medicine in primary care? A qualitative study. BMJ Open. 2016;6(3):e010565. https://doi.org/10.1136/bmjopen-2015-010565.
8. Boccuti C, Casillas G. Aiming for fewer hospital U-turns: the Medicare Hospital Readmission Reduction Program. The Henry J. Kaiser Family Foundation. https://www.kff.org/medicare/issue-brief/aiming-for-fewer-hospital-u-turns-the-medicare-hospital-readmission-reduction-program/. Accessed March 10, 2017.
9. Venkatesh AK, Slesinger T, Whittle J, et al. Preliminary performance on the new CMS sepsis-1 national quality measure: early insights from the emergency quality network (E-QUAL). Ann Emerg Med. 2018;71(1):10-15. https://doi.org/10.1016/j.annemergmed.2017.06.032.
10. Braithwaite J. Changing how we think about healthcare improvement. BMJ. 2018;361:k2014. https://doi.org/10.1136/bmj.k2014.
11. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA. 2000;283(13):1715-1722.
12. Peabody JW, Luck J, Glassman P, et al. Measuring the quality of physician practice by using clinical vignettes: a prospective validation study. Ann Intern Med. 2004;141(10):771-780. https://doi.org/10.7326/0003-4819-141-10-200411160-00008.
13. Peabody JW, Shimkhada S, Quimbo S, Solon O, Javier X, McCulloch C. The impact of performance incentives on health outcomes: results from a cluster randomized controlled trial in the Philippines. Health Policy Plan. 2014;29(5):615-621. https://doi.org/10.1093/heapol/czt047.
14. Weems L, Strong J, Plummer D, et al. A quality collaboration in heart failure and pneumonia inpatient care at Novant Health: standardizing hospitalist practices to improve patient care and system performance. Jt Comm J Qual Patient Saf. 2019;45(3):199-206. https://doi.org/10.1016/j.jcjq.2018.09.005.
15. Bergmann S, Tran M, Robison K, et al. Standardizing hospitalist practice in sepsis and COPD care. BMJ Qual Saf. 2019. https://doi.org/10.1136/bmjqs-2018-008829.
16. Chassin MR, Galvin RW, and the National Roundtable on Health Care Quality. The urgent need to improve health care quality: Institute of Medicine National Roundtable on Health Care Quality. JAMA. 1998;280(11):1000-1005. https://doi.org/10.1001/jama.280.11.1000.
17. Gupta DM, Boland RJ, Aron DC. The physician’s experience of changing clinical practice: a struggle to unlearn. Implement Sci. 2017;12(1):28. https://doi.org/10.1186/s13012-017-0555-2.

Issue
Journal of Hospital Medicine 14(9)
Page Number
541-546. Published online first June 11, 2019

© 2019 Society of Hospital Medicine

Correspondence Location
John Peabody, MD PhD; E-mail: [email protected]; Telephone: 415-321-3388.