Utilizing Telesimulation for Advanced Skills Training in Consultation and Handoff Communication: A Post-COVID-19 GME Bootcamp Experience


Events requiring communication among and within teams are vulnerable points in patient care in hospital medicine, with communication failures representing important contributors to adverse events.1-4 Consultations and handoffs are exceptionally common inpatient practices, yet training in these practices is variable across educational and practice domains.5,6 Advanced inpatient communication-skills training requires an effective, feasible, and scalable format. Simulation-based bootcamps can effectively support clinical skills training, often in procedural domains, and have been increasingly utilized for communication skills.7,8 We previously described the development and implementation of an in-person bootcamp for training and feedback in consultation and handoff communication.5,8

As hospitalist leaders grapple with how to systematically support and assess essential clinical skills, the COVID-19 pandemic has presented another impetus to rethink current processes. The rapid shift to virtual activities met the immediate needs of the pandemic, but it also inspired creativity in applying new methodologies to improve teaching strategies and implementation long-term.9,10 One such strategy, telesimulation, offers a way to continue simulation-based training that would otherwise be limited by the need for physical distancing.10 Furthermore, recent calls to study the efficacy of virtual bootcamp structures have acknowledged potential benefits, even outside of the pandemic.11

The primary objective of this feasibility study was to convert our previously described consultation and handoff bootcamp to a telesimulation bootcamp (TBC), preserving rigorous performance evaluation and opportunities for skills-based feedback. We additionally compared evaluation between virtual and in-person formats to understand the utility of telesimulation for bootcamp-based clinical education moving forward.

METHODS

Setting and Participants

The TBC occurred in June 2020 during the University of Chicago institution-wide graduate medical education (GME) orientation; 130 interns entering 13 residency programs participated. The comparison group was 128 interns who underwent the traditional University of Chicago GME orientation “Advanced Communication Skills Bootcamp” (ACSBC) in 2019.5,8

Program Description

To develop TBC, we adapted observed structured clinical experiences (OSCEs) created for ACSBC. Until 2020, ACSBC included three in-person OSCEs: (1) requesting a consultation; (2) conducting handoffs; and (3) acquiring informed consent. COVID-19 necessitated conversion of ACSBC to virtual in June 2020. For this, we selected the consultation and handoff OSCEs, as these skills require near-universal and immediate application in clinical practice. Additionally, they required only trained facilitators (TFs), whereas informed consent required standardized patients. Hospitalist and emergency medicine faculty were recruited as TFs; 7 of 12 TFs were hospitalists. Each OSCE had two parts: an asynchronous, mandatory training module and a clinical simulation. For TBC, we adapted the simulations, previously separate experiences, into a 20-minute combined handoff/consultation telesimulation using the Zoom® video platform. Interns were paired with one TF who served as both standardized consultant (for one mock case) and handoff receiver (for three mock cases, including the consultation case). TFs rated intern performance and provided feedback.

TBC occurred on June 17 and 18, 2020. Interns were emailed asynchronous modules on June 1, and mock cases and instructions on June 12. When TBC began, GME staff proctors oriented interns in the Zoom® platform. Proctors placed TFs into private breakout rooms into which interns rotated through 20-minute timeslots. Faculty received copies of all TBC materials for review (Appendix 1) and underwent Zoom®-based training 1 to 2 weeks prior.

We evaluated TBC using several methods: (1) consultation and handoff skills performance measured by two validated checklists5,8; (2) survey of intern self-reported preparedness to practice consultations and handoffs; and (3) survey of intern satisfaction. Surveys were administered both immediately post bootcamp (Appendix 2) and 8 weeks into internship (Appendix 3). Skills performance checklists were a 12-item consultation checklist5 and 6-item handoff checklist.8 The handoff checklist was modified to remove activities impossible to assess virtually (ie, orienting sign-outs in a shared space) and to add a three-level rating scale of “outstanding,” “satisfactory,” and “needs improvement.” This was done based on feedback from ACSBC to allow more nuanced feedback for interns. A rating of “outstanding” was used to define successful completion of the item (Appendix 1). Interns rated preparedness and satisfaction on 5-point Likert-type items. All measures were compared to the 2019 in-person ACSBC cohort.

Data Analysis

Stata 16.1 (StataCorp LP) was used for analysis. We dichotomized preparedness and satisfaction scores, defining ratings of “4” or “5” as “prepared” or “satisfied.” As previously described,5 we created a composite score averaging both checklist scores for each intern. We normalized this score by rater to a z score (mean, 0; SD, 1) to account for rater differences. “Poor” and “outstanding” performances were defined as z scores below and above 1 SD, respectively. Fisher’s exact test was used to compare proportions, and Pearson correlation test to correlate z scores. The University of Chicago Institutional Review Board granted exemption.
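The composite scoring and rater normalization described above can be sketched in a few lines. This is a minimal illustration using hypothetical mock data (the rater IDs, intern IDs, and score values are invented for demonstration and do not reflect study data); the study itself used Stata, not Python.

```python
import statistics

# Hypothetical records: (rater_id, intern_id, consult_score, handoff_score),
# where each checklist score is the fraction of items completed.
ratings = [
    ("TF1", "A", 0.92, 0.67), ("TF1", "B", 0.75, 0.83),
    ("TF1", "C", 0.58, 0.50), ("TF2", "D", 0.92, 1.00),
    ("TF2", "E", 0.67, 0.67), ("TF2", "F", 0.50, 0.33),
]

# Composite score: average of the two checklist scores for each intern.
composites = {intern: (c + h) / 2 for _, intern, c, h in ratings}

# Group interns by rater so normalization happens within each rater.
by_rater = {}
for rater, intern, c, h in ratings:
    by_rater.setdefault(rater, []).append(intern)

# Normalize composites to z scores within each rater (mean 0, SD 1)
# to account for systematic differences in rater stringency.
z_scores = {}
for rater, interns in by_rater.items():
    vals = [composites[i] for i in interns]
    mu, sd = statistics.mean(vals), statistics.stdev(vals)
    for i in interns:
        z_scores[i] = (composites[i] - mu) / sd

# "Outstanding" / "poor" performances: z score above / below 1 SD.
outstanding = [i for i, z in z_scores.items() if z > 1]
poor = [i for i, z in z_scores.items() if z < -1]
```

Because each rater's scores are standardized against that rater's own mean and spread, a stringent rater and a lenient rater contribute comparably to the pooled "outstanding" and "poor" counts.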

RESULTS

All 130 entering interns participated in TBC. Internal medicine (IM) was the largest specialty (n = 37), followed by pediatrics (n = 22), emergency medicine (EM) (n = 16), and anesthesiology (n = 12). The remaining 9 programs ranged from 2 to 10 interns per program. The 128 interns in ACSBC were similar, including 40 IM, 23 pediatrics, 14 EM, and 12 anesthesia interns, with 2 to 10 interns in remaining programs.

TBC skills performance evaluations were compared to ACSBC (Table 1). The TBC intern cohort’s consultation performance was the same as or better than the ACSBC cohort’s. For handoffs, TBC interns completed significantly fewer checklist items than ACSBC interns. Performance in the two exercises was moderately correlated (r = 0.39, P < .05). For z scores, 14 TBC interns (10.8%) had “outstanding” and 15 (11.6%) had “poor” performances, compared with 7 (5.5%) “outstanding” and 10 (7.8%) “poor” performances among ACSBC interns (P = .15).

All 130 interns (100%) completed the immediate post-TBC survey. Overall, TBC satisfaction was comparable to ACSBC, and satisfaction with performance was significantly higher (Table 2). Compared with ACSBC interns, TBC interns felt more prepared both for the simulation and for handoffs in clinical practice. Nearly all interns would recommend TBC (99% vs 96% of ACSBC interns, P = .28), and 99% felt the software used for the simulation ran smoothly.

The 8-week post-TBC survey had a response rate of 88% (115/130); 69% of interns reported conducting more effective handoffs due to TBC, and 79% felt confident in handoff skills. Similarly, 73% felt more effective at calling consultations, and 75% reported retained knowledge of consultation frameworks taught during TBC. Additionally, 71% of interns reported that TBC helped identify areas for self-directed improvement. There were no significant differences in 8-week postsurvey ratings between ACSBC and TBC.

DISCUSSION

In converting the advanced communication skills bootcamp from an in-person to a virtual format, telesimulation was well-received by interns and rated similarly to in-person bootcamp in most respects. Nearly all interns agreed the experience was realistic, provided useful feedback, and prepared them for clinical practice. Although we shifted to virtual out of necessity, our results demonstrate a high-quality, streamlined bootcamp experience that was less labor-intensive for interns, staff, and faculty. Telesimulation may represent an effective strategy beyond the COVID-19 pandemic to increase ease of administration and scale the use of bootcamps in supporting advanced clinical skill training for hospital-based practice.

TBC interns felt better prepared for simulation and more satisfied with their performance than ACSBC interns, potentially due to the revised format. The mock cases were adapted and consolidated for TBC, such that the handoff and consultation simulations shared a common case, whereas previously they were separate. Thus, intern preparation for TBC required familiarity with fewer overall cases. Ultimately, TBC maintained the quality of training but required review of less information.

In comparing performance, TBC interns were rated as well as or better than ACSBC interns during the consultation simulation, but handoffs were rated lower. This was likely due to the change in the handoff checklist from a dichotomous to a three-level rating scale. This change was made after ACSBC TFs reported that a more nuanced rating scale was needed to provide adequate feedback to interns. Although we defined handoff item completion for TBC interns as a rating of “outstanding,” if the top two ratings, “outstanding” and “satisfactory,” are dichotomized to reflect completion, TBC handoff performance is equivalent to or better than that of ACSBC. TF recruitment additionally differed between the TBC and ACSBC cohorts. In ACSBC, resident physicians served as handoff TFs, whereas only faculty were recruited for TBC. Faculty were primarily clinically active hospitalists, whose expertise in handoffs may have resulted in more stringent performance ratings, contributing to the differences seen.

Hospitalist groups require clinicians to be immediately proficient in essential communication skills like consultation and handoffs, potentially requiring just-in-time training and feedback for large cohorts.12 Bootcamps can meet this need but require participation and time investment by many faculty members, staff, and administrators.5,8 Combining TBC into one virtual handoff/consultation simulation required recruitment and training of 50% fewer TFs and reduced administrative burden. ACSBC consultation simulations were high-fidelity but resource-heavy, requiring two-way telephones with reliable connections and separate spaces for simulation and feedback.5 Conversely, TBC required only that consultations be “called” via audio-only Zoom® discussion, after which both individuals turned on their cameras for feedback. Any slight decrease in perceived fidelity was outweighed by the ease of administration. TBC’s more efficient and less labor-intensive format is an appealing strategy for hospitalist groups looking to train clinicians, including those operating across multiple or geographically distant sites.

Our study has limitations. It occurred with one group of learners at a single site with consistent consultation and handoff communication practices, which may not be the case elsewhere. Our comparison group was a separate cohort, and groups were not randomized; thus, differences seen may reflect inherent dissimilarities in these groups. Changes to the handoff checklist rating scale between 2019 and 2020 additionally may limit the direct comparison of handoff performance between cohorts. While overall fewer resources were required, TBC implementation did require time and institutional support, along with full virtual platform capability without user or time limitations. Our preparedness outcomes were self-reported without direct measurement of clinical performance, which is an area for future work.

We describe a feasible implementation of an adapted telesimulation communication bootcamp, with comparison to a previous in-person cohort’s skills performance and satisfaction. While COVID-19 has made the future of in-person training activities uncertain, it also served as a catalyst for educational innovation that may be sustained beyond the pandemic. Although developed out of necessity, the telesimulation communication bootcamp was effective and well-received. Telesimulation represents an opportunity for hospital medicine groups to implement advanced communication skills training and assessment in a more efficient, flexible, and potentially preferable way, even after the pandemic ends.

Acknowledgments

The authors thank the staff at the University of Chicago Office of Graduate Medical Education and the UChicago Medicine Simulation Center.

References

1. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186-194. https://doi.org/10.1097/00001888-200402000-00019
2. Inadequate hand-off communication. Sentinel Event Alert. 2017;(58):1-6.
3. Horwitz LI, Meredith T, Schuur JD, Shah NR, Kulkarni RG, Jenq JY. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701-710. https://doi.org/10.1016/j.annemergmed.2008.05.007
4. Jagsi R, Kitch BT, Weinstein DF, Campbell EG, Hutter M, Weissman JS. Residents report on adverse events and their causes. Arch Intern Med. 2005;165(22):2607-2613. https://doi.org/10.1001/archinte.165.22.2607
5. Martin SK, Carter K, Hellerman N, et al. The consultation observed simulated clinical experience: training, assessment, and feedback for incoming interns on requesting consultations. Acad Med. 2018;93(12):1814-1820. https://doi.org/10.1097/ACM.0000000000002337
6. Lopez MA, Campbell J. Developing a communication curriculum for primary and consulting services. Med Educ Online. 2020;25(1):1794341. https://doi.org/10.1080/10872981.2020.1794341
7. Cohen ER, Barsuk JH, Moazed F, et al. Making July safer: simulation-based mastery learning during intern bootcamp. Acad Med. 2013;88(2):233-239. https://doi.org/10.1097/ACM.0b013e31827bfc0a
8. Gaffney S, Farnan JM, Hirsch K, McGinty M, Arora VM. The Modified, Multi-patient Observed Simulated Handoff Experience (M-OSHE): assessment and feedback for entering residents on handoff performance. J Gen Intern Med. 2016;31(4):438-441. https://doi.org/10.1007/s11606-016-3591-8
9. Woolliscroft J. Innovation in response to the COVID-19 pandemic crisis. Acad Med. 2020;95(8):1140-1142. https://doi.org/10.1097/ACM.0000000000003402
10. Anderson ML, Turbow S, Willgerodt MA, Ruhnke G. Education in a crisis: the opportunity of our lives. J Hosp Med. 2020;15(5):287-291. https://doi.org/10.12788/jhm.3431
11. Farr DE, Zeh HJ, Abdelfattah KR. Virtual bootcamps—an emerging solution to the undergraduate medical education-graduate medical education transition. JAMA Surg. 2021;156(3):282-283. https://doi.org/10.1001/jamasurg.2020.6162
12. Hepps JH, Yu CE, Calaman S. Simulation in medical education for the hospitalist: moving beyond the mock code. Pediatr Clin North Am. 2019;66(4):855-866. https://doi.org/10.1016/j.pcl.2019.03.014

Author and Disclosure Information

1University of Chicago Pritzker School of Medicine, Department of Medicine, Chicago, Illinois; 2University of Chicago Medicine, Office of Graduate Medical Education, Chicago, Illinois; 3University of Chicago Pritzker School of Medicine, Department of Obstetrics and Gynecology, Chicago, Illinois.

Disclosures
The authors reported no conflicts of interest.

Issue
Journal of Hospital Medicine 16(12):730-734. Published Online First November 17, 2021
Author and Disclosure Information

1University of Chicago Pritzker School of Medicine, Department of Medicine, Chicago, Illinois; 2University of Chicago Medicine, Office of Graduate Medical Education, Chicago, Illinois; 3University of Chicago Pritzker School of Medicine, Department of Obstetrics and Gynecology, Chicago, Illinois.

Disclosures
The authors reported no conflicts of interest.

Author and Disclosure Information

1University of Chicago Pritzker School of Medicine, Department of Medicine, Chicago, Illinois; 2University of Chicago Medicine, Office of Graduate Medical Education, Chicago, Illinois; 3University of Chicago Pritzker School of Medicine, Department of Obstetrics and Gynecology, Chicago, Illinois.

Disclosures
The authors reported no conflicts of interest.

Article PDF
Article PDF
Related Articles

Events requiring communication among and within teams are vulnerable points in patient care in hospital medicine, with communication failures representing important contributors to adverse events.1-4 Consultations and handoffs are exceptionally common inpatient practices, yet training in these practices is variable across educational and practice domains.5,6 Advanced inpatient communication-skills training requires an effective, feasible, and scalable format. Simulation-based bootcamps can effectively support clinical skills training, often in procedural domains, and have been increasingly utilized for communication skills.7,8 We previously described the development and implementation of an in-person bootcamp for training and feedback in consultation and handoff communication.5,8

As hospitalist leaders grapple with how to systematically support and assess essential clinical skills, the COVID-19 pandemic has presented another impetus to rethink current processes. The rapid shift to virtual activities met immediate needs of the pandemic, but also inspired creativity in applying new methodologies to improve teaching strategies and implementation long-term.9,10 One such strategy, telesimulation, offers a way to continue simulation-based training limited by the need for physical distancing.10 Furthermore, recent calls to study the efficacy of virtual bootcamp structures have acknowledged potential benefits, even outside of the pandemic.11

The primary objective of this feasibility study was to convert our previously described consultation and handoff bootcamp to a telesimulation bootcamp (TBC), preserving rigorous performance evaluation and opportunities for skills-based feedback. We additionally compared evaluation between virtual and in-person formats to understand the utility of telesimulation for bootcamp-based clinical education moving forward.

METHODS

Setting and Participants

The TBC occurred in June 2020 during the University of Chicago institution-wide graduate medical education (GME) orientation; 130 interns entering 13 residency programs participated. The comparison group was 128 interns who underwent the traditional University of Chicago GME orientation “Advanced Communication Skills Bootcamp” (ACSBC) in 2019.5,8

Program Description

To develop TBC, we adapted observed structured clinical experiences (OSCEs) created for ACSBC. Until 2020, ACSBC included three in-person OSCEs: (1) requesting a consultation; (2) conducting handoffs; and (3) acquiring informed consent. COVID-19 necessitated conversion of ACSBC to virtual in June 2020. For this, we selected the consultation and handoff OSCEs, as these skills require near-universal and immediate application in clinical practice. Additionally, they required only trained facilitators (TFs), whereas informed consent required standardized patients. Hospitalist and emergency medicine faculty were recruited as TFs; 7 of 12 TFs were hospitalists. Each OSCE had two parts: an asynchronous, mandatory training module and a clinical simulation. For TBC, we adapted the simulations, previously separate experiences, into a 20-minute combined handoff/consultation telesimulation using the Zoom® video platform. Interns were paired with one TF who served as both standardized consultant (for one mock case) and handoff receiver (for three mock cases, including the consultation case). TFs rated intern performance and provided feedback.

TBC occurred on June 17 and 18, 2020. Interns were emailed asynchronous modules on June 1, and mock cases and instructions on June 12. When TBC began, GME staff proctors oriented interns in the Zoom® platform. Proctors placed TFs into private breakout rooms into which interns rotated through 20-minute timeslots. Faculty received copies of all TBC materials for review (Appendix 1) and underwent Zoom®-based training 1 to 2 weeks prior.

We evaluated TBC using several methods: (1) consultation and handoff skills performance measured by two validated checklists5,8; (2) survey of intern self-reported preparedness to practice consultations and handoffs; and (3) survey of intern satisfaction. Surveys were administered both immediately post bootcamp (Appendix 2) and 8 weeks into internship (Appendix 3). Skills performance checklists were a 12-item consultation checklist5 and 6-item handoff checklist.8 The handoff checklist was modified to remove activities impossible to assess virtually (ie, orienting sign-outs in a shared space) and to add a three-level rating scale of “outstanding,” “satisfactory,” and “needs improvement.” This was done based on feedback from ACSBC to allow more nuanced feedback for interns. A rating of “outstanding” was used to define successful completion of the item (Appendix 1). Interns rated preparedness and satisfaction on 5-point Likert-type items. All measures were compared to the 2019 in-person ACSBC cohort.

Data Analysis

Stata 16.1 (StataCorp LP) was used for analysis. We dichotomized preparedness and satisfaction scores, defining ratings of “4” or “5” as “prepared” or “satisfied.” As previously described,5 we created a composite score averaging both checklist scores for each intern. We normalized this score by rater to a z score (mean, 0; SD, 1) to account for rater differences. “Poor” and “outstanding” performances were defined as z scores below and above 1 SD, respectively. Fisher’s exact test was used to compare proportions, and Pearson correlation test to correlate z scores. The University of Chicago Institutional Review Board granted exemption.

RESULTS

All 130 entering interns participated in TBC. Internal medicine (IM) was the largest specialty (n = 37), followed by pediatrics (n = 22), emergency medicine (EM) (n = 16), and anesthesiology (n = 12). The remaining 9 programs ranged from 2 to 10 interns per program. The 128 interns in ACSBC were similar, including 40 IM, 23 pediatrics, 14 EM, and 12 anesthesia interns, with 2 to 10 interns in remaining programs.

TBC skills performance evaluations were compared to ACSBC (Table 1). The TBC intern cohort’s consultation performance was the same or better than the ACSBC intern cohort’s. For handoffs, TBC interns completed significantly fewer checklist items compared to ACSBC. Performance in each exercise was moderately correlated (r = 0.39, P < .05). For z scores, 14 TBC interns (10.8%) had “outstanding” and 15 (11.6%) had “poor” performances, compared to ACSBC interns with 7 (5.5%) “outstanding” and 10 (7.81%) “poor” performances (P = .15).

All 130 interns (100%) completed the immediate post-TBC survey. Overall, TBC satisfaction was comparable to ACSBC, and significantly improved for satisfaction with performance (Table 2). Compared to ACSBC, TBC interns felt more prepared for simulation and handoff clinical practice. Nearly all interns would recommend TBC (99% vs 96% of ACSBC interns, P = 0.28), and 99% felt the software used for the simulation ran smoothly.

The 8-week post-TBC survey had a response rate of 88% (115/130); 69% of interns reported conducting more effective handoffs due to TBC, and 79% felt confident in handoff skills. Similarly, 73% felt more effective at calling consultations, and 75% reported retained knowledge of consultation frameworks taught during TBC. Additionally, 71% of interns reported that TBC helped identify areas for self-directed improvement. There were no significant differences in 8-week postsurvey ratings between ACSBC and TBC.

DISCUSSION

In converting the advanced communication skills bootcamp from an in-person to a virtual format, telesimulation was well-received by interns and rated similarly to in-person bootcamp in most respects. Nearly all interns agreed the experience was realistic, provided useful feedback, and prepared them for clinical practice. Although we shifted to virtual out of necessity, our results demonstrate a high-quality, streamlined bootcamp experience that was less labor-intensive for interns, staff, and faculty. Telesimulation may represent an effective strategy beyond the COVID-19 pandemic to increase ease of administration and scale the use of bootcamps in supporting advanced clinical skill training for hospital-based practice.

TBC interns felt better prepared for simulation and more satisfied with their performance than ACSBC interns, potentially due to the revised format. The mock cases were adapted and consolidated for TBC, such that the handoff and consultation simulations shared a common case, whereas previously they were separate. Thus, intern preparation for TBC required familiarity with fewer overall cases. Ultimately, TBC maintained the quality of training but required review of less information.

In comparing performance, TBC interns were rated as well or better during consultation simulation compared to ASCBC, but handoffs were rated lower. This was likely due to the change in the handoff checklist from a dichotomous to a three-level rating scale. This change was made after receiving feedback from ACSBC TFs that a rating scale allowing for more nuance was needed to provide adequate feedback to interns. Although we defined handoff item completion for TBC interns as being rated “outstanding,” if the top two rankings, “outstanding” and “satisfactory,” are dichotomized to reflect completion, TBC handoff performance is equivalent or better than ACSBC. TF recruitment additionally differed between TBC and ACSBC cohorts. In ACSBC, resident physicians served as handoff TFs, whereas only faculty were recruited for TBC. Faculty were primarily clinically active hospitalists, whose expertise in handoffs may resulted in more stringent performance ratings, contributing to differences seen.

Hospitalist groups require clinicians to be immediately proficient in essential communication skills like consultation and handoffs, potentially requiring just-in-time training and feedback for large cohorts.12 Bootcamps can meet this need but require participation and time investment by many faculty members, staff, and administrators.5,8 Combining TBC into one virtual handoff/consultation simulation required recruitment and training of 50% fewer TFs and reduced administrative burden. ACSBC consultation simulations were high-fidelity but resource-heavy, requiring reliable two-way telephones with reliable connections and separate spaces for simulation and feedback.5 Conversely, TBC only required consultations to be “called” via audio-only Zoom® discussion, then both individuals turned on cameras for feedback. The slight decrease in perceived fidelity was certainly outweighed by ease of administration. TBC’s more efficient and less labor-intensive format is an appealing strategy for hospitalist groups looking to train up clinicians, including those operating across multiple or geographically distant sites.

Our study has limitations. It occurred with one group of learners at a single site with consistent consultation and handoff communication practices, which may not be the case elsewhere. Our comparison group was a separate cohort, and groups were not randomized; thus, differences seen may reflect inherent dissimilarities in these groups. Changes to the handoff checklist rating scale between 2019 and 2020 additionally may limit the direct comparison of handoff performance between cohorts. While overall fewer resources were required, TBC implementation did require time and institutional support, along with full virtual platform capability without user or time limitations. Our preparedness outcomes were self-reported without direct measurement of clinical performance, which is an area for future work.

We describe a feasible implementation of an adapted telesimulation communication bootcamp, with comparison to a previous in-person cohort’s skills performance and satisfaction. While COVID-19 has made the future of in-person training activities uncertain, it also served as a catalyst for educational innovation that may be sustained beyond the pandemic. Although developed out of necessity, the telesimulation communication bootcamp was effective and well-received. Telesimulation represents an opportunity for hospital medicine groups to implement advanced communication skills training and assessment in a more efficient, flexible, and potentially preferable way, even after the pandemic ends.

Acknowledgments

The authors thank the staff at the University of Chicago Office of Graduate Medical Education and the UChicago Medicine Simulation Center.

Events requiring communication among and within teams are vulnerable points in patient care in hospital medicine, with communication failures representing important contributors to adverse events.1-4 Consultations and handoffs are exceptionally common inpatient practices, yet training in these practices is variable across educational and practice domains.5,6 Advanced inpatient communication-skills training requires an effective, feasible, and scalable format. Simulation-based bootcamps can effectively support clinical skills training, often in procedural domains, and have been increasingly utilized for communication skills.7,8 We previously described the development and implementation of an in-person bootcamp for training and feedback in consultation and handoff communication.5,8

As hospitalist leaders grapple with how to systematically support and assess essential clinical skills, the COVID-19 pandemic has presented another impetus to rethink current processes. The rapid shift to virtual activities met immediate needs of the pandemic, but also inspired creativity in applying new methodologies to improve teaching strategies and implementation long-term.9,10 One such strategy, telesimulation, offers a way to continue simulation-based training limited by the need for physical distancing.10 Furthermore, recent calls to study the efficacy of virtual bootcamp structures have acknowledged potential benefits, even outside of the pandemic.11

The primary objective of this feasibility study was to convert our previously described consultation and handoff bootcamp to a telesimulation bootcamp (TBC), preserving rigorous performance evaluation and opportunities for skills-based feedback. We additionally compared evaluation between virtual and in-person formats to understand the utility of telesimulation for bootcamp-based clinical education moving forward.

METHODS

Setting and Participants

The TBC occurred in June 2020 during the University of Chicago institution-wide graduate medical education (GME) orientation; 130 interns entering 13 residency programs participated. The comparison group was 128 interns who underwent the traditional University of Chicago GME orientation “Advanced Communication Skills Bootcamp” (ACSBC) in 2019.5,8

Program Description

To develop TBC, we adapted observed structured clinical experiences (OSCEs) created for ACSBC. Until 2020, ACSBC included three in-person OSCEs: (1) requesting a consultation; (2) conducting handoffs; and (3) acquiring informed consent. COVID-19 necessitated conversion of ACSBC to virtual in June 2020. For this, we selected the consultation and handoff OSCEs, as these skills require near-universal and immediate application in clinical practice. Additionally, they required only trained facilitators (TFs), whereas informed consent required standardized patients. Hospitalist and emergency medicine faculty were recruited as TFs; 7 of 12 TFs were hospitalists. Each OSCE had two parts: an asynchronous, mandatory training module and a clinical simulation. For TBC, we adapted the simulations, previously separate experiences, into a 20-minute combined handoff/consultation telesimulation using the Zoom® video platform. Interns were paired with one TF who served as both standardized consultant (for one mock case) and handoff receiver (for three mock cases, including the consultation case). TFs rated intern performance and provided feedback.

TBC occurred on June 17 and 18, 2020. Interns were emailed asynchronous modules on June 1, and mock cases and instructions on June 12. When TBC began, GME staff proctors oriented interns in the Zoom® platform. Proctors placed TFs into private breakout rooms into which interns rotated through 20-minute timeslots. Faculty received copies of all TBC materials for review (Appendix 1) and underwent Zoom®-based training 1 to 2 weeks prior.

We evaluated TBC using several methods: (1) consultation and handoff skills performance measured by two validated checklists5,8; (2) survey of intern self-reported preparedness to practice consultations and handoffs; and (3) survey of intern satisfaction. Surveys were administered both immediately post bootcamp (Appendix 2) and 8 weeks into internship (Appendix 3). Skills performance checklists were a 12-item consultation checklist5 and 6-item handoff checklist.8 The handoff checklist was modified to remove activities impossible to assess virtually (ie, orienting sign-outs in a shared space) and to add a three-level rating scale of “outstanding,” “satisfactory,” and “needs improvement.” This was done based on feedback from ACSBC to allow more nuanced feedback for interns. A rating of “outstanding” was used to define successful completion of the item (Appendix 1). Interns rated preparedness and satisfaction on 5-point Likert-type items. All measures were compared to the 2019 in-person ACSBC cohort.

Data Analysis

Stata 16.1 (StataCorp LP) was used for analysis. We dichotomized preparedness and satisfaction scores, defining ratings of “4” or “5” as “prepared” or “satisfied.” As previously described,5 we created a composite score averaging both checklist scores for each intern. We normalized this score by rater to a z score (mean, 0; SD, 1) to account for rater differences. “Poor” and “outstanding” performances were defined as z scores below and above 1 SD, respectively. Fisher’s exact test was used to compare proportions, and Pearson correlation test to correlate z scores. The University of Chicago Institutional Review Board granted exemption.
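The composite scoring and per-rater normalization described above can be sketched in a few lines. This is an illustrative reconstruction in Python, not the authors' Stata code, and all input values are hypothetical.

```python
# Illustrative sketch of composite scoring and per-rater z-score
# normalization (not the authors' Stata code); inputs are hypothetical.
from statistics import mean, stdev

def composite_scores(consult, handoff):
    """Average each intern's two checklist scores into one composite."""
    return [(c + h) / 2 for c, h in zip(consult, handoff)]

def z_normalize_by_rater(scores, raters):
    """Normalize composites to z scores within each rater (mean 0, SD 1)."""
    by_rater = {}
    for s, r in zip(scores, raters):
        by_rater.setdefault(r, []).append(s)
    stats = {r: (mean(v), stdev(v)) for r, v in by_rater.items()}
    return [(s - stats[r][0]) / stats[r][1] for s, r in zip(scores, raters)]

def classify(z):
    """'Outstanding'/'poor' = more than 1 SD above/below the rater mean."""
    return "outstanding" if z > 1 else "poor" if z < -1 else "typical"
```

Normalizing within rater rather than across the whole cohort keeps a single lenient or stringent TF from dominating the tails of the performance distribution.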

RESULTS

All 130 entering interns participated in TBC. Internal medicine (IM) was the largest specialty (n = 37), followed by pediatrics (n = 22), emergency medicine (EM) (n = 16), and anesthesiology (n = 12). The remaining 9 programs ranged from 2 to 10 interns per program. The 128 interns in ACSBC were similar, including 40 IM, 23 pediatrics, 14 EM, and 12 anesthesia interns, with 2 to 10 interns in remaining programs.

TBC skills performance evaluations were compared to ACSBC (Table 1). The TBC intern cohort’s consultation performance was the same as or better than the ACSBC intern cohort’s. For handoffs, TBC interns completed significantly fewer checklist items compared to ACSBC. Performance in each exercise was moderately correlated (r = 0.39, P < .05). For z scores, 14 TBC interns (10.8%) had “outstanding” and 15 (11.6%) had “poor” performances, compared to 7 (5.5%) “outstanding” and 10 (7.8%) “poor” performances among ACSBC interns (P = .15).

All 130 interns (100%) completed the immediate post-TBC survey. Overall, TBC satisfaction was comparable to ACSBC, with significantly higher satisfaction with performance (Table 2). Compared to ACSBC, TBC interns felt more prepared for the simulation and for handoffs in clinical practice. Nearly all interns would recommend TBC (99% vs 96% of ACSBC interns, P = .28), and 99% felt the software used for the simulation ran smoothly.

The 8-week post-TBC survey had a response rate of 88% (115/130); 69% of interns reported conducting more effective handoffs due to TBC, and 79% felt confident in handoff skills. Similarly, 73% felt more effective at calling consultations, and 75% reported retained knowledge of consultation frameworks taught during TBC. Additionally, 71% of interns reported that TBC helped identify areas for self-directed improvement. There were no significant differences in 8-week postsurvey ratings between ACSBC and TBC.

DISCUSSION

In converting the advanced communication skills bootcamp from an in-person to a virtual format, telesimulation was well-received by interns and rated similarly to in-person bootcamp in most respects. Nearly all interns agreed the experience was realistic, provided useful feedback, and prepared them for clinical practice. Although we shifted to virtual out of necessity, our results demonstrate a high-quality, streamlined bootcamp experience that was less labor-intensive for interns, staff, and faculty. Telesimulation may represent an effective strategy beyond the COVID-19 pandemic to increase ease of administration and scale the use of bootcamps in supporting advanced clinical skill training for hospital-based practice.

TBC interns felt better prepared for simulation and more satisfied with their performance than ACSBC interns, potentially due to the revised format. The mock cases were adapted and consolidated for TBC, such that the handoff and consultation simulations shared a common case, whereas previously they were separate. Thus, intern preparation for TBC required familiarity with fewer overall cases. Ultimately, TBC maintained the quality of training but required review of less information.

In comparing performance, TBC interns were rated as well as or better than ACSBC interns during the consultation simulation, but handoffs were rated lower. This was likely due to the change in the handoff checklist from a dichotomous to a three-level rating scale, made after ACSBC TFs noted that a more nuanced scale was needed to provide adequate feedback to interns. Although we defined handoff item completion for TBC interns as being rated “outstanding,” if the top two ratings, “outstanding” and “satisfactory,” are dichotomized to reflect completion, TBC handoff performance is equivalent to or better than ACSBC. TF recruitment additionally differed between TBC and ACSBC cohorts. In ACSBC, resident physicians served as handoff TFs, whereas only faculty were recruited for TBC. Faculty were primarily clinically active hospitalists, whose expertise in handoffs may have resulted in more stringent performance ratings, contributing to the differences seen.

Hospitalist groups require clinicians to be immediately proficient in essential communication skills like consultation and handoffs, potentially requiring just-in-time training and feedback for large cohorts.12 Bootcamps can meet this need but require participation and time investment by many faculty members, staff, and administrators.5,8 Combining TBC into one virtual handoff/consultation simulation required recruitment and training of 50% fewer TFs and reduced administrative burden. ACSBC consultation simulations were high-fidelity but resource-heavy, requiring two-way telephones with reliable connections and separate spaces for simulation and feedback.5 Conversely, TBC required only that consultations be “called” via audio-only Zoom® discussion, after which both individuals turned on cameras for feedback. Any slight decrease in perceived fidelity was outweighed by ease of administration. TBC’s more efficient and less labor-intensive format is an appealing strategy for hospitalist groups looking to train clinicians quickly, including those operating across multiple or geographically distant sites.

Our study has limitations. It occurred with one group of learners at a single site with consistent consultation and handoff communication practices, which may not be the case elsewhere. Our comparison group was a separate cohort, and groups were not randomized; thus, differences seen may reflect inherent dissimilarities in these groups. Changes to the handoff checklist rating scale between 2019 and 2020 additionally may limit the direct comparison of handoff performance between cohorts. While overall fewer resources were required, TBC implementation did require time and institutional support, along with full virtual platform capability without user or time limitations. Our preparedness outcomes were self-reported without direct measurement of clinical performance, which is an area for future work.

We describe a feasible implementation of an adapted telesimulation communication bootcamp, with comparison to a previous in-person cohort’s skills performance and satisfaction. While COVID-19 has made the future of in-person training activities uncertain, it also served as a catalyst for educational innovation that may be sustained beyond the pandemic. Although developed out of necessity, the telesimulation communication bootcamp was effective and well-received. Telesimulation represents an opportunity for hospital medicine groups to implement advanced communication skills training and assessment in a more efficient, flexible, and potentially preferable way, even after the pandemic ends.

Acknowledgments

The authors thank the staff at the University of Chicago Office of Graduate Medical Education and the UChicago Medicine Simulation Center.

References

1. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186-194. https://doi.org/10.1097/00001888-200402000-00019
2. Inadequate hand-off communication. Sentinel Event Alert. 2017;(58):1-6.
3. Horwitz LI, Meredith T, Schuur JD, Shah NR, Kulkarni RG, Jenq JY. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701-710. https://doi.org/10.1016/j.annemergmed.2008.05.007
4. Jagsi R, Kitch BT, Weinstein DF, Campbell EG, Hutter M, Weissman JS. Residents report on adverse events and their causes. Arch Intern Med. 2005;165(22):2607-2613. https://doi.org/10.1001/archinte.165.22.2607
5. Martin SK, Carter K, Hellerman N, et al. The consultation observed simulated clinical experience: training, assessment, and feedback for incoming interns on requesting consultations. Acad Med. 2018;93(12):1814-1820. https://doi.org/10.1097/ACM.0000000000002337
6. Lopez MA, Campbell J. Developing a communication curriculum for primary and consulting services. Med Educ Online. 2020;25(1):1794341. https://doi.org/10.1080/10872981.2020
7. Cohen ER, Barsuk JH, Moazed F, et al. Making July safer: simulation-based mastery learning during intern bootcamp. Acad Med. 2013;88(2):233-239. https://doi.org/10.1097/ACM.0b013e31827bfc0a
8. Gaffney S, Farnan JM, Hirsch K, McGinty M, Arora VM. The Modified, Multi-patient Observed Simulated Handoff Experience (M-OSHE): assessment and feedback for entering residents on handoff performance. J Gen Intern Med. 2016;31(4):438-441. https://doi.org/10.1007/s11606-016-3591-8
9. Woolliscroft J. Innovation in response to the COVID-19 pandemic crisis. Acad Med. 2020;95(8):1140-1142. https://doi.org/10.1097/ACM.0000000000003402
10. Anderson ML, Turbow S, Willgerodt MA, Ruhnke G. Education in a crisis: the opportunity of our lives. J Hosp Med. 2020;15(5):287-291. https://doi.org/10.12788/jhm.3431
11. Farr DE, Zeh HJ, Abdelfattah KR. Virtual bootcamps—an emerging solution to the undergraduate medical education-graduate medical education transition. JAMA Surg. 2021;156(3):282-283. https://doi.org/10.1001/jamasurg.2020.6162
12. Hepps JH, Yu CE, Calaman S. Simulation in medical education for the hospitalist: moving beyond the mock code. Pediatr Clin North Am. 2019;66(4):855-866. https://doi.org/10.1016/j.pcl.2019.03.014


Issue
Journal of Hospital Medicine 16(12)
Page Number
730-734. Published Online First November 17, 2021
Article Source
© 2021 Society of Hospital Medicine
Correspondence Location
Shannon K Martin, MD, MS; Email: [email protected]; Telephone: 773-702-2604; Twitter: @ShannonMartinMD.

Evaluation of the Order SMARTT: An Initiative to Reduce Phlebotomy and Improve Sleep-Friendly Labs on General Medicine Services


Frequent daily laboratory testing for inpatients contributes to excessive costs,1 anemia,2 and unnecessary testing.3 The ABIM Foundation’s Choosing Wisely® campaign recommends avoiding routine labs, like complete blood counts (CBCs) and basic metabolic panels (BMPs), in the face of clinical and laboratory stability.4,5 Prior interventions have reduced unnecessary labs without adverse outcomes.6-8

In addition to lab frequency, hospitalized patients face suboptimal lab timing. Labs are often ordered as early as 4 am at many institutions.9,10 This practice disrupts sleep, undermining patient health.11-13 While prior interventions have reduced daily phlebotomy, few have optimized lab timing for patient sleep.10 No study has harnessed the electronic health record (EHR) to optimize frequency and timing of labs simultaneously.14 We aimed to determine the effectiveness of a multicomponent intervention, called Order SMARTT (Sleep: Making Appropriate Reductions in Testing and Timing), to reduce frequency and optimize timing of daily routine labs for medical inpatients.

METHODS

Setting

This study was conducted on the University of Chicago Medicine (UCM) general medicine services, which consisted of a resident-covered service supervised by general medicine, subspecialist, or hospitalist attendings and a hospitalist service staffed by hospitalists and advanced practice providers.

Development of Order SMARTT

To inform intervention development, we surveyed providers about lab-ordering preferences, using questions from a prior survey to provide a benchmark (Appendix Table 2).15 While reducing lab frequency was supported, the modal response for how frequently a stable patient should receive routine labs was every 48 hours (Appendix Table 2). We therefore hypothesized that a 48-hour lab option would be popular. Because labs drawn every 48 hours do not require an urgent 4 am draw, we created a 48-hour 6 am phlebotomy option to “step down” from daily labs. To promote these options, we created two EHR tools. First, an “Order Sleep” shortcut, launched in March 2018, allowed physicians to type “sleep” within routine lab orders to surface three sleep-friendly options (a 48-hour 6 am draw, a daily 6 am draw, or a daily 10 pm draw). Second, a “4 am Labs” column and icon, launched in May 2018, was added to the electronic patient list to signal which patients had 4 am labs ordered (Appendix Table 1).
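The timing arithmetic implied by the three sleep-friendly options can be sketched as follows. This is a hypothetical illustration of the scheduling logic, not Epic's actual order behavior; the option names are invented for the sketch.

```python
# Hypothetical sketch of the draw-time arithmetic behind the three
# sleep-friendly options (48-hour 6 AM, daily 6 AM, daily 10 PM).
# Not Epic's order logic; option keys are invented labels.
from datetime import datetime, time, timedelta

OPTIONS = {
    "q48h_6am": (time(6, 0), 48),    # every 48 hours at 6 AM
    "daily_6am": (time(6, 0), 24),   # daily at 6 AM
    "daily_10pm": (time(22, 0), 24), # daily at 10 PM
}

def next_draw(option, ordered_at):
    """First draw strictly after the order time, at the option's hour."""
    draw_time, interval_h = OPTIONS[option]
    candidate = datetime.combine(ordered_at.date(), draw_time)
    while candidate <= ordered_at:
        candidate += timedelta(hours=interval_h)
    return candidate
```

The point of the 6 am and 10 pm anchors is visible here: no option ever schedules a 4 am draw, so choosing any of them moves phlebotomy out of the deepest sleep window.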

Physician Education

We created a 20-minute presentation on the harms of excessive labs and the benefits of sleep-friendly ordering. Instructional Order SMARTT posters were posted in clinician workrooms that emphasized forgoing labs on stable patients and using the “Order Sleep” shortcut when nonurgent labs were needed.

Labs Utilization Data

We used Epic Systems software (Verona, Wisconsin) and our institutional Tableau scorecard to obtain data on CBC and BMP ordering, patient census, and demographics for medical inpatients between July 1, 2017, and November 1, 2018.

Cost Analysis

Costs of lab tests (actual cost to our institution) were obtained from our institutional phlebotomy services’ estimates of direct variable labor and benefits costs and direct variable supplies cost.

Statistical Analysis

Data analysis was performed with SAS version 9.4 (SAS Institute, Cary, North Carolina) and R version 3.6.2 (R Foundation for Statistical Computing, Vienna, Austria). Descriptive statistics were used to summarize data. Surveys were analyzed using chi-square tests for categorical variables and two-sample t tests for continuous variables. For lab-ordering data, interrupted time series analysis (ITSA) was used to determine changes in ordering practices with the implementation of the two interventions, controlling for service line (resident vs hospitalist service). ITSA enables examination of changes in lab ordering while controlling for time. The AUTOREG procedure in SAS was used to build the model and estimate final parameters; it automatically tests for autocorrelation and heteroscedasticity and estimates any autoregressive parameters required in the model. Our main model tested the association between each intervention and ordering practices, controlling for service (hospitalist or resident).16
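The segmented-regression core of an ITSA can be sketched as follows. This is a simplified illustration in Python with a single hypothetical intervention week; unlike the authors' SAS AUTOREG model, it uses plain least squares and does not adjust for autocorrelation or service line.

```python
# Simplified interrupted-time-series design: level- and slope-change
# terms around one intervention week (hypothetical), fit by plain least
# squares; no autocorrelation adjustment as PROC AUTOREG would provide.
import numpy as np

def itsa_design(n_weeks, intervention_week):
    t = np.arange(n_weeks, dtype=float)
    post = (t >= intervention_week).astype(float)            # level change
    since = np.where(post == 1, t - intervention_week, 0.0)  # slope change
    return np.column_stack([np.ones(n_weeks), t, post, since])

def fit_itsa(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # beta = [intercept, pre-intervention trend, level change, slope change]
    return beta
```

In this framing, a reported "immediate increase" (e.g., an intercept term of 0.49) corresponds to the level-change coefficient, while a "decrease over time" (e.g., –0.1 per week) corresponds to the slope-change coefficient.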

RESULTS

Of 125 residents, 82 (65.6%) attended the session and completed the survey. Attendance and response rate for hospitalists was 80% (16 of 20). Similar to a prior study, many residents (73.1%) reported they would be comfortable if patients received less daily laboratory testing (Appendix Table 2).

We reviewed data from 7,045 total patients over 50,951 total patient days between July 1, 2017, and November 1, 2018 (Appendix Table 3).

Total Lab Draws

After accounting for total patient days, we observed a 26.3% average reduction in total lab draws per patient-day per week postintervention (4.68 before vs 3.45 after; difference, 1.23; 95% CI, 0.82-1.63; P < .05; Appendix Table 3). When stratified by service, total lab draws per patient-day per week fell by an average of 28% on resident services (4.67 before vs 3.36 after; difference, 1.31; 95% CI, 0.88-1.74; P < .05) and 23.9% on the hospitalist service (4.73 before vs 3.60 after; difference, 1.13; 95% CI, 0.61-1.64; P < .05; Appendix Table 3).

Sleep-Friendly Labs by Intervention

For patients with routine labs, the proportion of sleep-friendly labs drawn per patient-day increased from 6% preintervention to 21% postintervention (P < .001). ITSA demonstrated both interventions were associated with improving lab timing. There was a statistically significant increase in sleep-friendly labs ordered per patient encounter per week immediately after the launch of “Order Sleep” (intercept, 0.49; standard error (SE), 0.14; P = .001) and the “4 am Labs” column (intercept, 0.32; SE, 0.13; P = .02; Table, Figure A).

Table. Summary of Sleep-Friendly Lab Orders

Sleep-Friendly Lab Orders by Service

Over the study period, there was no significant difference in total sleep-friendly labs ordered per month between resident and hospitalist services (84.88 vs 86.19; P = .95).

In ITSA, “Order Sleep” was associated with a statistically significant immediate increase in sleep-friendly lab orders per patient encounter per week on resident services (intercept, 1.03; SE, 0.29; P < .001). However, this initial increase was followed by a decrease over time in sleep-friendly lab orders per week (slope change, –0.1; SE, 0.04; P = .02; Table, Figure B). There was no statistically significant change observed on the hospitalist service with “Order Sleep.”

Figure. Run chart of sleep-friendly lab orders per unique patient encounter per week

In contrast, the “4 am Labs” column was associated with a statistically significant immediate increase in sleep-friendly lab orders per patient encounter per week on hospitalist service (intercept, 1.17; SE, 0.50; P = .02; Table, Figure B). While there was no immediate change on resident service, we observed a significant increase over time in sleep-friendly orders per encounter per week on resident services with the introduction of the “4 am Labs” column (slope change, 0.11; SE, 0.04; P = .01; Table, Figure B).

Cost Savings

Using an estimated cost of $7.70 for CBCs and $8.01 for BMPs from our laboratory, our intervention saved an estimated $60,278 in lab costs alone over the 16-month study period (Appendix Table 4).
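The savings figure is simple unit-cost accounting, sketched below. Only the per-test costs come from the article; the avoided-draw counts passed in are hypothetical placeholders, since the article reports only the aggregate total.

```python
# Unit costs from the article; the counts passed in are hypothetical.
CBC_COST = 7.70   # cost per complete blood count, in dollars
BMP_COST = 8.01   # cost per basic metabolic panel, in dollars

def estimated_savings(cbcs_avoided, bmps_avoided):
    """Dollars saved from labs not drawn, at institutional unit costs."""
    return cbcs_avoided * CBC_COST + bmps_avoided * BMP_COST
```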

DISCUSSION

To our knowledge, this is the first study showing a multicomponent intervention using EHR tools can both reduce frequency and optimize timing of routine lab ordering. Our project had two interventions implemented at two different times: First, an “Order Sleep” shortcut was introduced to select sleep-friendly lab timing, including a 6 am draw every 48 hours, and later, a “4 am Labs” column was added to electronic patient lists to passively nudge physicians to consider sleep-friendly labs. The “Order Sleep” tool was associated with a significant immediate increase in sleep-friendly lab ordering on resident services, while the “4 am Labs” column was associated with a significant immediate increase in sleep-friendly lab ordering on the hospitalist service. An overall reduction in total lab draws was seen on both services.

While the “Order Sleep” tool was initially associated with significant increases in sleep-friendly orders on resident services, this change was not sustained, which could reflect the short-lived effect of education more than durable adoption of the tool. In contrast, the “4 am Labs” column on the patient list resulted in a significant sustained increase in sleep-friendly labs on resident services. Thus, while residents responded to both tools, only the latter was associated with a lasting change in practice.

The “4 am Labs” column on patient lists was associated with increased adoption of sleep-friendly labs on the hospitalist service. Hospitalists care for a larger census with more frequent handoffs and rely more heavily on the patient list, making the patient list an important target for value-improvement tools.

While other institutions have attempted to shift lab timing by altering phlebotomy workflows10 or via conscious decision-making on rounds,9 our study differs in several ways. We avoided default options and allowed clinicians to select sleep-friendly labs to promote buy-in. It is sometimes necessary to order 4 am labs for sick patients who need urgent decision-making, which highlights the need to preserve this option for clinicians. Similarly, our intervention did not aim to eliminate lab draws entirely but to offer a more judicious frequency of every 48 hours, consistent with the survey preferences noted. This intervention encouraged reappraisal of patients’ overall needs for labs and created variability in ordering times to reduce the volume of labs ordered at 4 am.

Our study had several limitations. First, this was a single-center study on adult medicine services, which limits generalizability. Although we considered surgical services, their early rounds made deviations from 4 am undesirable. Given the observational study design, we cannot assume causal relationships or rule out secular trends. There were large swings in sleep-friendly lab ordering during our study that could be attributed to different physicians rotating on the services monthly. We did not obtain objective data on patient sleep or patient satisfaction because of the low response rate to the HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems) survey.

In conclusion, a multicomponent intervention using EHR tools can reduce inpatient daily lab frequency and optimize lab timing to help promote patient sleep.

Acknowledgments

The authors would like to thank The University of Chicago Center for Healthcare Delivery Science and Innovation for sponsoring their annual Choosing Wisely Challenge, which allowed for access to institutional support and resources for this study. We would also like to thank Mary Kate Springman, MHA, and John Fahrenbach, PhD, for their assistance with this project. Dr Tapaskar also received mentorship through the Future Leader Program for the High Value Practice Academic Alliance.

References

1. Eaton KP, Levy K, Soong C, et al. Evidence-based guidelines to eliminate repetitive laboratory testing. JAMA Intern Med. 2017;177(12):1833-1839. https://doi.org/10.1001/jamainternmed.2017.5152
2. Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? J Gen Intern Med. 2005;20(6):520-524. https://doi.org/10.1111/j.1525-1497.2005.0094.x
3. Korenstein D, Husain S, Gennarelli RL, White C, Masciale JN, Roman BR. Impact of clinical specialty on attitudes regarding overuse of inpatient laboratory testing. J Hosp Med. 2018;13(12):844-847. https://doi.org/10.12788/jhm.2978
4. Choosing Wisely. 2020. Accessed January 10, 2020. http://www.choosingwisely.org/getting-started/
5. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492. https://doi.org/10.1002/jhm.2063
6. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: reducing the cost of daily phlebotomy. Arch Surg. 2011;146(5):524-527. https://doi.org/10.1001/archsurg.2011.103
7. Attali M, Barel Y, Somin M, et al. A cost-effective method for reducing the volume of laboratory tests in a university-associated teaching hospital. Mt Sinai J Med. 2006;73(5):787-794.
8. Vidyarthi AR, Hamill T, Green AL, Rosenbluth G, Baron RB. Changing resident test ordering behavior: a multilevel intervention to decrease laboratory utilization at an academic medical center. Am J Med Qual. 2015;30(1):81-87. https://doi.org/10.1177/1062860613517502
9. Krafft CA, Biondi EA, Leonard MS, et al. Ending the 4 AM Blood Draw. Presented at: American Academy of Pediatrics Experience; October 25, 2015, Washington, DC. Accessed January 10, 2020. https://aap.confex.com/aap/2015/webprogrampress/Paper31640.html
10. Ramarajan V, Chima HS, Young L. Implementation of later morning specimen draws to improve patient health and satisfaction. Lab Med. 2016;47(1):e1-e4. https://doi.org/10.1093/labmed/lmv013
11. Delaney LJ, Van Haren F, Lopez V. Sleeping on a problem: the impact of sleep disturbance on intensive care patients - a clinical review. Ann Intensive Care. 2015;5:3. https://doi.org/10.1186/s13613-015-0043-2
12. Knutson KL, Spiegel K, Penev P, Van Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163-178. https://doi.org/10.1016/j.smrv.2007.01.002
13. Ho A, Raja B, Waldhorn R, Baez V, Mohammed I. New onset of insomnia in hospitalized patients in general medical wards: incidence, causes, and resolution rate. J Community Hosp Int. 2017;7(5):309-313. https://doi.org/10.1080/20009666.2017.1374108
14. Arora VM, Machado N, Anderson SL, et al. Effectiveness of SIESTA on objective and subjective metrics of nighttime hospital sleep disruptors. J Hosp Med. 2019;14(1):38-41. https://doi.org/10.12788/jhm.3091
15. Roman BR, Yang A, Masciale J, Korenstein D. Association of attitudes regarding overuse of inpatient laboratory testing with health care provider type. JAMA Intern Med. 2017;177(8):1205-1207. https://doi.org/10.1001/jamainternmed.2017.1634
16. Penfold RB, Zhang F. Use of interrupted time series analysis in evaluating health care quality improvements. Acad Pediatr. 2013;13(6 Suppl):S38-S44. https://doi.org/10.1016/j.acap.2013.08.002

Author and Disclosure Information

1Department of Medicine, University of Chicago, Chicago, Illinois; 2Center for Healthcare Delivery Science and Innovation, University of Chicago Medicine, Chicago, Illinois; 3Department of Pathology and Laboratory Medicine, Children’s Hospital of Los Angeles, Los Angeles, California; 4Booth School of Business, University of Chicago, Chicago, Illinois; 5Department of Surgery, University of Chicago, Chicago, Illinois.

Disclosures

The authors have no financial disclosures.

Funding

This research was supported by NHLBI K24 HL136859 and the Center for Healthcare Delivery Sciences and Innovation Choosing Wisely® Challenge at University of Chicago Medicine.

Issue
Journal of Hospital Medicine 15(8)
Page Number
479-482. Published Online First July 22, 2020

Frequent daily laboratory testing for inpatients contributes to excessive costs,1 anemia,2 and unnecessary testing.3 The ABIM Foundation’s Choosing Wisely® campaign recommends avoiding routine labs, like complete blood counts (CBCs) and basic metabolic panels (BMPs), in the face of clinical and laboratory stability.4,5 Prior interventions have reduced unnecessary labs without adverse outcomes.6-8

In addition to lab frequency, hospitalized patients face suboptimal lab timing. Labs are often ordered as early as 4 am at many institutions.9,10 This practice disrupts sleep, undermining patient health.11-13 While prior interventions have reduced daily phlebotomy, few have optimized lab timing for patient sleep.10 No study has harnessed the electronic health record (EHR) to optimize frequency and timing of labs simultaneously.14 We aimed to determine the effectiveness of a multicomponent intervention, called Order SMARTT (Sleep: Making Appropriate Reductions in Testing and Timing), to reduce frequency and optimize timing of daily routine labs for medical inpatients.

METHODS

Setting

This study was conducted on the University of Chicago Medicine (UCM) general medicine services, which consisted of a resident-covered service supervised by general medicine, subspecialist, or hospitalist attendings and a hospitalist service staffed by hospitalists and advanced practice providers.

Development of Order SMARTT

To inform intervention development, we surveyed providers about lab-ordering preferences, using questions from a prior survey as a benchmark (Appendix Table 2).15 While reducing lab frequency was supported, the modal response for how often a stable patient should receive routine labs was every 48 hours (Appendix Table 2). We therefore hypothesized that a 48-hour lab option would be popular. Because labs drawn every 48 hours do not require an urgent 4 am draw, we created a 48-hour 6 am phlebotomy option to “step down” from daily labs. To promote these options, we created two EHR tools. First, an “Order Sleep” shortcut, launched in March 2018, allowed physicians to type “sleep” in routine lab orders to surface three sleep-friendly options: a 48-hour 6 am draw, a daily 6 am draw, or a daily 10 pm draw. Second, a “4 am Labs” column and icon on the electronic patient list, launched in May 2018, signaled which patients had 4 am labs ordered (Appendix Table 1).
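The shortcut’s described behavior amounts to a keyword that surfaces three sleep-friendly timing options. A minimal schematic sketch follows; the option records are paraphrased from the text, not actual Epic build definitions:

```python
# Schematic of the "Order Sleep" shortcut described above: typing "sleep"
# in a routine lab order surfaces three sleep-friendly timing options.
# These dictionaries are illustrative paraphrases, not real EHR records.
SLEEP_FRIENDLY_OPTIONS = [
    {"interval_hours": 48, "draw_time": "06:00"},  # 48-hour "step down" draw
    {"interval_hours": 24, "draw_time": "06:00"},  # daily morning draw
    {"interval_hours": 24, "draw_time": "22:00"},  # daily evening draw
]

def order_search(keyword: str) -> list[dict]:
    """Return sleep-friendly timing options when the shortcut keyword is typed."""
    if keyword.strip().lower() == "sleep":
        return SLEEP_FRIENDLY_OPTIONS
    return []

print(len(order_search("sleep")))  # 3
```

The design choice worth noting is that the shortcut adds choices rather than changing defaults, which is consistent with the authors’ stated aim of preserving clinician autonomy.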

Physician Education

We created a 20-minute presentation on the harms of excessive labs and the benefits of sleep-friendly ordering. Instructional Order SMARTT posters were posted in clinician workrooms that emphasized forgoing labs on stable patients and using the “Order Sleep” shortcut when nonurgent labs were needed.

Labs Utilization Data

We used Epic Systems software (Verona, Wisconsin) and our institutional Tableau scorecard to obtain data on CBC and BMP ordering, patient census, and demographics for medical inpatients between July 1, 2017, and November 1, 2018.

Cost Analysis

Costs of lab tests (actual cost to our institution) were obtained from our institutional phlebotomy services’ estimates of direct variable labor and benefits costs and direct variable supplies cost.

Statistical Analysis

Data analysis was performed with SAS version 9.4 (SAS Institute, Cary, North Carolina) and R version 3.6.2 (R Foundation for Statistical Computing, Vienna, Austria). Descriptive statistics were used to summarize data. Surveys were analyzed using chi-square tests for categorical variables and two-sample t tests for continuous variables. For lab-ordering data, interrupted time series analysis (ITSA) was used to determine changes in ordering practices with the implementation of the two interventions, controlling for service line (resident vs hospitalist service). ITSA enables examination of changes in lab ordering while controlling for time. The AUTOREG procedure in SAS was used to build the model and estimate final parameters; it automatically tests for autocorrelation and heteroscedasticity and estimates any autoregressive parameters required in the model. Our main model tested the association of the two separate interventions with ordering practices, controlling for service (hospitalist or resident).16
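To illustrate the segmented-regression structure underlying an ITSA (the study itself used SAS PROC AUTOREG, which additionally corrects for autocorrelated errors), the following Python sketch fits level-change and slope-change terms for a single intervention on simulated weekly data. All numbers here are made up for illustration:

```python
# Illustrative segmented (interrupted time series) regression on fake data.
# Not the study's model: it omits autoregressive error correction and the
# service-line covariate, and uses plain least squares.
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(40, dtype=float)            # study week index
post = (weeks >= 20).astype(float)            # 1 after the intervention launch
time_since = np.where(post == 1.0, weeks - 20, 0.0)

# Simulated outcome: baseline level and trend, then a level jump and slope change
y = 2.0 + 0.01 * weeks + 0.5 * post + 0.05 * time_since + rng.normal(0, 0.05, 40)

# Design matrix: intercept, baseline trend, level change, slope change
X = np.column_stack([np.ones_like(weeks), weeks, post, time_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "trend", "level_change", "slope_change"], coef)))
```

The "level change" coefficient corresponds to the immediate intercept shifts reported in the Results, and the "slope change" coefficient to the changes over time.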

RESULTS

Of 125 residents, 82 (65.6%) attended the session and completed the survey. The attendance and response rate for hospitalists was 80% (16 of 20). Similar to a prior study, most residents (73.1%) reported they would be comfortable if patients received less daily laboratory testing (Appendix Table 2).

We reviewed data from 7,045 total patients over 50,951 total patient days between July 1, 2017, and November 1, 2018 (Appendix Table 3).

Total Lab Draws

After accounting for total patient-days, we observed a 26.3% average reduction in total lab draws per patient-day per week postintervention (4.68 before vs 3.45 after; difference, 1.23; 95% CI, 0.82-1.63; P < .05; Appendix Table 3). When total lab draws were stratified by service, we observed a 28% average reduction in lab draws per patient-day per week on resident services (4.67 before vs 3.36 after; difference, 1.31; 95% CI, 0.88-1.74; P < .05) and a 23.9% average reduction on the hospitalist service (4.73 before vs 3.60 after; difference, 1.13; 95% CI, 0.61-1.64; P < .05; Appendix Table 3).
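The reported percentage reductions can be recomputed directly from the before/after rates quoted above:

```python
# Recompute the percentage reductions from the before/after rates in the text
# (lab draws per patient-day per week).
pairs = {
    "overall": (4.68, 3.45),
    "resident services": (4.67, 3.36),
    "hospitalist service": (4.73, 3.60),
}
for name, (before, after) in pairs.items():
    pct = 100 * (before - after) / before
    print(f"{name}: {before} -> {after}, reduction {pct:.1f}%")
```

These recomputed values (26.3%, 28.1%, and 23.9%) agree with the figures reported in the text.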

Sleep-Friendly Labs by Intervention

For patients with routine labs, the proportion of sleep-friendly labs drawn per patient-day increased from 6% preintervention to 21% postintervention (P < .001). ITSA demonstrated that both interventions were associated with improved lab timing. There was a statistically significant increase in sleep-friendly labs ordered per patient encounter per week immediately after the launch of “Order Sleep” (intercept, 0.49; standard error [SE], 0.14; P = .001) and the “4 am Labs” column (intercept, 0.32; SE, 0.13; P = .02; Table, Figure A).

Summary of Sleep-Friendly Lab Orders

Sleep-Friendly Lab Orders by Service

Over the study period, there was no significant difference in total sleep-friendly labs ordered per month between resident and hospitalist services (84.88 vs 86.19; P = .95).

In ITSA, “Order Sleep” was associated with a statistically significant immediate increase in sleep-friendly lab orders per patient encounter per week on resident services (intercept, 1.03; SE, 0.29; P < .001). However, this initial increase was followed by a decrease over time in sleep-friendly lab orders per week (slope change, –0.1; SE, 0.04; P = .02; Table, Figure B). There was no statistically significant change observed on the hospitalist service with “Order Sleep.”

Run chart of sleep-friendly lab orders per unique patient encounter per week

In contrast, the “4 am Labs” column was associated with a statistically significant immediate increase in sleep-friendly lab orders per patient encounter per week on hospitalist service (intercept, 1.17; SE, 0.50; P = .02; Table, Figure B). While there was no immediate change on resident service, we observed a significant increase over time in sleep-friendly orders per encounter per week on resident services with the introduction of the “4 am Labs” column (slope change, 0.11; SE, 0.04; P = .01; Table, Figure B).

Cost Savings

Using an estimated cost of $7.70 for CBCs and $8.01 for BMPs from our laboratory, our intervention saved an estimated $60,278 in lab costs alone over the 16-month study period (Appendix Table 4).
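The savings arithmetic is simply averted tests multiplied by per-test cost, using the institutional costs quoted above. The averted-test counts in the usage example are hypothetical placeholders; the study’s actual tallies are in its Appendix Table 4:

```python
# Savings arithmetic using the per-test costs reported in the text.
# The counts passed to the function below are hypothetical, for illustration.
COST_CBC = 7.70  # institutional cost per complete blood count, USD
COST_BMP = 8.01  # institutional cost per basic metabolic panel, USD

def lab_savings(averted_cbcs: int, averted_bmps: int) -> float:
    """Dollar savings from averted CBC and BMP draws."""
    return averted_cbcs * COST_CBC + averted_bmps * COST_BMP

print(f"${lab_savings(4000, 3500):,.2f}")  # hypothetical counts
```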

DISCUSSION

To our knowledge, this is the first study showing a multicomponent intervention using EHR tools can both reduce frequency and optimize timing of routine lab ordering. Our project had two interventions implemented at two different times: First, an “Order Sleep” shortcut was introduced to select sleep-friendly lab timing, including a 6 am draw every 48 hours, and later, a “4 am Labs” column was added to electronic patient lists to passively nudge physicians to consider sleep-friendly labs. The “Order Sleep” tool was associated with a significant immediate increase in sleep-friendly lab ordering on resident services, while the “4 am Labs” column was associated with a significant immediate increase in sleep-friendly lab ordering on the hospitalist service. An overall reduction in total lab draws was seen on both services.

While the “Order Sleep” tool was initially associated with significant increases in sleep-friendly orders on resident services, this change was not sustained. This may reflect the short-lived effect of education more than sustained adoption of the tool. In contrast, the “4 am Labs” column on the patient list resulted in a significant, sustained increase in sleep-friendly labs on resident services. Thus, while residents responded to both tools, only the “4 am Labs” column was associated with a lasting change in practice.

The “4 am Labs” column on patient lists was associated with increased adoption of sleep-friendly labs on the hospitalist service. Hospitalists care for a larger census with more frequent handoffs and rely more heavily on the patient list, making the patient list an important target for value-improvement efforts.

While other institutions have attempted to shift lab timing by altering phlebotomy workflows10 or via conscious decision-making on rounds,9 our study differs in several ways. We avoided default options and allowed clinicians to select sleep-friendly labs to promote buy-in. It is sometimes necessary to order 4 am labs for sick patients who need urgent decision-making, which highlights the need to preserve this option for clinicians. Similarly, our intervention did not aim to eliminate lab draws entirely but to offer a more judicious frequency of every 48 hours, consistent with the survey preferences noted. This intervention encouraged reappraisal of patients’ overall need for labs and created variability in ordering times, reducing the volume of labs ordered at 4 am.

Our study had several limitations. First, this was a single-center study on adult medicine services, which limits generalizability. Although we considered surgical services, their early rounds made deviations from 4 am draws undesirable. Given the observational study design, we cannot assume causal relationships or rule out secular trends. There were large swings in sleep-friendly lab ordering during our study that could be attributed to different physicians rotating onto the services monthly. We did not obtain objective data on patient sleep or patient satisfaction because of the low response rate to the HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems) survey.

In conclusion, a multicomponent intervention using EHR tools can reduce inpatient daily lab frequency and optimize lab timing to help promote patient sleep.

Acknowledgments

The authors would like to thank The University of Chicago Center for Healthcare Delivery Science and Innovation for sponsoring their annual Choosing Wisely Challenge, which allowed for access to institutional support and resources for this study. We would also like to thank Mary Kate Springman, MHA, and John Fahrenbach, PhD, for their assistance with this project. Dr Tapaskar also received mentorship through the Future Leader Program for the High Value Practice Academic Alliance.


References

1. Eaton KP, Levy K, Soong C, et al. Evidence-based guidelines to eliminate repetitive laboratory testing. JAMA Intern Med. 2017;177(12):1833-1839. https://doi.org/10.1001/jamainternmed.2017.5152
2. Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? J Gen Intern Med. 2005;20(6):520-524. https://doi.org/10.1111/j.1525-1497.2005.0094.x
3. Korenstein D, Husain S, Gennarelli RL, White C, Masciale JN, Roman BR. Impact of clinical specialty on attitudes regarding overuse of inpatient laboratory testing. J Hosp Med. 2018;13(12):844-847. https://doi.org/10.12788/jhm.2978
4. Choosing Wisely. 2020. Accessed January 10, 2020. http://www.choosingwisely.org/getting-started/
5. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492. https://doi.org/10.1002/jhm.2063
6. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: reducing the cost of daily phlebotomy. Arch Surg. 2011;146(5):524-527. https://doi.org/10.1001/archsurg.2011.103
7. Attali M, Barel Y, Somin M, et al. A cost-effective method for reducing the volume of laboratory tests in a university-associated teaching hospital. Mt Sinai J Med. 2006;73(5):787-794.
8. Vidyarthi AR, Hamill T, Green AL, Rosenbluth G, Baron RB. Changing resident test ordering behavior: a multilevel intervention to decrease laboratory utilization at an academic medical center. Am J Med Qual. 2015;30(1):81-87. https://doi.org/10.1177/1062860613517502
9. Krafft CA, Biondi EA, Leonard MS, et al. Ending the 4 AM Blood Draw. Presented at: American Academy of Pediatrics Experience; October 25, 2015, Washington, DC. Accessed January 10, 2020. https://aap.confex.com/aap/2015/webprogrampress/Paper31640.html
10. Ramarajan V, Chima HS, Young L. Implementation of later morning specimen draws to improve patient health and satisfaction. Lab Med. 2016;47(1):e1-e4. https://doi.org/10.1093/labmed/lmv013
11. Delaney LJ, Van Haren F, Lopez V. Sleeping on a problem: the impact of sleep disturbance on intensive care patients - a clinical review. Ann Intensive Care. 2015;5:3. https://doi.org/10.1186/s13613-015-0043-2
12. Knutson KL, Spiegel K, Penev P, Van Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163-178. https://doi.org/10.1016/j.smrv.2007.01.002
13. Ho A, Raja B, Waldhorn R, Baez V, Mohammed I. New onset of insomnia in hospitalized patients in general medical wards: incidence, causes, and resolution rate. J Community Hosp Int. 2017;7(5):309-313. https://doi.org/10.1080/20009666.2017.1374108
14. Arora VM, Machado N, Anderson SL, et al. Effectiveness of SIESTA on objective and subjective metrics of nighttime hospital sleep disruptors. J Hosp Med. 2019;14(1):38-41. https://doi.org/10.12788/jhm.3091
15. Roman BR, Yang A, Masciale J, Korenstein D. Association of Attitudes Regarding Overuse of Inpatient Laboratory Testing With Health Care Provider Type. JAMA Intern Med. 2017;177(8):1205-1207. https://doi.org/10.1001/jamainternmed.2017.1634
16. Penfold RB, Zhang F. Use of interrupted time series analysis in evaluating health care quality improvements. Acad Pediatr. 2013;13(6 Suppl):S38-S44. https://doi.org/10.1016/j.acap.2013.08.002

Article Source

© 2020 Society of Hospital Medicine

Correspondence Location
Natalie Tapaskar, MD; Email: [email protected]; Telephone: 630-303-6574; Twitter: @NatalieTapaskar.

Describing Variability of Inpatient Consultation Practices: Physician, Patient, and Admission Factors

Article Type
Changed
Thu, 03/25/2021 - 12:19

Inpatient consultation is an extremely common practice with the potential to improve patient outcomes significantly.1-3 However, variability in consultation practices may be risky for patients. In addition to underuse when the benefit is clear, the overuse of consultation may lead to additional testing and therapies, increased length of stay (LOS) and costs, conflicting recommendations, and opportunities for communication breakdown.

Consultation use is often at the discretion of individual providers. While this decision is frequently driven by patient needs, significant variation in consultation practices not fully explained by patient factors exists.1 Prior work has described hospital-level variation1 and that primary care physicians use more consultation than hospitalists.4 However, other factors affecting consultation remain unknown. We sought to explore physician-, patient-, and admission-level factors associated with consultation use on inpatient general medicine services.

METHODS

Study Design

We conducted a retrospective analysis of data from the University of Chicago Hospitalist Project (UCHP). UCHP is a longstanding study of the care of hospitalized patients admitted to the University of Chicago general medicine services, involving both patient data collection and physician experience surveys.5 Data were obtained for enrolled UCHP patients between 2011-2016 from the Center for Research Informatics (CRI). The University of Chicago Institutional Review Board approved this study.

Data Collection

Attendings and patients consented to UCHP participation. Data collection details are described elsewhere.5,6 Data from EpicCare (EpicSystems Corp, Wisconsin) and Centricity Billing (GE Healthcare, Illinois) were obtained via CRI for all encounters of enrolled UCHP patients during the study period (N = 218,591).

Attending Attribution

We determined attending attribution for admissions as follows: the attending author of the first history and physical (H&P) was assigned. If this was unavailable, the attending author of the first progress note (PN) was assigned. For patients admitted by hospitalists on admitting shifts to nonteaching services (ie, service without residents/students), the author of the first PN was assigned if different from H&P. Where available, attribution was corroborated with call schedules.
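The attribution cascade above can be sketched as a small helper function. This is an illustration only, not the study's actual implementation; the argument names and the `nonteaching_admitting` flag are hypothetical.

```python
def attribute_attending(hp_author, first_pn_author, nonteaching_admitting=False):
    """Assign an admission's attending per the cascade described above.

    hp_author: attending author of the first H&P (None if unavailable).
    first_pn_author: attending author of the first progress note (None if unavailable).
    nonteaching_admitting: True when a hospitalist on an admitting shift admitted
    the patient to a nonteaching service (hypothetical flag name).
    """
    # On nonteaching services, prefer the first PN author when it differs from
    # the H&P author (the admitting hospitalist often writes only the H&P).
    if nonteaching_admitting and first_pn_author and first_pn_author != hp_author:
        return first_pn_author
    if hp_author is not None:
        return hp_author
    # Fall back to the first progress-note author; None means attribution failed.
    return first_pn_author
```

In the study, admissions where this cascade returned no attending were excluded, and results were corroborated against call schedules where available.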

Sample and Variables

All encounters containing inpatient admissions to the University of Chicago from May 10, 2011 (Electronic Health Record activation date), through December 31, 2016, were considered for inclusion (N = 51,171, Appendix 1). Admissions including only documentation from ancillary services were excluded (eg, encounters for hemodialysis or physical therapy). Admissions were limited to a length of stay (LOS) ≤ 5 days, corresponding to the average US inpatient LOS of 4.6 days,7 to minimize the likelihood of attending handoffs (N = 31,592). If attending attribution was not possible via the above-described methods, the admission was eliminated (N = 3,103; 10.9% of admissions with LOS ≤ 5 days). Finally, the sample was restricted to general medicine service admissions under attendings enrolled in UCHP who completed surveys. After the application of all criteria, 6,153 admissions remained for analysis.
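The inclusion criteria above amount to a sequence of filters; a minimal sketch follows, with hypothetical record fields standing in for the EHR and billing data actually used.

```python
def eligible(admission):
    """Return True if an admission meets the study's inclusion criteria.

    The dict keys are hypothetical names for illustration only.
    """
    if admission["ancillary_only"]:          # e.g., hemodialysis- or PT-only encounters
        return False
    if admission["los_days"] > 5:            # restrict to LOS <= 5 days
        return False
    if admission["attending"] is None:       # attending attribution failed
        return False
    return admission["attending_surveyed"]   # attending enrolled in UCHP with a survey
```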

The outcome variable was the number of consultations per admission, determined by counting the number of unique services creating clinical documentation and subtracting one for the primary team. If the Medical/Surgical intensive care unit (ICU) was among these services, two were subtracted to account for the ICU transfer.
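This counting rule is easily stated in code; the service names below are illustrative, not the study's actual service labels.

```python
def count_consultations(documenting_services,
                        primary_team="General Medicine",
                        icu="Medical/Surgical ICU"):
    """Consultations per admission: unique documenting services, minus one for
    the primary team, minus one more if the ICU documented (an ICU transfer
    rather than a consultation)."""
    services = set(documenting_services)
    n = len(services) - 1      # subtract the primary team
    if icu in services:
        n -= 1                 # ICU presence reflects a transfer, not a consult
    return n
```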

Attending years in practice (ie, years since medical school graduation) and gender were determined from public resources. Practice characteristics were determined from UCHP attending surveys, which address perceptions of workload and satisfaction (Appendix 2).

Patient characteristics (gender, age, Elixhauser Indices) and admission characteristics (LOS, season of admission, payor) were determined from UCHP and CRI data. The Elixhauser Index uses a well-validated system combining the presence/absence of 31 comorbidities to predict mortality and 30-day readmission.8 Elixhauser Indices were calculated using the “Creation of Elixhauser Comorbidity Index Scores 1.0” software.9 For admissions under hospitalist attendings, teaching/nonteaching team was ascertained via internal teaching service calendars.

Analysis

We used descriptive statistics to examine demographic characteristics. The difference in consultation use between the lowest and highest quartiles was assessed via a two-sample t test. Given the multilevel nature of our count data, we used a mixed-effects Poisson model accounting for within-group variation by clustering on attending and patient (3-level random-effects model). Analyses were performed using Stata 15 (StataCorp, Texas).
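At its core, this model treats each admission's consultation count as Poisson-distributed around an expected rate. A minimal sketch of the Poisson log-likelihood follows; it omits the attending- and patient-level random intercepts that Stata's mixed-effects machinery adds, and the rates are passed in directly rather than derived from covariates.

```python
import math

def poisson_loglik(counts, expected_rates):
    """Poisson log-likelihood for per-admission consultation counts.

    In the paper's 3-level model, expected_rates would come from
    exp(X*beta + random intercepts); here they are supplied directly.
    """
    ll = 0.0
    for y, mu in zip(counts, expected_rates):
        # log P(Y = y) for Poisson(mu): y*log(mu) - mu - log(y!)
        ll += y * math.log(mu) - mu - math.lgamma(y + 1)
    return ll
```

Maximizing this quantity (plus the random-effects terms) over the regression coefficients is what the mixed-effects fit does; exponentiated coefficients are then interpretable as rate ratios.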

RESULTS

From 2011 to 2016, 14,848 patients and 88 attendings were enrolled in UCHP; 4,772 patients (32%) and 69 attendings (59.4%) had data available and were included. Mean LOS was 3.0 days (SD = 1.3). Table 1 describes the characteristics of attendings, patients, and admissions.

Seventy-six percent of admissions included at least one consultation. Consultation use varied widely, ranging from 0 to 10 per admission (mean = 1.39, median = 1; standard deviation [SD] = 1.17). The number of consultations per admission in the highest quartile of consultation frequency (mean = 3.47, median = 3) was 5.7-fold that of the lowest quartile (mean = 0.613, median = 1; P <.001).
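The quartile comparison is a simple ratio of the reported means, which can be verified directly:

```python
highest_quartile_mean = 3.47   # mean consultations per admission, top quartile
lowest_quartile_mean = 0.613   # mean consultations per admission, bottom quartile

# 3.47 / 0.613 ~ 5.66, reported as 5.7-fold
fold_difference = highest_quartile_mean / lowest_quartile_mean
```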

In multivariable regression, physician-, patient-, and admission-level characteristics were associated with the differential use of consultation (Table 2). On teaching services, consultations called by hospitalist vs nonhospitalist generalists did not differ (P =.361). However, hospitalists on nonteaching services called 8.6% more consultations than hospitalists on teaching services (P =.02). Attending agreement with survey item “The interruption of my personal life by work is a problem” was associated with 8.2% fewer consultations per admission (P =.002).

Patients older than 75 years received 19% fewer consultations compared with patients younger than 49 years (P <.001). Compared with Medicare, Medicaid admissions had 12.2% fewer consultations (P <.001), whereas privately insured admissions had 10.7% more (P =.001). The number of consultations per admission decreased every year, with 45.3% fewer consultations in 2015 than in 2011 (P <.001). Consultations increased by 22% with each one-day increase in LOS (P <.001).
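Because Poisson coefficients are multiplicative, the 22% per additional day of LOS compounds rather than adds, as a quick illustration shows:

```python
rate_ratio_per_day = 1.22   # 22% more consultations per additional day of LOS

def los_multiplier(extra_days):
    """Expected multiplicative change in consultations for extra_days more
    days of LOS, under the reported per-day rate ratio."""
    return rate_ratio_per_day ** extra_days
```

So two extra days correspond to roughly a 49% increase, not 44%.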

DISCUSSION

Our analysis described several physician-, patient-, and admission-level characteristics associated with the use of inpatient consultation. Our results strengthen prior work demonstrating that patient-level factors alone are insufficient to explain consultation variability.1

Hospitalists on nonteaching services called more consultations, which may reflect a higher workload on these services. Busy hospitalists on nonteaching teams may lack time to delve deeply into clinical problems and require more consultations, especially for work with heavy cognitive loads such as diagnosis. “Outsourcing” tasks when workload increases occurs in other cognitive activities such as teaching.10 The association between work interrupting personal life and fewer consultations may also implicate the effects of time. Attendings who are experiencing work encroaching on their personal lives may be those spending more time with patients and consulting less. This finding merits further study, especially with increasing concern about balancing time spent in meaningful patient care activities with risk of physician burnout.

This finding could also indicate that trainee participation modifies consultation use for hospitalists. Teaching service teams with more individual members may allow a greater pool of collective knowledge, decreasing the need for consultation to answer clinical questions.11 Interestingly, there was no difference in consultation use between generalists or subspecialists and hospitalists on teaching services, possibly suggesting a unique effect in hospitalists who vary clinical practice depending on team structure. These differences deserve further investigation, with implications for education and resource utilization.

We were surprised by the finding that consultations decreased each year, despite increasing patient complexity and availability of consultation services. This could be explained by a growing emphasis on shortening LOS at our institution, thus shifting consultative care to outpatient settings. Understanding these effects is critically important: given growing evidence that consultation improves patient outcomes, such external pressures could lead to unintended consequences for quality or access to care.

Several findings related to patient factors additionally emerged, including age and insurance status. Although related to medical complexity, these effects persisted despite adjustment, which raises the question of whether they contribute to the decision to seek consultation. Older patients received fewer consultations, which could reflect the use of more conservative practice models in the elderly,12 or ageism, which is associated with undertreatment.13 With respect to insurance status, Medicaid admissions were associated with fewer consultations. This finding is consistent with previous work showing the decreased intensity of hospital services used for Medicaid patients.14

Our study has limitations. Our data were from one large urban academic center, which limits generalizability. Although systematic and redundant, attending attribution may have been flawed: incomplete or erroneous documentation could have led to attribution error, and we cannot rule out the possibility of service handoffs. We used LOS ≤ 5 days to minimize this possibility, but this limits the applicability of our findings to longer admissions. Unsurprisingly, longer LOS correlated with increased use of consultation even within our restricted sample, and future work should examine the effects of prolonged LOS. As a retrospective analysis, unmeasured confounders from our limited adjustment may explain some findings, although we took steps to address this in our statistical design. Finally, we could not measure patient outcomes and, therefore, cannot determine the value of more or fewer consultations for specific patients or illnesses. Both positive and negative outcomes of increased consultation have been described, and understanding the impact of consultation is critical for further study.2,3

CONCLUSION

We found that the use of consultation on general medicine services varies widely between admissions, with large differences between the highest and lowest frequencies of use. This variation can be partially explained by several physician-, patient-, and admission-level characteristics. Our work may help identify patient and attending groups at high risk for under- or overuse of consultation and guide the subsequent development of interventions to improve value in consultation. One additional consultation over the average LOS of 4.6 days adds $420 per admission, or $4.8 billion across the 11.5 million annual Medicare admissions.15 Increasing research, guidelines, and education on the judicious use of inpatient consultation will be key to maximizing high-value care and improving patient outcomes.
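The national cost estimate in the conclusion is straightforward arithmetic on the cited figures:

```python
cost_per_consultation = 420        # dollars added per admission (ref 15)
medicare_admissions = 11_500_000   # annual Medicare admissions

# 420 * 11.5 million = $4.83 billion, reported as $4.8 billion
national_cost = cost_per_consultation * medicare_admissions
```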

Acknowledgments

The authors would like to acknowledge the invaluable support and assistance of the University of Chicago Hospitalist Project, the Pritzker School of Medicine Summer Research Program, the University of Chicago Center for Quality, and the University of Chicago Center for Health and the Social Sciences (CHeSS). The authors would additionally like to thank John Cursio, PhD, for his support and guidance in statistical analysis for this project.

Disclaimer

The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The funders had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Preliminary results of this analysis were presented at the 2018 Society of Hospital Medicine Annual Meeting in Orlando, Florida. All coauthors have seen and agree with the contents of the manuscript. The submission is not under review by any other publication.

References

1. Stevens JP, Nyweide D, Maresh S, et al. Variation in inpatient consultation among older adults in the United States. J Gen Intern Med. 2015;30(7):992-999. https://doi.org/10.1007/s11606-015-3216-7.
2. Lahey T, Shah R, Gittzus J, Schwartzman J, Kirkland K. Infectious diseases consultation lowers mortality from Staphylococcus aureus bacteremia. Medicine (Baltimore). 2009;88(5):263-267. https://doi.org/10.1097/MD.0b013e3181b8fccb.
3. Morrison RS, Dietrich J, Ladwig S, et al. Palliative care consultation teams cut hospital costs for Medicaid beneficiaries. Health Aff Proj Hope. 2011;30(3):454-463. https://doi.org/10.1377/hlthaff.2010.0929.
4. Stevens JP, Nyweide DJ, Maresh S, Hatfield LA, Howell MD, Landon BE. Comparison of hospital resource use and outcomes among hospitalists, primary care physicians, and other generalists. JAMA Intern Med. 2017;177(12):1781. https://doi.org/10.1001/jamainternmed.2017.5824.
5. Meltzer D. Effects of physician experience on costs and outcomes on an academic general medicine service: Results of a trial of hospitalists. Ann Intern Med. 2002;137(11):866. https://doi.org/10.7326/0003-4819-137-11-200212030-00007.
6. Martin SK, Farnan JM, Flores A, Kurina LM, Meltzer DO, Arora VM. Exploring entrustment: Housestaff autonomy and patient readmission. Am J Med. 2014;127(8):791-797. https://doi.org/10.1016/j.amjmed.2014.04.013.
7. HCUP-US NIS Overview. https://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed July 7, 2017.
8. Austin SR, Wong Y-N, Uzzo RG, Beck JR, Egleston BL. Why summary comorbidity measures such as the Charlson Comorbidity Index and Elixhauser Score work. Med Care. 2015;53(9):e65-e72. https://doi.org/10.1097/MLR.0b013e318297429c.
9. Elixhauser Comorbidity Software. Elixhauser Comorbidity Software. https://www.hcup-us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp#references. Accessed May 13, 2019.
10. Roshetsky LM, Coltri A, Flores A, et al. No time for teaching? Inpatient attending physicians’ workload and teaching before and after the implementation of the 2003 duty hours regulations. Acad Med J Assoc Am Med Coll. 2013;88(9):1293-1298. https://doi.org/10.1097/ACM.0b013e31829eb795.
11. Barnett ML, Boddupalli D, Nundy S, Bates DW. Comparative accuracy of diagnosis by collective intelligence of multiple physicians vs individual physicians. JAMA Netw Open. 2019;2(3):e190096. https://doi.org/10.1001/jamanetworkopen.2019.0096.
12. Aoyama T, Kunisawa S, Fushimi K, Sawa T, Imanaka Y. Comparison of surgical and conservative treatment outcomes for type A aortic dissection in elderly patients. J Cardiothorac Surg. 2018;13(1):129. https://doi.org/10.1186/s13019-018-0814-6.
13. Lindau ST, Schumm LP, Laumann EO, Levinson W, O’Muircheartaigh CA, Waite LJ. A study of sexuality and health among older adults in the United States. N Engl J Med. 2007;357(8):762-774. https://doi.org/10.1056/NEJMoa067423.
14. Yergan J, Flood AB, Diehr P, LoGerfo JP. Relationship between patient source of payment and the intensity of hospital services. Med Care. 1988;26(11):1111-1114. https://doi.org/10.1097/00005650-198811000-00009.
15. Center for Medicare and Medicaid Services. MDCR INPT HOSP 1.; 2008. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/CMSProgramStatistics/2013/Downloads/MDCR_UTIL/CPS_MDCR_INPT_HOSP_1.pdf. Accessed April 15, 2018.

Author and Disclosure Information

1University of Chicago Pritzker School of Medicine, Chicago, Illinois; 2Department of Medicine, University of Chicago, Chicago, Illinois.

Disclosures

The authors have nothing to disclose.

Funding

The authors acknowledge funding from the Alliance of Academic Internal Medicine 2017 Innovation Grant; the American Board of Medical Specialties Visiting Scholars Program; the National Heart, Lung, and Blood Institute Grant# K24 – HL136859; and the National Institute on Aging Grant #4T35AG029795-10. This project was also supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH) through Grant Number 5UL1TR002389-02 that funds the Institute for Translational Medicine.

Issue
Journal of Hospital Medicine. 2020;15(3):164-168. Published Online First February 19, 2020.
Author and Disclosure Information

1University of Chicago Pritzker School of Medicine, Chicago, Illinois; 2Department of Medicine, University of Chicago, Chicago, Illinois.

Disclosures

The authors have nothing to disclose.

Funding

The authors acknowledge funding from the Alliance of Academic Internal Medicine 2017 Innovation Grant; the American Board of Medical Specialties Visiting Scholars Program; the National Heart, Lung, and Blood Institute Grant# K24 – HL136859; and the National Institute on Aging Grant #4T35AG029795-10. This project was also supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH) through Grant Number 5UL1TR002389-02 that funds the Institute for Translational Medicine.

Author and Disclosure Information

1University of Chicago Pritzker School of Medicine, Chicago, Illinois; 2Department of Medicine, University of Chicago, Chicago, Illinois.

Disclosures

The authors have nothing to disclose.

Funding

The authors acknowledge funding from the Alliance of Academic Internal Medicine 2017 Innovation Grant; the American Board of Medical Specialties Visiting Scholars Program; the National Heart, Lung, and Blood Institute Grant# K24 – HL136859; and the National Institute on Aging Grant #4T35AG029795-10. This project was also supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH) through Grant Number 5UL1TR002389-02 that funds the Institute for Translational Medicine.

Article PDF
Article PDF
Related Articles

Inpatient consultation is an extremely common practice with the potential to improve patient outcomes significantly.1-3 However, variability in consultation practices may be risky for patients. In addition to underuse when the benefit is clear, the overuse of consultation may lead to additional testing and therapies, increased length of stay (LOS) and costs, conflicting recommendations, and opportunities for communication breakdown.

Consultation use is often at the discretion of individual providers. While this decision is frequently driven by patient needs, significant variation in consultation practices not fully explained by patient factors exists.1 Prior work has described hospital-level variation1 and that primary care physicians use more consultation than hospitalists.4 However, other factors affecting consultation remain unknown. We sought to explore physician-, patient-, and admission-level factors associated with consultation use on inpatient general medicine services.

METHODS

Study Design

We conducted a retrospective analysis of data from the University of Chicago Hospitalist Project (UCHP). UCHP is a longstanding study of the care of hospitalized patients admitted to the University of Chicago general medicine services, involving both patient data collection and physician experience surveys.5 Data were obtained for enrolled UCHP patients between 2011-2016 from the Center for Research Informatics (CRI). The University of Chicago Institutional Review Board approved this study.

Data Collection

Attendings and patients consented to UCHP participation. Data collection details are described elsewhere.5,6 Data from EpicCare (EpicSystems Corp, Wisconsin) and Centricity Billing (GE Healthcare, Illinois) were obtained via CRI for all encounters of enrolled UCHP patients during the study period (N = 218,591).

Attending Attribution

We determined attending attribution for admissions as follows: the attending author of the first history and physical (H&P) was assigned. If this was unavailable, the attending author of the first progress note (PN) was assigned. For patients admitted by hospitalists on admitting shifts to nonteaching services (ie, service without residents/students), the author of the first PN was assigned if different from H&P. Where available, attribution was corroborated with call schedules.

Sample and Variables

All encounters containing inpatient admissions to the University of Chicago from May 10, 2011 (Electronic Health Record activation date), through December 31, 2016, were considered for inclusion (N = 51,171, Appendix 1). Admissions including only documentation from ancillary services were excluded (eg, encounters for hemodialysis or physical therapy). Admissions were limited to a length of stay (LOS) ≤ 5 days, corresponding to the average US inpatient LOS of 4.6 days,7 to minimize the likelihood of attending handoffs (N = 31,592). If attending attribution was not possible via the above-described methods, the admission was eliminated (N = 3,103; 10.9% of admissions with LOS ≤ 5 days). Finally, the sample was restricted to general medicine service admissions under attendings enrolled in UCHP who completed surveys. After the application of all criteria, 6,153 admissions remained for analysis.

 

 

The outcome variable was the number of consultations per admission, determined by counting the unique number of services creating clinical documentation, and subtracting one for the primary team. If the Medical/Surgical intensive care unit (ICU) was a service, then two were subtracted to account for the ICU transfer.

Attending years in practice (ie, years since medical school graduation) and gender were determined from public resources. Practice characteristics were determined from UCHP attending surveys, which address perceptions of workload and satisfaction (Appendix 2).

Patient characteristics (gender, age, Elixhauser Indices) and admission characteristics (LOS, season of admission, payor) were determined from UCHP and CRI data. The Elixhauser Index uses a well-validated system combining the presence/absence of 31 comorbidities to predict mortality and 30-day readmission.8 Elixhauser Indices were calculated using the “Creation of Elixhauser Comorbidity Index Scores 1.0” software.9 For admissions under hospitalist attendings, teaching/nonteaching team was ascertained via internal teaching service calendars.

Analysis

We used descriptive statistics to examine demographic characteristics. The difference between the lowest and highest quartile consultation use was determined via a two-sample t test. Given the multilevel nature of our count data, we used a mixed-effects Poisson model accounting for within-group variation by clustering on attending and patient (3-level random-effects model). The analysis was done using Stata 15 (StataCorp, Texas).

RESULTS

From 2011 to 2016, 14,848 patients and 88 attendings were enrolled in UCHP; 4,772 patients (32%) and 69 attendings (59.4%) had data available and were included. Mean LOS was 3.0 days (SD = 1.3). Table 1 describes the characteristics of attendings, patients, and admissions.

Seventy-six percent of admissions included at least one consultation. Consultation use varied widely, ranging from 0 to 10 per admission (mean = 1.39, median = 1; standard deviation [SD] = 1.17). The number of consultations per admission in the highest quartile of consultation frequency (mean = 3.47, median = 3) was 5.7-fold that of the lowest quartile (mean = 0.613, median = 1; P <.001).

In multivariable regression, physician-, patient-, and admission-level characteristics were associated with the differential use of consultation (Table 2). On teaching services, consultations called by hospitalist vs nonhospitalist generalists did not differ (P =.361). However, hospitalists on nonteaching services called 8.6% more consultations than hospitalists on teaching services (P =.02). Attending agreement with survey item “The interruption of my personal life by work is a problem” was associated with 8.2% fewer consultations per admission (P =.002).

Patients older than 75 years received 19% fewer consultations compared with patients younger than 49 years (P <.001). Compared with Medicare, Medicaid admissions had 12.2% fewer consultations (P <.001), whereas privately insured admissions had 10.7% more (P =.001). The number of consultations per admission decreased every year, with 45.3% fewer consultations in 2015 than 2011 (P <.001). Consultations increased by each 22% per day increase in LOS (P <.001).

DISCUSSION

Our analysis described several physician-, patient-, and admission-level characteristics associated with the use of inpatient consultation. Our results strengthen prior work demonstrating that patient-level factors alone are insufficient to explain consultation variability.1

 

 

Hospitalists on nonteaching services called more consultations, which may reflect a higher workload on these services. Busy hospitalists on nonteaching teams may lack time to delve deeply into clinical problems and require more consultations, especially for work with heavy cognitive loads such as diagnosis. “Outsourcing” tasks when workload increases occurs in other cognitive activities such as teaching.10 The association between work interrupting personal life and fewer consultations may also implicate the effects of time. Attendings who are experiencing work encroaching on their personal lives may be those spending more time with patients and consulting less. This finding merits further study, especially with increasing concern about balancing time spent in meaningful patient care activities with risk of physician burnout.

This finding could also indicate that trainee participation modifies consultation use for hospitalists. Teaching service teams with more individual members may allow a greater pool of collective knowledge, decreasing the need for consultation to answer clinical questions.11 Interestingly, there was no difference in consultation use between generalists or subspecialists and hospitalists on teaching services, possibly suggesting a unique effect in hospitalists who vary clinical practice depending on team structure. These differences deserve further investigation, with implications for education and resource utilization.

We were surprised by the finding that consultations decreased each year, despite increasing patient complexity and availability of consultation services. This could be explained by a growing emphasis on shortening LOS in our institution, thus shifting consultative care to outpatient settings. Understanding these effects is critically important with growing evidence that consultation improves patient outcomes because these external pressures could lead to unintended consequences for quality or access to care.

Several findings related to patient factors additionally emerged, including age and insurance status. Although related to medical complexity, these effects persist despite adjustment, which raises the question of whether they contribute to the decision to seek consultation. Older patients received fewer consultations, which could reflect the use of more conservative practice models in the elderly,12 or ageism, which is associated with undertreatment.13 With respect to insurance status, Medicaid patients were associated with fewer consultations. This finding is consistent with previous work showing the decreased intensity of hospital services used for Medicaid patients.14Our study has limitations. Our data were from one large urban academic center that limits generalizability. Although systematic and redundant, attending attribution may have been flawed: incomplete or erroneous documentation could have led to attribution error, and we cannot rule out the possibility of service handoffs. We used a LOS ≤ 5 days to minimize this possibility, but this limits the applicability of our findings to longer admissions. Unsurprisingly, longer LOS correlated with the increased use of consultation even within our restricted sample, and future work should examine the effects of prolonged LOS. As a retrospective analysis, unmeasured confounders due to our limited adjustment will likely explain some findings, although we took steps to address this in our statistical design. Finally, we could not measure patient outcomes and, therefore, cannot determine the value of more or fewer consultations for specific patients or illnesses. Positive and negative outcomes of increased consultation are described, and understanding the impact of consultation is critical for further study.2,3

 

 

CONCLUSION

We found that the use of consultation on general medicine services varies widely between admissions, with large differences between the highest and lowest frequencies of use. This variation can be partially explained by several physician-, patient-, and admission-level characteristics. Our work may help identify patient and attending groups at high risk for under- or overuse of consultation and guide the subsequent development of interventions to improve value in consultation. One additional consultation over the average LOS of 4.6 days adds $420 per admission or $4.8 billion to the 11.5 million annual Medicare admissions.15 Increasing research, guidelines, and education on the judicious use of inpatient consultation will be key in maximizing high-value care and improving patient outcomes.

Acknowledgments

The authors would like to acknowledge the invaluable support and assistance of the University of Chicago Hospitalist Project, the Pritzker School of Medicine Summer Research Program, the University of Chicago Center for Quality, and the University of Chicago Center for Health and the Social Sciences (CHeSS). The authors would additionally like to thank John Cursio, PhD, for his support and guidance in statistical analysis for this project.

Disclaimer

The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The funders had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Preliminary results of this analysis were presented at the 2018 Society of Hospital Medicine Annual Meeting in Orlando, Florida. All coauthors have seen and agree with the contents of the manuscript. The submission is not under review by any other publication.

Inpatient consultation is an extremely common practice with the potential to improve patient outcomes significantly.1-3 However, variability in consultation practices may be risky for patients. In addition to underuse when the benefit is clear, the overuse of consultation may lead to additional testing and therapies, increased length of stay (LOS) and costs, conflicting recommendations, and opportunities for communication breakdown.

Consultation use is often at the discretion of individual providers. While this decision is frequently driven by patient needs, significant variation in consultation practices not fully explained by patient factors exists.1 Prior work has described hospital-level variation1 and that primary care physicians use more consultation than hospitalists.4 However, other factors affecting consultation remain unknown. We sought to explore physician-, patient-, and admission-level factors associated with consultation use on inpatient general medicine services.

METHODS

Study Design

We conducted a retrospective analysis of data from the University of Chicago Hospitalist Project (UCHP). UCHP is a longstanding study of the care of hospitalized patients admitted to the University of Chicago general medicine services, involving both patient data collection and physician experience surveys.5 Data were obtained for enrolled UCHP patients between 2011-2016 from the Center for Research Informatics (CRI). The University of Chicago Institutional Review Board approved this study.

Data Collection

Attendings and patients consented to UCHP participation. Data collection details are described elsewhere.5,6 Data from EpicCare (EpicSystems Corp, Wisconsin) and Centricity Billing (GE Healthcare, Illinois) were obtained via CRI for all encounters of enrolled UCHP patients during the study period (N = 218,591).

Attending Attribution

We determined attending attribution for admissions as follows: the attending author of the first history and physical (H&P) was assigned. If this was unavailable, the attending author of the first progress note (PN) was assigned. For patients admitted by hospitalists on admitting shifts to nonteaching services (ie, service without residents/students), the author of the first PN was assigned if different from H&P. Where available, attribution was corroborated with call schedules.

Sample and Variables

All encounters containing inpatient admissions to the University of Chicago from May 10, 2011 (Electronic Health Record activation date), through December 31, 2016, were considered for inclusion (N = 51,171, Appendix 1). Admissions including only documentation from ancillary services were excluded (eg, encounters for hemodialysis or physical therapy). Admissions were limited to a length of stay (LOS) ≤ 5 days, corresponding to the average US inpatient LOS of 4.6 days,7 to minimize the likelihood of attending handoffs (N = 31,592). If attending attribution was not possible via the above-described methods, the admission was eliminated (N = 3,103; 10.9% of admissions with LOS ≤ 5 days). Finally, the sample was restricted to general medicine service admissions under attendings enrolled in UCHP who completed surveys. After the application of all criteria, 6,153 admissions remained for analysis.


The outcome variable was the number of consultations per admission, determined by counting the unique number of services creating clinical documentation, and subtracting one for the primary team. If the Medical/Surgical intensive care unit (ICU) was a service, then two were subtracted to account for the ICU transfer.
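
This counting rule can be sketched as follows (an illustrative Python sketch with hypothetical service names, not the study's actual extraction code):

```python
def count_consultations(documenting_services, primary_team,
                        icu_service="Medical/Surgical ICU"):
    """Count consultations for one admission: the number of unique services
    creating clinical documentation, minus one for the primary team, minus
    one more if the ICU documented (reflecting a transfer, not a consult)."""
    unique_services = set(documenting_services)
    n = len(unique_services) - 1        # subtract the primary team
    if icu_service in unique_services:
        n -= 1                          # ICU presence counts as a transfer
    return max(n, 0)

# Example: primary team plus cardiology and nephrology notes -> 2 consultations
print(count_consultations(
    ["General Medicine", "Cardiology", "Nephrology"], "General Medicine"))  # 2
```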

Attending years in practice (ie, years since medical school graduation) and gender were determined from public resources. Practice characteristics were determined from UCHP attending surveys, which address perceptions of workload and satisfaction (Appendix 2).

Patient characteristics (gender, age, Elixhauser Indices) and admission characteristics (LOS, season of admission, payor) were determined from UCHP and CRI data. The Elixhauser Index uses a well-validated system combining the presence/absence of 31 comorbidities to predict mortality and 30-day readmission.8 Elixhauser Indices were calculated using the “Creation of Elixhauser Comorbidity Index Scores 1.0” software.9 For admissions under hospitalist attendings, teaching/nonteaching team was ascertained via internal teaching service calendars.

Analysis

We used descriptive statistics to examine demographic characteristics. The difference in consultation use between the lowest and highest quartiles was assessed with a two-sample t test. Given the multilevel nature of our count data, we used a mixed-effects Poisson model accounting for within-group variation by clustering on attending and patient (a three-level random-effects model). Analyses were performed using Stata 15 (StataCorp, Texas).
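
Poisson coefficients are interpreted as incidence rate ratios, that is, percentage differences in expected consultation counts. The conversion can be sketched with illustrative numbers (not study estimates):

```python
import math

# In a Poisson regression, a coefficient beta corresponds to an incidence
# rate ratio (IRR) of exp(beta): the multiplicative change in the expected
# count per one-unit change in that predictor.
def irr(beta):
    return math.exp(beta)

def pct_change(beta):
    """Percentage change in expected count implied by coefficient beta."""
    return (math.exp(beta) - 1) * 100

# Illustrative values only: a 22% increase per LOS day corresponds to
# beta = ln(1.22); an 8.2% decrease corresponds to beta = ln(0.918).
print(round(pct_change(math.log(1.22))))   # 22
print(round(irr(math.log(0.918)), 3))      # 0.918
```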

RESULTS

From 2011 to 2016, 14,848 patients and 88 attendings were enrolled in UCHP; 4,772 patients (32%) and 69 attendings (59.4%) had data available and were included. Mean LOS was 3.0 days (SD = 1.3). Table 1 describes the characteristics of attendings, patients, and admissions.

Seventy-six percent of admissions included at least one consultation. Consultation use varied widely, ranging from 0 to 10 per admission (mean = 1.39; median = 1; SD = 1.17). The number of consultations per admission in the highest quartile of consultation frequency (mean = 3.47, median = 3) was 5.7-fold that of the lowest quartile (mean = 0.613, median = 1; P < .001).

In multivariable regression, physician-, patient-, and admission-level characteristics were associated with differential use of consultation (Table 2). On teaching services, the number of consultations called by hospitalists and nonhospitalist generalists did not differ (P = .361). However, hospitalists on nonteaching services called 8.6% more consultations than hospitalists on teaching services (P = .02). Attending agreement with the survey item “The interruption of my personal life by work is a problem” was associated with 8.2% fewer consultations per admission (P = .002).

Patients older than 75 years received 19% fewer consultations than patients younger than 49 years (P < .001). Compared with Medicare admissions, Medicaid admissions had 12.2% fewer consultations (P < .001), whereas privately insured admissions had 10.7% more (P = .001). The number of consultations per admission decreased every year, with 45.3% fewer consultations in 2015 than in 2011 (P < .001). Consultations increased by 22% for each one-day increase in LOS (P < .001).

DISCUSSION

Our analysis described several physician-, patient-, and admission-level characteristics associated with the use of inpatient consultation. Our results strengthen prior work demonstrating that patient-level factors alone are insufficient to explain consultation variability.1


Hospitalists on nonteaching services called more consultations, which may reflect a higher workload on these services. Busy hospitalists on nonteaching teams may lack time to delve deeply into clinical problems and thus require more consultations, especially for cognitively demanding work such as diagnosis. “Outsourcing” tasks when workload increases occurs in other cognitive activities, such as teaching.10 The association between work interrupting personal life and fewer consultations may also implicate the effects of time: attendings whose work encroaches on their personal lives may be those spending more time with patients and consulting less. This finding merits further study, especially given increasing concern about balancing time spent in meaningful patient care activities against the risk of physician burnout.

This finding could also indicate that trainee participation modifies consultation use by hospitalists. Teaching service teams with more individual members may draw on a greater pool of collective knowledge, decreasing the need for consultation to answer clinical questions.11 Interestingly, there was no difference in consultation use between hospitalists and generalists or subspecialists on teaching services, possibly suggesting a unique effect among hospitalists, who may vary their clinical practice depending on team structure. These differences deserve further investigation, with implications for education and resource utilization.

We were surprised to find that consultations decreased each year, despite increasing patient complexity and availability of consultation services. This could be explained by a growing emphasis on shortening LOS at our institution, shifting consultative care to outpatient settings. Understanding these effects is critically important: with growing evidence that consultation improves patient outcomes, such external pressures could lead to unintended consequences for quality of, or access to, care.

Several findings related to patient factors additionally emerged, including age and insurance status. Although related to medical complexity, these effects persisted despite adjustment, raising the question of whether they contribute to the decision to seek consultation. Older patients received fewer consultations, which could reflect more conservative practice models in the elderly,12 or ageism, which is associated with undertreatment.13 With respect to insurance status, Medicaid admissions were associated with fewer consultations, consistent with previous work showing decreased intensity of hospital services for Medicaid patients.14

Our study has limitations. Our data were from one large urban academic center, which limits generalizability. Although systematic and redundant, attending attribution may have been flawed: incomplete or erroneous documentation could have led to attribution error, and we cannot rule out the possibility of service handoffs. We used a LOS ≤ 5 days to minimize this possibility, but this limits the applicability of our findings to longer admissions. Unsurprisingly, longer LOS correlated with increased use of consultation even within our restricted sample, and future work should examine the effects of prolonged LOS. As a retrospective analysis, unmeasured confounders may explain some findings, although we took steps to address this in our statistical design. Finally, we could not measure patient outcomes and therefore cannot determine the value of more or fewer consultations for specific patients or illnesses. Both positive and negative outcomes of increased consultation have been described, and understanding the impact of consultation remains critical for further study.2,3


CONCLUSION

We found that the use of consultation on general medicine services varies widely between admissions, with large differences between the highest and lowest frequencies of use. This variation can be partially explained by several physician-, patient-, and admission-level characteristics. Our work may help identify patient and attending groups at high risk for under- or overuse of consultation and guide the development of interventions to improve value in consultation. One additional consultation over the average LOS of 4.6 days adds $420 per admission, or $4.8 billion across the 11.5 million annual Medicare admissions.15 Increased research, guidelines, and education on the judicious use of inpatient consultation will be key to maximizing high-value care and improving patient outcomes.
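
The cost figure follows from simple arithmetic on the cited per-consultation cost and Medicare admission volume:

```python
# Back-of-the-envelope check of the cost estimate, using the figures
# stated in the text (per-consultation cost and reference-15 admission volume).
cost_per_consult = 420            # dollars added per additional consultation
medicare_admissions = 11.5e6      # annual Medicare admissions

total_billions = cost_per_consult * medicare_admissions / 1e9
print(f"${total_billions:.2f} billion")   # about $4.8 billion
```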

Acknowledgments

The authors would like to acknowledge the invaluable support and assistance of the University of Chicago Hospitalist Project, the Pritzker School of Medicine Summer Research Program, the University of Chicago Center for Quality, and the University of Chicago Center for Health and the Social Sciences (CHeSS). The authors would additionally like to thank John Cursio, PhD, for his support and guidance in statistical analysis for this project.

Disclaimer

The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The funders had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Preliminary results of this analysis were presented at the 2018 Society of Hospital Medicine Annual Meeting in Orlando, Florida. All coauthors have seen and agree with the contents of the manuscript. The submission is not under review by any other publication.

References

1. Stevens JP, Nyweide D, Maresh S, et al. Variation in inpatient consultation among older adults in the United States. J Gen Intern Med. 2015;30(7):992-999. https://doi.org/10.1007/s11606-015-3216-7.
2. Lahey T, Shah R, Gittzus J, Schwartzman J, Kirkland K. Infectious diseases consultation lowers mortality from Staphylococcus aureus bacteremia. Medicine (Baltimore). 2009;88(5):263-267. https://doi.org/10.1097/MD.0b013e3181b8fccb.
3. Morrison RS, Dietrich J, Ladwig S, et al. Palliative care consultation teams cut hospital costs for Medicaid beneficiaries. Health Aff Proj Hope. 2011;30(3):454-463. https://doi.org/10.1377/hlthaff.2010.0929.
4. Stevens JP, Nyweide DJ, Maresh S, Hatfield LA, Howell MD, Landon BE. Comparison of hospital resource use and outcomes among hospitalists, primary care physicians, and other generalists. JAMA Intern Med. 2017;177(12):1781. https://doi.org/10.1001/jamainternmed.2017.5824.
5. Meltzer D. Effects of physician experience on costs and outcomes on an academic general medicine service: Results of a trial of hospitalists. Ann Intern Med. 2002;137(11):866. https://doi.org/10.7326/0003-4819-137-11-200212030-00007.
6. Martin SK, Farnan JM, Flores A, Kurina LM, Meltzer DO, Arora VM. Exploring entrustment: Housestaff autonomy and patient readmission. Am J Med. 2014;127(8):791-797. https://doi.org/10.1016/j.amjmed.2014.04.013.
7. HCUP-US NIS Overview. https://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed July 7, 2017.
8. Austin SR, Wong Y-N, Uzzo RG, Beck JR, Egleston BL. Why summary comorbidity measures such as the Charlson Comorbidity Index and Elixhauser Score work. Med Care. 2015;53(9):e65-e72. https://doi.org/10.1097/MLR.0b013e318297429c.
9. Elixhauser Comorbidity Software. Elixhauser Comorbidity Software. https://www.hcup-us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp#references. Accessed May 13, 2019.
10. Roshetsky LM, Coltri A, Flores A, et al. No time for teaching? Inpatient attending physicians’ workload and teaching before and after the implementation of the 2003 duty hours regulations. Acad Med J Assoc Am Med Coll. 2013;88(9):1293-1298. https://doi.org/10.1097/ACM.0b013e31829eb795.
11. Barnett ML, Boddupalli D, Nundy S, Bates DW. Comparative accuracy of diagnosis by collective intelligence of multiple physicians vs individual physicians. JAMA Netw Open. 2019;2(3):e190096. https://doi.org/10.1001/jamanetworkopen.2019.0096.
12. Aoyama T, Kunisawa S, Fushimi K, Sawa T, Imanaka Y. Comparison of surgical and conservative treatment outcomes for type A aortic dissection in elderly patients. J Cardiothorac Surg. 2018;13(1):129. https://doi.org/10.1186/s13019-018-0814-6.
13. Lindau ST, Schumm LP, Laumann EO, Levinson W, O’Muircheartaigh CA, Waite LJ. A study of sexuality and health among older adults in the United States. N Engl J Med. 2007;357(8):762-774. https://doi.org/10.1056/NEJMoa067423.
14. Yergan J, Flood AB, Diehr P, LoGerfo JP. Relationship between patient source of payment and the intensity of hospital services. Med Care. 1988;26(11):1111-1114. https://doi.org/10.1097/00005650-198811000-00009.
15. Center for Medicare and Medicaid Services. MDCR INPT HOSP 1.; 2008. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/CMSProgramStatistics/2013/Downloads/MDCR_UTIL/CPS_MDCR_INPT_HOSP_1.pdf. Accessed April 15, 2018.


Issue
Journal of Hospital Medicine 15(3)
Page Number
164-168. Published Online First February 19, 2020.

© 2020 Society of Hospital Medicine

Correspondence
Marika Kachman, BA; Email: [email protected]; Telephone: 773-702-2604

Top Qualifications Hospitalist Leaders Seek in Candidates: Results from a National Survey


Hospital Medicine (HM) is medicine’s fastest growing specialty.1 Rapid expansion of the field has been met with rising interest by young physicians, many of whom are first-time job seekers and may desire information on best practices for applying and interviewing in HM.2-4 However, no prior work has examined HM-specific candidate qualifications and qualities that may be most valued in the hiring process.

As members of the Society of Hospital Medicine (SHM) Physicians in Training Committee, a group charged with “prepar[ing] trainees and early career hospitalists in their transition into hospital medicine,” we aimed to fill this knowledge gap around the HM-specific hiring process.

METHODS

Survey Instrument

The authors developed the survey based on their expertise as HM interviewers (JAD, AH, CD, EE, BK, DS, and SM) and local and national interview workshop leaders (JAD, CD, BK, SM). The questionnaire focused on objective applicant qualifications and on qualities and attributes displayed during interviews (Appendix 1). Content, length, and reliability of physician understanding were assessed via feedback from local HM group leaders.

Respondents were asked to provide nonidentifying demographics and their role in their HM group’s hiring process. If they reported no role, the survey was terminated. Subsequent standardized HM group demographic questions were adapted from the SHM State of Hospital Medicine Report.5

Survey questions were multiple-choice, ranking, and free-response items aimed at understanding how respondents assess HM candidate attributes, skills, and behavior. For ranking questions, answer choice order was randomized to reduce order-based bias. One free-response question asked the respondent to provide a unique interview question they use that “reveals the most about a hospitalist candidate.” Responses were then individually inserted into the list of choices for a subsequent ranking question regarding the most important qualities a candidate must demonstrate.

Respondents were asked four open-ended questions designed to understand the approach to candidate assessment: (1) use of unique interview questions (as above); (2) identification of “red flags” during interviews; (3) distinctions between assessment of long-term (LT) career hospitalist candidates versus short-term (ST) candidates (eg, those seeking positions prior to fellowship); and (4) key qualifications of ST candidates.

Survey Administration

Survey recipients were identified via SHM administrative rosters. Surveys were distributed electronically via SHM to all current nontrainee physician members who reported a United States mailing address. The survey was determined to not constitute human subjects research by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations.


Data Analysis

Multiple-choice responses were analyzed descriptively. For ranking-type questions, answers were weighted based on ranking order.
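
The exact weighting scheme is not specified in the text; one common approach consistent with "weighted based on ranking order" is Borda-style scoring, in which a first-place rank in a top-5 list earns 5 points, second place 4, and so on. A hypothetical sketch:

```python
from collections import Counter

def weighted_rank_scores(rankings, top_n=5):
    """Aggregate top-N ranking responses into weighted scores.
    Each respondent's list is ordered best-first; rank r earns top_n - r + 1 points."""
    scores = Counter()
    for response in rankings:
        for rank, item in enumerate(response[:top_n], start=1):
            scores[item] += top_n - rank + 1
    return scores

# Hypothetical responses ranking candidate qualifications
responses = [
    ["clinical skills", "communication", "teamwork"],
    ["communication", "clinical skills"],
]
print(weighted_rank_scores(responses).most_common(3))
```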

Responses to all open-ended survey questions were analyzed using thematic analysis. We used an iterative process to develop and refine codes identifying key concepts that emerged from the data. Three authors independently coded survey responses. As a group, research team members established the coding framework and resolved discrepancies via discussion to achieve consensus.

RESULTS

Survey links were sent to 8,398 e-mail addresses, of which 7,306 were undeliverable or unopened, leaving 1,092 total eligible respondents. Of these, 347 (31.8%) responded.

A total of 236 respondents reported having a formal role in HM hiring. Of these, 79.0% were one-on-one interviewers, 49.6% group interviewers, 45.5% telephone/videoconference interviewers, 41.5% participated on a selection committee, and 32.1% identified as the ultimate decision-maker. Regarding graduate medical education teaching status, 42.0% of respondents identified their primary workplace as a community/affiliated teaching hospital, 33.0% as a university-based teaching hospital, and 23.0% as a nonteaching hospital. Additional characteristics are reported in Appendix 2.

Quantitative Analysis

Respondents ranked the top five qualifications of HM candidates and the top five qualities a candidate should demonstrate on the interview day to be considered for hiring (Table 1).

When asked to rate agreement with the statement “I evaluate and consider all hospital medicine candidates similarly, regardless of whether they articulate an interest in hospital medicine as a long-term career or as a short-term position before fellowship,” 99 (57.2%) respondents disagreed.

Qualitative Analysis

Thematic analysis of responses to open-ended survey questions identified several “red flag” themes (Table 2). Negative interactions with current providers or staff were commonly noted. Additional red flags were a lack of knowledge of or interest in the specific HM group, an inability to articulate career goals, and abnormalities in employment history or application materials. Respondents also identified an overly strong focus on lifestyle or salary as a factor that might limit a candidate’s chances of advancing in the hiring process.

Responses to free-text questions additionally highlighted preferred questioning techniques and approaches to HM candidate assessment (Appendix 3). Many interview questions addressed candidate interest in a particular HM program and candidate responses to challenging scenarios they had encountered. Other questions explored career development. Respondents wanted LT candidates to have specific HM career goals, while they expected ST candidates to demonstrate commitment to and appreciation of HM as a discipline.

Some respondents described their approach to candidate assessment in terms of investment and risk. LT candidates were often viewed as investments in stability and performance; they were evaluated on current abilities and future potential as related to group-specific goals. Some respondents viewed hiring ST candidates as riskier, given concerns that they might be less engaged or integrated with the group. Others viewed hiring LT candidates as comparatively riskier, relating the longer time commitment to the potential for greater impact on the group and patient care. Accordingly, these respondents viewed ST candidate hiring as less risky, estimating that the shorter time commitment limits both positive and negative impact while offering the benefit of addressing urgent staffing issues or filling less desirable positions. One respondent summarized: “If they plan to be a career candidate, I care more about them as people and future coworkers. Short term folks are great if we are in a pinch and can deal with personality issues for a short period of time.”

Respondents also described how valued candidate qualities could help mitigate the risk inherent in hiring, especially for ST hires. Strong interpersonal and teamwork skills were highlighted, as well as a demonstrated record of clinical excellence, evidenced by strong training backgrounds and superlative references. A key factor aiding ST hiring decisions was prior knowledge of the candidate, such as residents or moonlighters who had previously worked at the respondent’s institution. This allowed familiarity with the candidate’s clinical acumen as well as perceived ease of onboarding and knowledge of the system.


DISCUSSION

We present the results of a national survey of hospitalists identifying candidate attributes, skills, and behaviors viewed most favorably by those involved in the HM hiring process. To our knowledge, this is the first research to be published on the topic of evaluating HM candidates.

Survey respondents identified demonstrable HM candidate clinical skills and experience as highly important, consistent with prior research identifying clinical skills as being among those that hospitalists most value.6 Based on these responses, job seekers should be prepared to discuss objective measures of clinical experience when appropriate, such as number of cases seen or procedures performed. HM groups may accordingly consider the use of hiring rubrics or scoring systems to standardize these measures and reduce bias.

Respondents also highly valued more subjective assessments of HM applicants’ candidacy. The most highly ranked action item was a candidate’s ability to meaningfully respond to a respondent’s customized interview question. There was also a preference for candidates who were knowledgeable about and interested in the specifics of a particular HM group. The high value placed on these elements may suggest the need for formalized coaching or interview preparation for HM candidates. Similarly, interviewer emphasis on customized questions may also highlight an opportunity for HM groups to internally standardize how to best approach subjective components of the interview.

Our heterogeneous findings on the distinctions between ST and LT candidate hiring practices support the need for additional research on the ST HM job market. Until then, our findings reinforce the importance of applicant transparency about ST versus LT career goals. Although many programs may prefer LT candidates over ST candidates, our results suggest ST candidates may benefit from targeting groups with ST needs and using the application process as an opportunity to highlight certain mitigating strengths.

Our study has limitations. While our population included diverse national representation, the response rate and demographics of our respondents may limit generalizability beyond our study population. Respondents represented multiple perspectives within the HM hiring process and were not limited to those making the final hiring decisions. For questions with prespecified multiple-choice answers, answer choices may have influenced participant responses. Our conclusions are based on the reported preferences of those involved in the HM hiring process and not actual hiring behavior. Future research should attempt to identify factors (eg, region, graduate medical education status, practice setting type) that may be responsible for some of the heterogeneous themes we observed in our analysis.

Our research represents introductory work on the previously unpublished topic of HM-specific hiring practices. These findings may provide relevant insight for trainees considering careers in HM, hospitalists reentering the job market, and those involved in career advising, professional development, and the HM hiring process.

Acknowledgments

The authors would like to acknowledge current and former members of SHM’s Physicians in Training Committee whose feedback and leadership helped to inspire this project, as well as those students, residents, and hospitalists who have participated in our Hospital Medicine Annual Meeting interview workshop.

Disclosures

The authors have no conflicts of interest to disclose.

References

1. Wachter RM, Goldman L. Zero to 50,000-The 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Leyenaar JK, Frintner MP. Graduating pediatric residents entering the hospital medicine workforce, 2006-2015. Acad Pediatr. 2018;18(2):200-207. https://doi.org/10.1016/j.acap.2017.05.001.
3. Ratelle JT, Dupras DM, Alguire P, Masters P, Weissman A, West CP. Hospitalist career decisions among internal medicine residents. J Gen Intern Med. 2014;29(7):1026-1030. https://doi.org/10.1007/s11606-014-2811-3.
4. Sweigart JR, Tad-Y D, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12(3):173-176. https://doi.org/10.12788/jhm.2703.
5. 2016 State of Hospital Medicine Report. 2016. https://www.hospitalmedicine.org/practice-management/shms-state-of-hospital-medicine/. Accessed July 1, 2017.
6. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254. https://doi.org/10.1016/S0002-9343(01)00837-3.

Issue
Journal of Hospital Medicine 14(12)
Page Number
754-757. Published online first July 24, 2019


Responses to free-text questions additionally highlighted preferred questioning techniques and approaches to HM candidate assessment (Appendix 3). Many interview questions addressed candidate interest in a particular HM program and candidate responses to challenging scenarios they had encountered. Other questions explored career development. Respondents wanted LT candidates to have specific HM career goals, while they expected ST candidates to demonstrate commitment to and appreciation of HM as a discipline.

Some respondents described their approach to candidate assessment in terms of investment and risk. LT candidates were often viewed as investments in stability and performance; they were evaluated on current abilities and future potential as related to group-specific goals. Some respondents viewed hiring ST candidates as more risky given concerns that they might be less engaged or integrated with the group. Others viewed the hiring of LT candidates as comparably more risky, relating the longer time commitment to the potential for higher impact on the group and patient care. Accordingly, these respondents viewed ST candidate hiring as less risky, estimating their shorter time commitment as having less of a positive or negative impact, with the benefit of addressing urgent staffing issues or unfilled less desirable positions. One respondent summarized: “If they plan to be a career candidate, I care more about them as people and future coworkers. Short term folks are great if we are in a pinch and can deal with personality issues for a short period of time.”

Respondents also described how valued candidate qualities could help mitigate the risk inherent in hiring, especially for ST hires. Strong interpersonal and teamwork skills were highlighted, as well as a demonstrated record of clinical excellence, evidenced by strong training backgrounds and superlative references. A key factor aiding in ST hiring decisions was prior knowledge of the candidate, such as residents or moonlighters previously working in the respondent’s institution. This allowed for familiarity with the candidate’s clinical acumen as well as perceived ease of onboarding and knowledge of the system.

 

 

DISCUSSION

We present the results of a national survey of hospitalists identifying candidate attributes, skills, and behaviors viewed most favorably by those involved in the HM hiring process. To our knowledge, this is the first research to be published on the topic of evaluating HM candidates.

Survey respondents identified demonstrable HM candidate clinical skills and experience as highly important, consistent with prior research identifying clinical skills as being among those that hospitalists most value.6 Based on these responses, job seekers should be prepared to discuss objective measures of clinical experience when appropriate, such as number of cases seen or procedures performed. HM groups may accordingly consider the use of hiring rubrics or scoring systems to standardize these measures and reduce bias.

Respondents also highly valued more subjective assessments of HM applicants’ candidacy. The most highly ranked action item was a candidate’s ability to meaningfully respond to a respondent’s customized interview question. There was also a preference for candidates who were knowledgeable about and interested in the specifics of a particular HM group. The high value placed on these elements may suggest the need for formalized coaching or interview preparation for HM candidates. Similarly, interviewer emphasis on customized questions may also highlight an opportunity for HM groups to internally standardize how to best approach subjective components of the interview.

Our heterogeneous findings on the distinctions between ST and LT candidate hiring practices support the need for additional research on the ST HM job market. Until then, our findings reinforce the importance of applicant transparency about ST versus LT career goals. Although many programs may prefer LT candidates over ST candidates, our results suggest ST candidates may benefit from targeting groups with ST needs and using the application process as an opportunity to highlight certain mitigating strengths.

Our study has limitations. While our population included diverse national representation, the response rate and demographics of our respondents may limit generalizability beyond our study population. Respondents represented multiple perspectives within the HM hiring process and were not limited to those making the final hiring decisions. For questions with prespecified multiple-choice answers, answer choices may have influenced participant responses. Our conclusions are based on the reported preferences of those involved in the HM hiring process and not actual hiring behavior. Future research should attempt to identify factors (eg, region, graduate medical education status, practice setting type) that may be responsible for some of the heterogeneous themes we observed in our analysis.

Our research represents introductory work into the previously unpublished topic of HM-specific hiring practices. These findings may provide relevant insight for trainees considering careers in HM, hospitalists reentering the job market, and those involved in career advising, professional development and the HM hiring process.

Acknowledgments

The authors would like to acknowledge current and former members of SHM’s Physicians in Training Committee whose feedback and leadership helped to inspire this project, as well as those students, residents, and hospitalists who have participated in our Hospital Medicine Annual Meeting interview workshop.

Disclosures

The authors have no conflicts of interest to disclose.

 

 

Hospital Medicine (HM) is medicine’s fastest-growing specialty.1 Rapid expansion of the field has been met with rising interest by young physicians, many of whom are first-time job seekers and may desire information on best practices for applying and interviewing in HM.2-4 However, no prior work has examined HM-specific candidate qualifications and qualities that may be most valued in the hiring process.

As members of the Society of Hospital Medicine (SHM) Physicians in Training Committee, a group charged with “prepar[ing] trainees and early career hospitalists in their transition into hospital medicine,” we aimed to fill this knowledge gap around the HM-specific hiring process.

METHODS

Survey Instrument

The authors developed the survey based on expertise as HM interviewers (JAD, AH, CD, EE, BK, DS, and SM) and local and national interview workshop leaders (JAD, CD, BK, SM). The questionnaire focused on objective applicant qualifications and on qualities and attributes displayed during interviews (Appendix 1). Content, length, and reliability of physician understanding were assessed via feedback from local HM group leaders.

Respondents were asked to provide nonidentifying demographics and their role in their HM group’s hiring process. If they reported no role, the survey was terminated. Subsequent standardized HM group demographic questions were adapted from the Society of Hospital Medicine (SHM) State of Hospital Medicine Report.5

Survey questions were multiple-choice, ranking, and free-response items aimed at understanding how respondents assess HM candidate attributes, skills, and behavior. For ranking questions, answer-choice order was randomized to reduce order-based bias. One free-response question asked the respondent to provide a unique interview question they use that “reveals the most about a hospitalist candidate.” Responses were then individually inserted into the list of choices for a subsequent ranking question regarding the most important qualities a candidate must demonstrate.

Respondents were asked four open-ended questions designed to understand the approach to candidate assessment: (1) use of unique interview questions (as above); (2) identification of “red flags” during interviews; (3) distinctions between assessment of long-term (LT) career hospitalist candidates versus short-term (ST) candidates (eg, those seeking positions prior to fellowship); and (4) key qualifications of ST candidates.

Survey Administration

Survey recipients were identified via SHM administrative rosters. Surveys were distributed electronically via SHM to all current nontrainee physician members who reported a United States mailing address. The survey was determined to not constitute human subjects research by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations.

Data Analysis

Multiple-choice responses were analyzed descriptively. For ranking-type questions, answers were weighted based on ranking order.
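The paper does not specify the weighting scheme used for ranking-type questions. A common choice is a Borda-style positional weight, in which the top-ranked item of n earns n points, the second earns n − 1, and so on; a minimal sketch (the response data below are invented for illustration) might look like:

```python
from collections import defaultdict

def weighted_ranking_scores(rankings, n_ranks=5):
    """Aggregate ranked-choice responses with positional (Borda-style) weights.

    rankings: list of per-respondent lists, each ordered from rank 1 (top)
    down to rank n_ranks. Rank 1 earns n_ranks points, rank 2 earns
    n_ranks - 1, and so on.
    """
    scores = defaultdict(int)
    for response in rankings:
        for position, item in enumerate(response[:n_ranks]):
            scores[item] += n_ranks - position
    # Highest total score first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: three respondents each ranking three candidate qualities
responses = [
    ["clinical skill", "teamwork", "communication"],
    ["teamwork", "clinical skill", "work ethic"],
    ["clinical skill", "communication", "teamwork"],
]
print(weighted_ranking_scores(responses, n_ranks=3))
```

With these toy responses, "clinical skill" totals 8 points and tops the aggregate ranking, mirroring how a Table 1-style ordering could be produced from individual rankings.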

Responses to all open-ended survey questions were analyzed using thematic analysis. We used an iterative process to develop and refine codes identifying key concepts that emerged from the data. Three authors independently coded survey responses. As a group, research team members established the coding framework and resolved discrepancies via discussion to achieve consensus.
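The authors resolved coding discrepancies by discussion rather than reporting agreement statistics. Purely as an illustration of how discrepancies between independent coders can be surfaced, pairwise percent agreement could be computed as below (coder names and codes are invented, and this simplified one-code-per-response setup is an assumption, not the paper's method):

```python
from itertools import combinations

def pairwise_agreement(codings):
    """Percent agreement between each pair of coders.

    codings: dict mapping coder name -> list of codes assigned to the
    same ordered set of survey responses (one code per response, for
    simplicity).
    """
    results = {}
    for (a, ca), (b, cb) in combinations(codings.items(), 2):
        matches = sum(x == y for x, y in zip(ca, cb))
        results[(a, b)] = matches / len(ca)
    return results

# Hypothetical codes from three coders over four open-ended responses
codes = {
    "coder1": ["red_flag", "career_goals", "fit", "fit"],
    "coder2": ["red_flag", "career_goals", "fit", "red_flag"],
    "coder3": ["red_flag", "fit", "fit", "red_flag"],
}
print(pairwise_agreement(codes))
```

Low-agreement pairs flag the responses most worth revisiting in the consensus discussion.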

RESULTS

Survey links were sent to 8,398 e-mail addresses, of which 7,306 were undeliverable or unopened, leaving 1,092 total eligible respondents. Of these, 347 (31.8%) responded.

A total of 236 respondents reported having a formal role in HM hiring. Of these, 79.0% served as one-on-one interviewers, 49.6% as group interviewers, 45.5% as telephone/videoconference interviewers, 41.5% as members of a selection committee, and 32.1% as the ultimate decision-maker. Regarding graduate medical education teaching status, 42.0% of respondents identified their primary workplace as a community/affiliated teaching hospital, 33.05% as a university-based teaching hospital, and 23.0% as a nonteaching hospital. Additional characteristics are reported in Appendix 2.

Quantitative Analysis

Respondents ranked the top five qualifications of HM candidates and the top five qualities a candidate should demonstrate on the interview day to be considered for hiring (Table 1).

When asked to rate agreement with the statement “I evaluate and consider all hospital medicine candidates similarly, regardless of whether they articulate an interest in hospital medicine as a long-term career or as a short-term position before fellowship,” 99 (57.23%) respondents disagreed.

Qualitative Analysis

Thematic analysis of responses to open-ended survey questions identified several “red flag” themes (Table 2). Negative interactions with current providers or staff were commonly noted. Additional red flags were a lack of knowledge or interest in the specific HM group, an inability to articulate career goals, or abnormalities in employment history or application materials. Respondents identified an overly strong focus on lifestyle or salary as factors that might limit a candidate’s chance of advancing in the hiring process.

Responses to free-text questions additionally highlighted preferred questioning techniques and approaches to HM candidate assessment (Appendix 3). Many interview questions addressed candidate interest in a particular HM program and candidate responses to challenging scenarios they had encountered. Other questions explored career development. Respondents wanted LT candidates to have specific HM career goals, while they expected ST candidates to demonstrate commitment to and appreciation of HM as a discipline.

Some respondents described their approach to candidate assessment in terms of investment and risk. LT candidates were often viewed as investments in stability and performance; they were evaluated on current abilities and future potential as related to group-specific goals. Some respondents viewed hiring ST candidates as riskier, given concerns that they might be less engaged or integrated with the group. Others viewed hiring LT candidates as comparatively riskier, relating the longer time commitment to the potential for greater impact on the group and patient care. Accordingly, these respondents viewed ST candidate hiring as less risky, estimating that the shorter time commitment carried less positive or negative impact while offering the benefit of addressing urgent staffing issues or unfilled, less desirable positions. One respondent summarized: “If they plan to be a career candidate, I care more about them as people and future coworkers. Short term folks are great if we are in a pinch and can deal with personality issues for a short period of time.”

Respondents also described how valued candidate qualities could help mitigate the risk inherent in hiring, especially for ST hires. Strong interpersonal and teamwork skills were highlighted, as well as a demonstrated record of clinical excellence, evidenced by strong training backgrounds and superlative references. A key factor aiding in ST hiring decisions was prior knowledge of the candidate, such as residents or moonlighters previously working in the respondent’s institution. This allowed for familiarity with the candidate’s clinical acumen as well as perceived ease of onboarding and knowledge of the system.


DISCUSSION

We present the results of a national survey of hospitalists identifying candidate attributes, skills, and behaviors viewed most favorably by those involved in the HM hiring process. To our knowledge, this is the first research to be published on the topic of evaluating HM candidates.

Survey respondents identified demonstrable HM candidate clinical skills and experience as highly important, consistent with prior research identifying clinical skills as being among those that hospitalists most value.6 Based on these responses, job seekers should be prepared to discuss objective measures of clinical experience when appropriate, such as number of cases seen or procedures performed. HM groups may accordingly consider the use of hiring rubrics or scoring systems to standardize these measures and reduce bias.

Respondents also highly valued more subjective assessments of HM applicants’ candidacy. The most highly ranked action item was a candidate’s ability to meaningfully respond to a respondent’s customized interview question. There was also a preference for candidates who were knowledgeable about and interested in the specifics of a particular HM group. The high value placed on these elements may suggest the need for formalized coaching or interview preparation for HM candidates. Similarly, interviewer emphasis on customized questions may also highlight an opportunity for HM groups to internally standardize how to best approach subjective components of the interview.

Our heterogeneous findings on the distinctions between ST and LT candidate hiring practices support the need for additional research on the ST HM job market. Until then, our findings reinforce the importance of applicant transparency about ST versus LT career goals. Although many programs may prefer LT candidates over ST candidates, our results suggest ST candidates may benefit from targeting groups with ST needs and using the application process as an opportunity to highlight certain mitigating strengths.

Our study has limitations. While our population included diverse national representation, the response rate and demographics of our respondents may limit generalizability beyond our study population. Respondents represented multiple perspectives within the HM hiring process and were not limited to those making the final hiring decisions. For questions with prespecified multiple-choice answers, answer choices may have influenced participant responses. Our conclusions are based on the reported preferences of those involved in the HM hiring process and not actual hiring behavior. Future research should attempt to identify factors (eg, region, graduate medical education status, practice setting type) that may be responsible for some of the heterogeneous themes we observed in our analysis.

Our research represents introductory work into the previously unpublished topic of HM-specific hiring practices. These findings may provide relevant insight for trainees considering careers in HM, hospitalists reentering the job market, and those involved in career advising, professional development, and the HM hiring process.

Acknowledgments

The authors would like to acknowledge current and former members of SHM’s Physicians in Training Committee whose feedback and leadership helped to inspire this project, as well as those students, residents, and hospitalists who have participated in our Hospital Medicine Annual Meeting interview workshop.

Disclosures

The authors have no conflicts of interest to disclose.


References

1. Wachter RM, Goldman L. Zero to 50,000-The 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Leyenaar JK, Frintner MP. Graduating pediatric residents entering the hospital medicine workforce, 2006-2015. Acad Pediatr. 2018;18(2):200-207. https://doi.org/10.1016/j.acap.2017.05.001.
3. Ratelle JT, Dupras DM, Alguire P, Masters P, Weissman A, West CP. Hospitalist career decisions among internal medicine residents. J Gen Intern Med. 2014;29(7):1026-1030. doi: 10.1007/s11606-014-2811-3.
4. Sweigart JR, Tad-Y D, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12(3):173-176. doi: 10.12788/jhm.2703.
5. 2016 State of Hospital Medicine Report. Society of Hospital Medicine; 2016. https://www.hospitalmedicine.org/practice-management/shms-state-of-hospital-medicine/. Accessed July 1, 2017.
6. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254. https://doi.org/10.1016/S0002-9343(01)00837-3.


Issue
Journal of Hospital Medicine 14(12)
Page Number
754-757. Published online first July 24, 2019
Article Source
© 2019 Society of Hospital Medicine
Corresponding Author: Joshua Allen-Dicker, MD, MPH; E-mail: [email protected]; Telephone: 617-754-4677; Twitter: @DrJoshuaAD.
Effectiveness of SIESTA on Objective and Subjective Metrics of Nighttime Hospital Sleep Disruptors


Although sleep is critical to patient recovery in the hospital, hospitalization is not restful,1,2 and inpatient sleep deprivation has been linked to poor health outcomes.1-4 The American Academy of Nursing’s Choosing Wisely® campaign recommends nurses reduce unnecessary nocturnal care.5 However, interventions to improve inpatient sleep are not widely implemented.6 Targeting routine disruptions, such as overnight vital signs, by changing default settings in the electronic health record (EHR) with “nudges” could be a cost-effective strategy to improve inpatient sleep.4,7

We created Sleep for Inpatients: Empowering Staff to Act (SIESTA), which pairs nudges in the EHR with interprofessional education and empowerment,8 and tested its effectiveness on objectively and subjectively measured nocturnal sleep disruptors.

METHODS

Study Design

Two 18-room University of Chicago Medicine general-medicine units were used in this prospective study. The SIESTA-enhanced unit underwent the full sleep intervention: nursing education and empowerment, physician education, and EHR changes. The standard unit did not receive nursing interventions but received all other forms of intervention. Because physicians simultaneously cared for patients on both units, all internal medicine residents and hospitalists received the same education. The study population included physicians, nurses, and awake English-speaking patients who were cognitively intact and admitted to these two units. The University of Chicago Institutional Review Board approved this study (12-1766; 16685B).

Development of SIESTA

To develop SIESTA, patients were surveyed, and focus groups of staff were conducted; overnight vitals, medications, and phlebotomy were identified as major barriers to patient sleep.9 We found that physicians did not know how to change the default vital signs order “every 4 hours” or how to batch-order morning phlebotomy at a time other than 4:00 am. Nurses reported having to wake patients up at 1:00 am for q8h subcutaneous heparin.

Behavioral Nudges

The SIESTA team worked with clinical informaticists to change the default orders in Epic™ (Epic Systems Corporation, 2017, Verona, Wisconsin) in September 2015 so that physicians would be asked, “Continue vital signs throughout the night?”10 Previously, this question was marked “Yes” by default and hidden. While the default protocol for heparin q8h was maintained, heparin q12h (9:00 am and 9:00 pm) was introduced as an option, since q12h heparin is equally effective for venous thromboembolism (VTE) prophylaxis.11 Laboratory ordering was streamlined so that physicians could batch-order laboratory draws at 6:00 am or 10:00 pm.

SIESTA Physician Education

We created a 20-minute presentation on the consequences and causes of in-hospital sleep deprivation and evidence-based behavioral modification. We distributed pocket cards describing the mnemonic SIESTA (Screen patients for sleep disorders, Instruct patients on sleep hygiene, Eliminate disruptions, Shut doors, Treat pain, and Alarm and noise control). Physicians were instructed to consider forgoing overnight vitals, using clinical judgment to identify stable patients, use a sleep-promoting VTE prophylaxis option, and order daily labs at 10:00 pm or 6:00 am. An online educational module was sent to staff who missed live sessions due to days off.


SIESTA-Enhanced Unit

In the SIESTA-enhanced unit, nurses received education using pocket cards and were coached to collaborate with physicians to implement sleep-friendly orders. Customized signage depicting empowered nurses advocating for patients was posted near the huddle board. Because these nurses suggested adding SIESTA to the nurses’ ongoing daily huddles at 4:00 pm and 3:00 am, beginning on January 1, 2016, nurses were asked to identify at least two stable patients for sleep-friendly orders at the huddle. Night nurses incorporated SIESTA into their handoff to day nurses for eligible patients. Day nurses would then call physicians to advocate for sleep-friendly order changes.

Data Collection

Objectively Measured Sleep Disruptors

Adoption of SIESTA orders from March 2015 to March 2016 was assessed with a monthly Epic™ Clarity report. From August 1, 2015 to April 1, 2016, nocturnal room entries were recorded using the GOJO SMARTLINK™ Hand Hygiene system (GOJO Industries Inc., 2017, Akron, Ohio). This system includes two components: the hand-sanitizer dispensers, which track dispenses (numerator), and door-mounted Activity Counters, which use heat sensors that react to body heat emitted by a person passing through the doorway (denominator for hand-hygiene compliance). For our analysis, we only used Activity Counter data, which count room entries and exits, regardless of whether sanitizer was dispensed.

Patient-Reported Nighttime Sleep Disruptions

From June 2015 to March 2016, research assistants administered a 10-item Potential Hospital Sleep Disruptions and Noises Questionnaire (PHSDNQ) to patients in both units. Responses to this questionnaire correlate with actigraphy-based sleep measurements.9,12,13 Surveys were administered every other weekday to patients available to participate (eg, willing to participate, on the unit, awake). Survey data were stored on the REDCap Database (Version 6.14.0; Vanderbilt University, 2016, Nashville, Tennessee). Pre- and post-intervention Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) “top-box ratings” for percent quiet at night and percent pain well controlled were also compared.

Data Analysis

Objectively Measured Potential Sleep Disruptors

The proportion of sleep-friendly orders was analyzed using a two-sample test for proportions pre-post for the SIESTA-enhanced and standard units. The difference in use of SIESTA orders between units was analyzed via multivariable logistic regression, testing for independent associations between post-period, SIESTA-enhanced unit, and an interaction term (post-period × SIESTA unit) on use of sleep-friendly orders.
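The statistical software is not stated. A pooled two-sample z-test for proportions, the kind of pre-post comparison described above, can be sketched with the standard library; the counts below are hypothetical (the 306-order denominators echo the Results, but the sleep-friendly counts are invented):

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-sample z-test for proportions, e.g. comparing the share
    of sleep-friendly orders post- vs pre-intervention."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 135/306 sleep-friendly orders post vs 40/306 pre
z, p = two_proportion_ztest(135, 306, 40, 306)
print(round(z, 2), p < 0.001)
```

The multivariable logistic regression with the post-period × SIESTA-unit interaction would additionally adjust for unit and period main effects; that model is not reproduced here.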

Room entries per night (11:00 pm–7:00 am) were analyzed via single-group interrupted time-series. Multiple Activity Counter entries within three minutes were counted as a single room entry. In addition, the pre-post cutoff was set to 7:00 am, September 8, 2015; after the SIESTA launch, a second cutoff marking when SIESTA was added to the nurses’ MDI Huddle was added at 7:00 am, January 1, 2016.
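One way to implement the three-minute collapsing rule above is to keep a trigger only if it falls at least three minutes after the last counted entry (the text leaves open exactly how the window is anchored, so that choice is an assumption here, as are the timestamps):

```python
from datetime import datetime, timedelta

def collapse_entries(timestamps, window_minutes=3):
    """Collapse Activity Counter triggers: triggers within
    `window_minutes` of the last counted entry are treated as part of
    the same room entry."""
    entries = []
    for t in sorted(timestamps):
        if not entries or t - entries[-1] >= timedelta(minutes=window_minutes):
            entries.append(t)
    return entries

# Hypothetical triggers for one room during one night shift
ts = [datetime(2015, 9, 8, 2, 0), datetime(2015, 9, 8, 2, 1),
      datetime(2015, 9, 8, 2, 5), datetime(2015, 9, 8, 4, 30)]
print(len(collapse_entries(ts)))  # three distinct entries
```

The 2:01 am trigger is absorbed into the 2:00 am entry, while the 2:05 am and 4:30 am triggers count as new entries.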

Patient-Reported Nighttime Sleep Disruptions

Per prior studies, we defined a score of 2 or higher as “sleep disruption.”9 Differences between units were evaluated via multivariable logistic regression to examine the association between the interaction of post-period × SIESTA-enhanced unit and odds of not reporting a sleep disruption. Significance was defined as P < .05.
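The interaction odds ratios reported in the Results come from multivariable models, which are not reproduced here. As a simplified, unadjusted illustration with invented counts, an odds ratio and a Woolf (log-scale) 95% confidence interval from a 2×2 table can be computed as:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% CI from a
    2x2 table: a/b = outcome yes/no in one group, c/d in the other."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts of patients reporting no disruption vs disruption:
# SIESTA-enhanced unit (30 vs 10) and standard unit (20 vs 20)
or_, (lo, hi) = odds_ratio_ci(30, 10, 20, 20)
print(round(or_, 2), (round(lo, 2), round(hi, 2)))
```

A confidence interval excluding 1.0, as in this toy table, is what a significant unadjusted association would look like; the paper's regression additionally adjusts for main effects.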


RESULTS

Between March 2015 and March 2016, 1,083 general-medicine patients were admitted to the SIESTA-enhanced and standard units (Table).

Nocturnal Orders

From March 2015 to March 2016, 1,669 Epic™ general medicine orders were reviewed (Figure). In the SIESTA-enhanced unit, the mean percentage of sleep-friendly orders rose for both vital signs (+31% [95% CI = 25%, 36%]; P < .001; npre = 306, npost = 306) and VTE prophylaxis (+28% [95% CI = 18%, 37%]; P < .001; npre = 158, npost = 173). Similar changes were observed in the standard unit for sleep-friendly vital signs (+20% [95% CI = 14%, 25%]; P < .001; npre = 252, npost = 219) and VTE prophylaxis (+16% [95% CI = 6%, 25%]; P = .002; npre = 130, npost = 125). Differences between the two units were not statistically significant, and no significant change in timing of laboratory orders postintervention was found.

Nighttime Room Entries

Immediately after SIESTA launch, an average decrease of 114 total entries/night was noted in the SIESTA-enhanced unit ([95% CI = −138, −91]; P < .001), corresponding to a 44% reduction (−6.3 entries/room) from the mean of 14.3 entries per patient room at baseline (Figure). No statistically significant change was seen in the standard unit. After SIESTA was incorporated into nursing huddles, total disruptions/night decreased by 1.31 disruptions/night ([95% CI = −1.64, −0.98]; P < .001) in the SIESTA-enhanced unit; by comparison, no significant changes were observed in the standard unit.

Patient-Reported Nighttime Sleep Disruptions

Between June 2015 and March 2016, 201 patient surveys were collected. A significant interaction was observed between the SIESTA-enhanced unit and post-period, and patients in the SIESTA-enhanced unit were more likely to report not being disrupted by medications (OR 4.08 [95% CI = 1.13–14.07]; P = .031) and vital signs (OR 3.35 [95% CI = 1.00–11.2]; P = .05) than those in the standard unit. HCAHPS top-box scores for the SIESTA unit increased by 7% for the “Quiet at night” category and 9% for the “Pain well controlled” category; by comparison, no major changes (>5%) were observed in the standard unit.

DISCUSSION

The present SIESTA intervention demonstrated that physician education coupled with EHR default changes was associated with a significant reduction in orders for overnight vital signs and medication administration in both units. However, the addition of nursing education and empowerment in the SIESTA-enhanced unit was associated with fewer nocturnal room entries and improvements in patient-reported outcomes compared with those in the standard unit.

This study presents several implications for hospital initiatives aiming to improve patient sleep.14 Our study is consistent with other research supporting the hypothesis that altering the default settings of EHR systems can influence physician behavior in a sustainable manner.15 However, our study also finds that, even when sleep-friendly orders are present, creating a sleep-friendly environment likely depends on the unit-based nurses championing the cause. While the initial decrease in nocturnal room entries post-SIESTA eventually faded, sustainable changes were observed only after SIESTA was added to nursing huddles, which illustrates the importance of using multiple methods to nudge staff.

Our study includes a number of limitations. It is not a randomized controlled trial, we cannot assume causality, and contamination was assumed, as residents and hospitalists worked in both units. Our single-site study may not be generalizable. Low HCAHPS response rates (10%-20%) also prevent demonstration of statistically significant differences. Finally, our convenience sampling strategy means not all inpatients were surveyed, and objective sleep duration was not measured.

In summary, at the University of Chicago, SIESTA could be associated with adoption of sleep-friendly vitals and medication orders, a decrease in nighttime room entries, and improved patient experience.

 

 

Disclosures

The authors have nothing to disclose.

Funding

This study was funded by the National Institute on Aging (NIA Grant No. T35AG029795) and the National Heart, Lung, and Blood Institute (NHLBI Grant Nos. R25HL116372 and K24HL136859).

 

Files
References

1. Delaney LJ, Van Haren F, Lopez V. Sleeping on a problem: the impact of sleep disturbance on intensive care patients - a clinical review [published online ahead of print February 26, 2016]. Ann Intensive Care. 2015;5(3). doi: 10.1186/s13613-015-0043-2. PubMed
2. Arora VM, Chang KL, Fazal AZ, et al. Objective sleep duration and quality in hospitalized older adults: associations with blood pressure and mood. J Am Geriatr Soc. 2011;59(11):2185-2186. doi: 10.1111/j.1532-5415.2011.03644.x. PubMed
3. Knutson KL, Spiegel K, Penev P, Van Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163-178. doi: 10.1016/j.smrv.2007.01.002. PubMed
4. Manian FA, Manian CJ. Sleep quality in adult hospitalized patients with infection: an observational study. Am J Med Sci. 2015;349(1):56-60. doi: 10.1097/MAJ.0000000000000355. PubMed
5. American Academy of Nursing announced engagement in National Choosing Wisely Campaign. Nurs Outlook. 2015;63(1):96-98. doi: 10.1016/j.outlook.2014.12.017. PubMed
6. Gathecha E, Rios R, Buenaver LF, Landis R, Howell E, Wright S. Pilot study aiming to support sleep quality and duration during hospitalizations. J Hosp Med. 2016;11(7):467-472. doi: 10.1002/jhm.2578. PubMed
7. Fillary J, Chaplin H, Jones G, Thompson A, Holme A, Wilson P. Noise at night in hospital general wards: a mapping of the literature. Br J Nurs. 2015;24(10):536-540. doi: 10.12968/bjon.2015.24.10.536. PubMed
8. Thaler R, Sunstein C. Nudge: Improving Decisions About Health, Wealth and Happiness. Yale University Press; 2008. 
9. Grossman MN, Anderson SL, Worku A, et al. Awakenings? Patient and hospital staff perceptions of nighttime disruptions and their effect on patient sleep. J Clin Sleep Med. 2017;13(2):301-306. doi: 10.5664/jcsm.6468. PubMed
10. Yoder JC, Yuen TC, Churpek MM, Arora VM, Edelson DP. A prospective study of nighttime vital sign monitoring frequency and risk of clinical deterioration. JAMA Intern Med. 2013;173(16):1554-1555. doi: 10.1001/jamainternmed.2013.7791. PubMed
11. Phung OJ, Kahn SR, Cook DJ, Murad MH. Dosing frequency of unfractionated heparin thromboprophylaxis: a meta-analysis. Chest. 2011;140(2):374-381. doi: 10.1378/chest.10-3084. PubMed
12. Gabor JY, Cooper AB, Hanly PJ. Sleep disruption in the intensive care unit. Curr Opin Crit Care. 2001;7(1):21-27. PubMed
13. Topf M. Personal and environmental predictors of patient disturbance due to hospital noise. J Appl Psychol. 1985;70(1):22-28. doi: 10.1037/0021-9010.70.1.22. PubMed
14. Cho HJ, Wray CM, Maione S, et al. Right care in hospital medicine: co-creation of ten opportunities in overuse and underuse for improving value in hospital medicine. J Gen Intern Med. 2018;33(6):804-806. doi: 10.1007/s11606-018-4371-4. PubMed
15. Halpern SD, Ubel PA, Asch DA. Harnessing the power of default options to improve health care. N Engl J Med. 2007;357(13):1340-1344. doi: 10.1056/NEJMsb071595. PubMed

Issue
Journal of Hospital Medicine 14(1)
Page Number
38-41
Although sleep is critical to patient recovery in the hospital, hospitalization is not restful,1,2 and inpatient sleep deprivation has been linked to poor health outcomes.1-4 The American Academy of Nursing’s Choosing Wisely® campaign recommends that nurses reduce unnecessary nocturnal care.5 However, interventions to improve inpatient sleep are not widely implemented.6 Targeting routine disruptions, such as overnight vital signs, by changing default settings in the electronic health record (EHR) with “nudges” could be a cost-effective strategy to improve inpatient sleep.4,7

We created Sleep for Inpatients: Empowering Staff to Act (SIESTA), which pairs nudges in the EHR with interprofessional education and empowerment,8 and tested its effectiveness on objectively and subjectively measured nocturnal sleep disruptors.

METHODS

Study Design

Two 18-room University of Chicago Medicine general-medicine units were used in this prospective study. The SIESTA-enhanced unit underwent the full sleep intervention: nursing education and empowerment, physician education, and EHR changes. The standard unit did not receive nursing interventions but received all other forms of intervention. Because physicians simultaneously cared for patients on both units, all internal medicine residents and hospitalists received the same education. The study population included physicians, nurses, and awake English-speaking patients who were cognitively intact and admitted to these two units. The University of Chicago Institutional Review Board approved this study (12-1766; 16685B).

Development of SIESTA

To develop SIESTA, patients were surveyed, and focus groups of staff were conducted; overnight vitals, medications, and phlebotomy were identified as major barriers to patient sleep.9 We found that physicians did not know how to change the default vital signs order “every 4 hours” or how to batch-order morning phlebotomy at a time other than 4:00 am. Nurses reported having to wake patients up at 1:00 am for q8h subcutaneous heparin.

Behavioral Nudges

The SIESTA team worked with clinical informaticists to change the default orders in Epic™ (Epic Systems Corporation, 2017, Verona, Wisconsin) in September 2015 so that physicians would be asked, “Continue vital signs throughout the night?”10 Previously, this question was marked “Yes” by default and hidden. While the default protocol for heparin q8h was maintained, heparin q12h (9:00 am and 9:00 pm) was introduced as an option, since q12h heparin is equally effective for VTE prophylaxis.11 Laboratory ordering was streamlined so that physicians could batch-order laboratory draws at 6:00 am or 10:00 pm.

SIESTA Physician Education

We created a 20-minute presentation on the causes and consequences of in-hospital sleep deprivation and on evidence-based behavioral modification. We distributed pocket cards describing the mnemonic SIESTA (Screen patients for sleep disorders, Instruct patients on sleep hygiene, Eliminate disruptions, Shut doors, Treat pain, and Alarm and noise control). Physicians were instructed to consider forgoing overnight vitals (using clinical judgment to identify stable patients), to use a sleep-promoting VTE prophylaxis option, and to order daily labs at 10:00 pm or 6:00 am. An online educational module was sent to staff who missed live sessions due to days off.

SIESTA-Enhanced Unit

In the SIESTA-enhanced unit, nurses received education using pocket cards and were coached to collaborate with physicians to implement sleep-friendly orders. Customized signage depicting empowered nurses advocating for patients was posted near the huddle board. At these nurses’ suggestion, SIESTA was added to the ongoing daily nursing huddles at 4:00 pm and 3:00 am; beginning on January 1, 2016, nurses were asked at each huddle to identify at least two stable patients as candidates for sleep-friendly orders. Night nurses incorporated SIESTA into their handoff to day nurses for eligible patients. Day nurses would then call physicians to advocate for changing orders.

Data Collection

Objectively Measured Sleep Disruptors

Adoption of SIESTA orders from March 2015 to March 2016 was assessed with a monthly Epic™ Clarity report. From August 1, 2015 to April 1, 2016, nocturnal room entries were recorded using the GOJO SMARTLINK™ Hand Hygiene system (GOJO Industries Inc., 2017, Akron, Ohio). This system includes two components: the hand-sanitizer dispensers, which track dispenses (numerator), and door-mounted Activity Counters, which use heat sensors that react to body heat emitted by a person passing through the doorway (denominator for hand-hygiene compliance). For our analysis, we only used Activity Counter data, which count room entries and exits, regardless of whether sanitizer was dispensed.

Patient-Reported Nighttime Sleep Disruptions

From June 2015 to March 2016, research assistants administered a 10-item Potential Hospital Sleep Disruptions and Noises Questionnaire (PHSDNQ) to patients in both units. Responses to this questionnaire correlate with actigraphy-based sleep measurements.9,12,13 Surveys were administered every other weekday to patients available to participate (eg, willing to participate, on the unit, awake). Survey data were stored on the REDCap Database (Version 6.14.0; Vanderbilt University, 2016, Nashville, Tennessee). Pre- and post-intervention Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) “top-box ratings” for percent quiet at night and percent pain well controlled were also compared.

Data Analysis

Objectively Measured Potential Sleep Disruptors

The proportion of sleep-friendly orders was analyzed using a two-sample test for proportions pre-post for the SIESTA-enhanced and standard units. The difference in use of SIESTA orders between units was analyzed via multivariable logistic regression, testing for independent associations between post-period, SIESTA-enhanced unit, and an interaction term (post-period × SIESTA unit) on use of sleep-friendly orders.
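As a rough illustration of the first step, a two-sample test for proportions can be computed directly from pre/post order counts. This is a minimal sketch, not the study's actual code, and the counts below are hypothetical:

```python
# Illustrative two-sample z-test for proportions (pooled standard error),
# of the kind used to compare pre- vs post-intervention rates of
# sleep-friendly orders. All counts are synthetic, for demonstration only.
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Return (z, two-sided p) for H0: p1 == p2, using the pooled SE."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p

# Hypothetical counts: sleep-friendly vital-sign orders pre vs post
z, p = two_proportion_ztest(120, 306, 215, 306)
print(f"z = {z:.2f}, p = {p:.4g}")
```

The between-unit comparison adds a post-period × unit interaction term in a logistic regression, which a statistics package (eg, statsmodels) would fit on the order-level data.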

Room entries per night (11:00 pm–7:00 am) were analyzed via single-group interrupted time-series. Multiple Activity Counter events within three minutes were counted as a single room entry. The pre-post cutoff was set at 7:00 am on September 8, 2015 (the SIESTA launch); a second cutoff, marking the addition of SIESTA to the nurses’ MDI huddle, was set at 7:00 am on January 1, 2016.
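The three-minute collapsing rule above can be sketched as follows; this is an assumed reconstruction for illustration, not the study's actual pipeline:

```python
# Illustrative sketch: collapse Activity Counter events occurring within
# three minutes of the last counted entry into a single room entry,
# as described in the analysis. Timestamps below are made up.
from datetime import datetime, timedelta

def collapse_entries(timestamps, window=timedelta(minutes=3)):
    """Count events, treating any event within `window` of the last
    counted entry as part of the same room entry."""
    entries = []
    for t in sorted(timestamps):
        if not entries or t - entries[-1] >= window:
            entries.append(t)
    return entries

events = [
    datetime(2015, 9, 8, 23, 10),
    datetime(2015, 9, 8, 23, 11),  # within 3 min of 23:10 -> same entry
    datetime(2015, 9, 8, 23, 20),
    datetime(2015, 9, 9, 1, 5),
]
print(len(collapse_entries(events)))  # 3 distinct room entries
```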

Patient-Reported Nighttime Sleep Disruptions

Per prior studies, we defined a score of 2 or higher as a “sleep disruption.”9 Differences between units were evaluated via multivariable logistic regression to examine the association between the interaction of post-period × SIESTA-enhanced unit and the odds of not reporting a sleep disruption. Statistical significance was defined as P ≤ .05.
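The dichotomization and the resulting odds comparison can be illustrated with a toy example. This sketch uses invented survey responses and a simple 2×2 odds ratio in place of the adjusted regression model:

```python
# Illustrative sketch (synthetic data): dichotomize PHSDNQ item scores
# at >= 2 as a reported "sleep disruption", then compare the odds of
# NOT being disrupted between units via an unadjusted odds ratio.
def disrupted(score):
    return score >= 2

# Hypothetical (unit, item score) survey responses
responses = [("siesta", 1), ("siesta", 3), ("siesta", 1), ("siesta", 1),
             ("standard", 2), ("standard", 3), ("standard", 1), ("standard", 4)]

def odds_not_disrupted(unit):
    scores = [s for u, s in responses if u == unit]
    ok = sum(not disrupted(s) for s in scores)
    return ok / (len(scores) - ok)

odds_ratio = odds_not_disrupted("siesta") / odds_not_disrupted("standard")
print(round(odds_ratio, 2))
```

The study's actual estimates came from a multivariable model with the interaction term, so they additionally adjust for unit and period main effects.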

RESULTS

Between March 2015 and March 2016, 1,083 general-medicine patients were admitted to the SIESTA-enhanced and standard units (Table).

Nocturnal Orders

From March 2015 to March 2016, 1,669 Epic™ general medicine orders were reviewed (Figure). In the SIESTA-enhanced unit, the mean percentage of sleep-friendly orders rose for both vital signs (+31% [95% CI = 25%, 36%]; P < .001; npre = 306, npost = 306) and VTE prophylaxis (+28% [95% CI = 18%, 37%]; P < .001; npre = 158, npost = 173). Similar changes were observed in the standard unit for sleep-friendly vital signs (+20% [95% CI = 14%, 25%]; P < .001; npre = 252, npost = 219) and VTE prophylaxis (+16% [95% CI = 6%, 25%]; P = .002; npre = 130, npost = 125). Differences between the two units were not statistically significant, and no significant change in the timing of laboratory orders was found postintervention.

Nighttime Room Entries

Immediately after the SIESTA launch, an average decrease of 114 total entries/night was noted in the SIESTA-enhanced unit ([95% CI = −138, −91]; P < .001), corresponding to a 44% reduction (−6.3 entries/room) from the baseline mean of 14.3 entries per patient room (Figure). No statistically significant change was seen in the standard unit. After SIESTA was incorporated into nursing huddles, total disruptions/night decreased by 1.31 disruptions/night ([95% CI = −1.64, −0.98]; P < .001) in the SIESTA-enhanced unit; by comparison, no significant changes were observed in the standard unit.
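The per-room figures follow arithmetically from the unit-level decrease, assuming the decrease is spread across the unit's 18 rooms:

```python
# Arithmetic behind the reported effect size: 114 fewer total entries
# per night across an 18-room unit, against a baseline of 14.3 entries
# per patient room per night.
rooms = 18
total_decrease = 114
baseline_per_room = 14.3

per_room = total_decrease / rooms          # fewer entries per room
pct = per_room / baseline_per_room         # fraction of baseline
print(f"{per_room:.1f} fewer entries/room ({pct:.0%} reduction)")
```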

Patient-Reported Nighttime Sleep Disruptions

Between June 2015 and March 2016, 201 patient surveys were collected. A significant interaction was observed between the SIESTA-enhanced unit and post-period, and patients in the SIESTA-enhanced unit were more likely to report not being disrupted by medications (OR 4.08 [95% CI = 1.13–14.07]; P = .031) and vital signs (OR 3.35 [95% CI = 1.00–11.2]; P = .05) than those in the standard unit. HCAHPS top-box scores for the SIESTA unit increased by 7% for the “Quiet at night” category and 9% for the “Pain well controlled” category; by comparison, no major changes (>5%) were observed in the standard unit.

DISCUSSION

The present SIESTA intervention demonstrated that physician education coupled with EHR default changes was associated with a significant reduction in orders for overnight vital signs and medication administration in both units. However, the addition of nursing education and empowerment in the SIESTA-enhanced unit was associated with fewer nocturnal room entries and with improvements in patient-reported outcomes relative to the standard unit.

This study has several implications for hospital initiatives aiming to improve patient sleep.14 Our findings are consistent with other research supporting the hypothesis that altering EHR default settings can influence physician behavior in a sustainable manner.15 However, we also found that, even when sleep-friendly orders are available, creating a sleep-friendly environment likely depends on unit-based nurses championing the cause. The initial decrease in nocturnal room entries post-SIESTA eventually faded; sustained changes were observed only after SIESTA was added to nursing huddles, illustrating the importance of using multiple methods to nudge staff.

Our study has a number of limitations. It was not a randomized controlled trial, so we cannot infer causality, and contamination was likely because residents and hospitalists worked in both units. As a single-site study, its findings may not be generalizable. Low HCAHPS response rates (10%-20%) also prevented demonstration of statistically significant differences. Finally, our convenience sampling strategy means that not all inpatients were surveyed, and objective sleep duration was not measured.

In summary, at the University of Chicago, SIESTA was associated with the adoption of sleep-friendly vital-sign and medication orders, a decrease in nighttime room entries, and improved patient experience.

Disclosures

The authors have nothing to disclose.

Funding

This study was funded by the National Institute on Aging (NIA Grant No. T35AG029795) and the National Heart, Lung, and Blood Institute (NHLBI Grant Nos. R25HL116372 and K24HL136859).

 

References

1. Delaney LJ, Van Haren F, Lopez V. Sleeping on a problem: the impact of sleep disturbance on intensive care patients - a clinical review [published online ahead of print February 26, 2016]. Ann Intensive Care. 2015;5(3). doi: 10.1186/s13613-015-0043-2.
2. Arora VM, Chang KL, Fazal AZ, et al. Objective sleep duration and quality in hospitalized older adults: associations with blood pressure and mood. J Am Geriatr Soc. 2011;59(11):2185-2186. doi: 10.1111/j.1532-5415.2011.03644.x.
3. Knutson KL, Spiegel K, Penev P, Van Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163-178. doi: 10.1016/j.smrv.2007.01.002.
4. Manian FA, Manian CJ. Sleep quality in adult hospitalized patients with infection: an observational study. Am J Med Sci. 2015;349(1):56-60. doi: 10.1097/MAJ.0000000000000355.
5. American Academy of Nursing announced engagement in National Choosing Wisely Campaign. Nurs Outlook. 2015;63(1):96-98. doi: 10.1016/j.outlook.2014.12.017.
6. Gathecha E, Rios R, Buenaver LF, Landis R, Howell E, Wright S. Pilot study aiming to support sleep quality and duration during hospitalizations. J Hosp Med. 2016;11(7):467-472. doi: 10.1002/jhm.2578.
7. Fillary J, Chaplin H, Jones G, Thompson A, Holme A, Wilson P. Noise at night in hospital general wards: a mapping of the literature. Br J Nurs. 2015;24(10):536-540. doi: 10.12968/bjon.2015.24.10.536.
8. Thaler R, Sunstein C. Nudge: Improving Decisions About Health, Wealth and Happiness. Yale University Press; 2008.
9. Grossman MN, Anderson SL, Worku A, et al. Awakenings? Patient and hospital staff perceptions of nighttime disruptions and their effect on patient sleep. J Clin Sleep Med. 2017;13(2):301-306. doi: 10.5664/jcsm.6468.
10. Yoder JC, Yuen TC, Churpek MM, Arora VM, Edelson DP. A prospective study of nighttime vital sign monitoring frequency and risk of clinical deterioration. JAMA Intern Med. 2013;173(16):1554-1555. doi: 10.1001/jamainternmed.2013.7791.
11. Phung OJ, Kahn SR, Cook DJ, Murad MH. Dosing frequency of unfractionated heparin thromboprophylaxis: a meta-analysis. Chest. 2011;140(2):374-381. doi: 10.1378/chest.10-3084.
12. Gabor JY, Cooper AB, Hanly PJ. Sleep disruption in the intensive care unit. Curr Opin Crit Care. 2001;7(1):21-27.
13. Topf M. Personal and environmental predictors of patient disturbance due to hospital noise. J Appl Psychol. 1985;70(1):22-28. doi: 10.1037/0021-9010.70.1.22.
14. Cho HJ, Wray CM, Maione S, et al. Right care in hospital medicine: co-creation of ten opportunities in overuse and underuse for improving value in hospital medicine. J Gen Intern Med. 2018;33(6):804-806. doi: 10.1007/s11606-018-4371-4.
15. Halpern SD, Ubel PA, Asch DA. Harnessing the power of default options to improve health care. N Engl J Med. 2007;357(13):1340-1344. doi: 10.1056/NEJMsb071595.

References

1. Delaney LJ, Van Haren F, Lopez V. Sleeping on a problem: the impact of sleep disturbance on intensive care patients - a clinical review [published online ahead of print February 26, 2016]. Ann Intensive Care. 2015;5(3). doi: 10.1186/s13613-015-0043-2. PubMed
2. Arora VM, Chang KL, Fazal AZ, et al. Objective sleep duration and quality in hospitalized older adults: associations with blood pressure and mood. J Am Geriatr Soc. 2011;59(11):2185-2186. doi: 10.1111/j.1532-5415.2011.03644.x. PubMed
3. Knutson KL, Spiegel K, Penev P, Van Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163-178. doi: 10.1016/j.smrv.2007.01.002. PubMed
4. Manian FA, Manian CJ. Sleep quality in adult hospitalized patients with infection: an observational study. Am J Med Sci. 2015;349(1):56-60. doi: 10.1097/MAJ.0000000000000355. PubMed
5. American Academy of Nursing announced engagement in National Choosing Wisely Campaign. Nurs Outlook. 2015;63(1):96-98. doi: 10.1016/j.outlook.2014.12.017. PubMed
6. Gathecha E, Rios R, Buenaver LF, Landis R, Howell E, Wright S. Pilot study aiming to support sleep quality and duration during hospitalizations. J Hosp Med. 2016;11(7):467-472. doi: 10.1002/jhm.2578. PubMed
7. Fillary J, Chaplin H, Jones G, Thompson A, Holme A, Wilson P. Noise at night in hospital general wards: a mapping of the literature. Br J Nurs. 2015;24(10):536-540. doi: 10.12968/bjon.2015.24.10.536. PubMed
8. Thaler R, Sunstein C. Nudge: Improving Decisions About Health, Wealth and Happiness. Yale University Press; 2008. 
9. Grossman MN, Anderson SL, Worku A, et al. Awakenings? Patient and hospital staff perceptions of nighttime disruptions and their effect on patient sleep. J Clin Sleep Med. 2017;13(2):301-306. doi: 10.5664/jcsm.6468. PubMed
10. Yoder JC, Yuen TC, Churpek MM, Arora VM, Edelson DP. A prospective study of nighttime vital sign monitoring frequency and risk of clinical deterioration. JAMA Intern Med. 2013;173(16):1554-1555. doi: 10.1001/jamainternmed.2013.7791. PubMed
11. Phung OJ, Kahn SR, Cook DJ, Murad MH. Dosing frequency of unfractionated heparin thromboprophylaxis: a meta-analysis. Chest. 2011;140(2):374-381. doi: 10.1378/chest.10-3084. PubMed
12. Gabor JY, Cooper AB, Hanly PJ. Sleep disruption in the intensive care unit. Curr Opin Crit Care. 2001;7(1):21-27. PubMed
13. Topf M. Personal and environmental predictors of patient disturbance due to hospital noise. J Appl Psychol. 1985;70(1):22-28. doi: 10.1037/0021-9010.70.1.22. PubMed
14. Cho HJ, Wray CM, Maione S, et al. Right care in hospital medicine: co-creation of ten opportunities in overuse and underuse for improving value in hospital medicine. J Gen Intern Med. 2018;33(6):804-806. doi: 10.1007/s11606-018-4371-4. PubMed
15. Halpern SD, Ubel PA, Asch DA. Harnessing the power of default options to improve health care. N Engl J Med. 2007;357(13):1340-1344. doi: 10.1056/NEJMsb071595. PubMed

Article Source

© 2019 Society of Hospital Medicine

Correspondence Location
Vineet M. Arora, MD, MAPP, Email: [email protected]; Telephone: 773-702-8157; Twitter: @futuredocs