Neera Ahuja, MD
Department of Medicine, Stanford School of Medicine, Stanford University

Portable Ultrasound Device Usage and Learning Outcomes Among Internal Medicine Trainees: A Parallel-Group Randomized Trial


Point-of-care ultrasonography (POCUS) can transform healthcare delivery through its diagnostic and therapeutic expediency.1 POCUS has been shown to bolster diagnostic accuracy, reduce procedural complications, decrease inpatient length of stay, and improve patient satisfaction by encouraging the physician to be present at the bedside.2-8

POCUS has become widespread across a variety of clinical settings as more investigations have demonstrated its positive impact on patient care.1,9-12 This includes the use of POCUS by trainees, who are now utilizing this technology as part of their assessments of patients.13,14 However, trainees may be performing these examinations with minimal oversight, and outside of emergency medicine, there are few guidelines on how to effectively teach POCUS or measure competency.13,14 While POCUS is rapidly becoming a part of inpatient care, teaching physicians may have little experience in ultrasound or the expertise to adequately supervise trainees.14 There is a growing need to study what trainees can learn and how this knowledge is acquired.

Previous investigations have demonstrated that inexperienced users can be taught to use POCUS to identify a variety of pathological states.2,3,15-23 Most of these curricula used a single lecture series as their pedagogical vehicle, and they variably included junior medical trainees. More importantly, the investigations did not explore whether personal access to handheld ultrasound devices (HUDs) improved learning. In theory, improved access to POCUS devices increases opportunities for authentic and deliberate practice, which may be needed to improve trainee skill with POCUS beyond the classroom setting.14

This study aimed to address several ongoing gaps in knowledge related to learning POCUS. First, we hypothesized that personal HUD access would improve trainees’ POCUS-related knowledge and interpretive ability as a result of increased practice opportunities. Second, we hypothesized that trainees who received personal access to HUDs would be more likely to perform POCUS examinations and feel more confident in their interpretations. Finally, we hypothesized that repeated exposure to POCUS-related lectures would result in greater improvements in knowledge as compared with a single lecture series.

METHODS

Participants and Setting

The 2017 intern class (n = 47) at an academic internal medicine residency program participated in the study. Control data were obtained from the 2016 intern class (historical control; n = 50) and the 2018 intern class (contemporaneous control; n = 52). The Stanford University Institutional Review Board approved this study.

Study Design

The 2017 intern class (n = 47) received POCUS didactics from June 2017 to June 2018. To evaluate whether increased access to HUDs improved learning outcomes, the 2017 interns were randomized 1:1 to receive their own personal HUD that could be used for patient care and/or self-directed learning (n = 24) vs no-HUD (n = 23; Figure). Learning outcomes were assessed over the course of 1 year (see “Outcomes” below) and were compared with the 2016 and 2018 controls. The 2016 intern class had completed a year of training but had not received formalized POCUS didactics (historical control), whereas the 2018 intern class was assessed at the beginning of their year (contemporaneous control; Figure). In order to make comparisons based on intern experience, baseline data for the 2017 intern class were compared with the 2018 intern class, whereas end-of-study data for 2017 interns were compared with 2016 interns.


Outcomes

The primary outcome was the difference in assessment scores at the end of the study period between interns randomized to receive a HUD and those who were not. Secondary outcomes included differences in HUD usage rates, lecture attendance, and assessment scores. To assess whether repeated lecture exposure resulted in greater amounts of learning, this study evaluated assessment score improvements after each lecture block. Finally, trainee attitudes toward POCUS and their confidence in their interpretive ability were measured at the beginning and end of the study period.

Curriculum Implementation

The lectures were administered as once-weekly, 1-hour didactics to interns rotating on the inpatient wards rotation. This rotation is 4 weeks long, and each intern experiences the rotation two to four times per year. Each lecture contained two parts: (1) 20-30 minutes of didactics via Microsoft PowerPoint and (2) 30-40 minutes of supervised practice using HUDs on standardized patients. Four lectures were given each month: (1) introduction to POCUS and ultrasound physics, (2) thoracic/lung ultrasound, (3) echocardiography, and (4) abdominal POCUS. The lectures consisted of contrasting cases of normal/abnormal videos and clinical vignettes. These four lectures were repeated each month as new interns rotated on service. Some interns therefore experienced the same content multiple times; this was intentional, in order to assess their rates of learning over time. Lecture contents were based on previously published guidelines and expert consensus for teaching POCUS in internal medicine.13,24-26 Content from the Accreditation Council for Graduate Medical Education (ACGME) and the American College of Emergency Physicians (ACEP) was also incorporated because these organizations had published relevant guidelines for teaching POCUS.13,26 Further development of the lectures occurred through review of previously described POCUS-relevant curricula.27-32

Handheld Ultrasound Devices

This study used the Philips Lumify, a United States Food and Drug Administration–approved device. Interns randomized to HUDs received their own device at the start of the rotation; use of the device outside of the course was at their discretion. All devices were approved for patient use and were encrypted in compliance with our information security office. For privacy reasons, saved patient images were not reviewed by the researchers. Interns were encouraged to share their findings with supervising physicians during rounds, but actual oversight was not measured. Interns not randomized to HUDs could access a single community device shared among all residents and fellows in the hospital. Interns reported the average number of POCUS examinations performed each week via a survey sent during the last week of the rotation.

Assessment Design and Implementation

Assessments evaluating trainee knowledge were administered before, during, and after the study period (Figure). For the 2017 cohort, assessments were also administered at the start and end of the ward month to track knowledge acquisition. Assessment contents were selected from POCUS guidelines for internal medicine and adaptations of the ACGME and ACEP guidelines.13,24,26 Additional content was obtained from major society POCUS tutorials and deidentified images collected by the study authors.13,24,33 In keeping with previously described methodology, the images were shown for approximately 12 seconds, followed by 5 additional seconds to allow the learner to answer the question.32 Final assessment contents were determined by the authors using the Delphi method.34 A sample assessment can be found in the Appendix Material.


Surveys

Surveys were administered alongside the assessments to the 2016-2018 intern classes. These surveys assessed trainee attitudes toward POCUS and were based on previously validated assessments.27,28,30 Attitudes were measured using 5-point Likert scales.

Statistical Analysis

For the primary outcome, we performed generalized binomial mixed-effects regressions using survey period, randomization group, and their interaction as independent variables, after adjusting for attendance and controlling for intra-intern correlation. A bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. The Wilcoxon signed-rank test was used to assess the significance of paired score differences. Analyses were performed in R (R Foundation for Statistical Computing, Vienna, Austria).
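The paired preblock-vs-postblock comparison described above can be sketched as follows. The scores here are synthetic illustrations, not study data, and SciPy's `wilcoxon` stands in for the R implementation the authors used.

```python
# Illustrative sketch (not study code): paired preblock vs postblock
# assessment scores compared with the Wilcoxon signed-rank test.
from scipy.stats import wilcoxon

# Synthetic example scores (proportion correct) for eight interns,
# measured before and after the same lecture block
pre = [0.55, 0.61, 0.58, 0.70, 0.53, 0.64, 0.60, 0.66]
post = [0.75, 0.82, 0.75, 0.86, 0.72, 0.82, 0.82, 0.81]

# Wilcoxon signed-rank test for paired (dependent) samples; appropriate
# when normality of the score differences cannot be assumed
stat, p = wilcoxon(pre, post)
print(f"W = {stat:.1f}, p = {p:.4f}")
```

The test ranks the within-intern score differences rather than the raw scores, which is why it suits the dependent, non-normal assessment data described above.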

RESULTS

Baseline Characteristics

There were 149 interns who participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control: 68.0%; 2017 preintervention: 97.9%; 2017 postintervention: 89.4%; and 2018 control: 100%. The 2017 interns reported similar amounts of prior POCUS exposure in medical school (Table 1).

Primary Outcome: Assessment Scores (HUD vs no HUD)

There were no significant differences in end-of-study assessment scores between interns randomized to personal HUD access and those randomized to no-HUD access (Table 1). HUD interns reported performing POCUS assessments on patients a mean of 6.8 (standard deviation [SD] 2.2) times per week vs 6.4 (SD 2.9) times per week in the no-HUD arm (P = .66). Mean lecture attendance was 75.0% and did not significantly differ between arms (Table 1).

Secondary Outcomes

Impact of Repeating Lectures

The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.

Intervention Performance vs Controls

The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [0.41-0.61] vs median postintervention score 0.84 [0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).

Survey Results

Notably, the 2017 intern class at the end of the intervention did not have significantly different assessment scores for several disease-specific domains, compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). Interns in the HUD group cited a lack of confidence in their abilities as a barrier to performing POCUS examinations less often (17.6%) than those in the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).


DISCUSSION

Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but few investigations have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretative ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time on direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access will improve trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40

This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35

Despite no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree based on their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet, trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items in which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite similar scores to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common with ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35

There are several limitations to this study. It was performed at a single institution with limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings among medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.

In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Because only a minority of a trainee’s time is spent on direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities, despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.

 

 

References

1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
29. Townsend NT, Kendall J, Barnett C, Robinson T. An effective curriculum for focused assessment diagnostic echocardiography: Establishing the learning curve in surgical residents. J Surg Educ. 2016;73(2):190-196. https://doi.org/10.1016/j.jsurg.2015.10.009.
30. Hoppmann RA, Rao VV, Bell F, et al. The evolution of an integrated ultrasound curriculum (iUSC) for medical students: 9-year experience. Crit Ultrasound J. 2015;7(1):18. https://doi.org/10.1186/s13089-015-0035-3.
31. Skalski JH, Elrashidi M, Reed DA, McDonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound–guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7(1):95-97. https://doi.org/10.4300/JGME-D-14-00178.1.
32. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud A-S. Focused cardiac ultrasound training: How much is enough? J Emerg Med. 2013;44(4):818-822. https://doi.org/10.1016/j.jemermed.2012.07.092.
33. Schmidt GA, Schraufnagel D. Introduction to ATS seminars: Intensive care ultrasound. Ann Am Thorac Soc. 2013;10(5):538-539. https://doi.org/10.1513/AnnalsATS.201306-203ED.
34. Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and structured assessment of lung ultrasound competence. A multispecialty Delphi consensus and construct validity study. Ann Am Thorac Soc. 2017;14(4):555-560. https://doi.org/10.1513/AnnalsATS.201611-894OC.
35. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: A position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):117-125. https://doi.org/10.12788/jhm.2917.
36. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part i: General ultrasonography. Crit Care Med. 2015;43(11):2479-2502. https://doi.org/10.1097/CCM.0000000000001216.
37. Martin LD, Howell EE, Ziegelstein RC, et al. Hand-carried ultrasound performed by hospitalists: Does it improve the cardiac physical examination? Am J Med. 2009;122(1):35-41. https://doi.org/10.1016/j.amjmed.2008.07.022.
38. Desai SV, Asch DA, Bellini LM, et al. Education outcomes in a duty-hour flexibility trial in internal medicine. N Engl J Med. 2018;378(16):1494-1508. https://doi.org/10.1056/NEJMoa1800965.
39. Baltarowich OH, Di Salvo DN, Scoutt LM, et al. National ultrasound curriculum for medical students. Ultrasound Q. 2014;30(1):13-19. https://doi.org/10.1097/RUQ.0000000000000066.
40. Beal EW, Sigmond BR, Sage-Silski L, Lahey S, Nguyen V, Bahner DP. Point-of-care ultrasound in general surgery residency training: A proposal for milestones in graduate medical education ultrasound. J Ultrasound Med. 2017;36(12):2577-2584. https://doi.org/10.1002/jum.14298.
41. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. https://doi.org/10.1037//0022-3514.77.6.1121.


Author and Disclosure Information

1Department of Medicine, Stanford University School of Medicine, Stanford, California; 2Quantitative Science Unit, Stanford University School of Medicine, Stanford, California.

Disclosures

Dr. Kumar received a Stanford Seed Grant for Junior Faculty to purchase equipment used in the study. Dr. Witteles received honorarium from Pfizer and Alnylam Pharmaceuticals outside the submitted work. All other authors have nothing to disclose.

Journal of Hospital Medicine. 2020;15(3):154-159. Published Online First February 19, 2020.

Statistical Analysis

For the primary outcome, we performed generalized binomial mixed-effect regressions using the survey periods, randomization group, and the interaction of the two as independent variables after adjusting for attendance and controlling of intra-intern correlations. The bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. Wilcoxon signed rank test was used to determine score significance for dependent score variables (R-­Statistical Programming Language, Vienna, Austria).

RESULTS

Baseline Characteristics

There were 149 interns who participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control: 68.0%; 2017 preintervention: 97.9%; 2017 postintervention: 89.4%; and 2018 control: 100%. The 2017 interns reported similar amounts of prior POCUS exposure in medical school (Table 1).

Primary Outcome: Assessment Scores (HUD vs no HUD)

There were no significant differences in assessment scores at the end of the study between interns randomized to personal HUD access vs those to no-HUD access (Table 1). HUD interns reported performing POCUS assessments on patients a mean 6.8 (standard deviation [SD] 2.2) times per week vs 6.4 (SD 2.9) times per week in the no-HUD arm (P = .66). The mean lecture attendance was 75.0% and did not significantly differ between the HUD arms (Table 1).

Secondary Outcomes

Impact of Repeating Lectures

The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.

Intervention Performance vs Controls

The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [0.41-0.61] vs median postintervention score 0.84 [0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).

Survey Results

Notably, the 2017 intern class at the end of the intervention did not have significantly different assessment scores for several disease-specific domains, compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group seldomly cited a lack of confidence in their abilities as a barrier to performing POCUS examinations (17.6%), compared with the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).

 

 

DISCUSSION

Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but there have been few investigations that have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretative ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time toward direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access will improve trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40

This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35

Despite no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree based on their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet, trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items in which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite similar scores to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common with ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35

There are several limitations to this study. It was performed at a single institution with limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings among medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.

In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Since the minority of a trainee’s time is spent toward direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities, despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.

 

 


This study aimed to address several ongoing gaps in knowledge related to learning POCUS. First, we hypothesized that personal HUD access would improve trainees’ POCUS-related knowledge and interpretive ability as a result of increased practice opportunities. Second, we hypothesized that trainees who received personal access to HUDs would be more likely to perform POCUS examinations and would feel more confident in their interpretations. Finally, we hypothesized that repeated exposure to POCUS-related lectures would result in greater improvements in knowledge compared with a single lecture series.

METHODS

Participants and Setting

The 2017 intern class (n = 47) at an academic internal medicine residency program participated in the study. Control data were obtained from the 2016 intern class (historical control; n = 50) and the 2018 intern class (contemporaneous control; n = 52). The Stanford University Institutional Review Board approved this study.

Study Design

The 2017 intern class (n = 47) received POCUS didactics from June 2017 to June 2018. To evaluate whether increased access to HUDs improved learning outcomes, the 2017 interns were randomized 1:1 to receive a personal HUD that could be used for patient care and/or self-directed learning (n = 24) vs no HUD (n = 23; Figure). Learning outcomes were assessed over the course of 1 year (see “Outcomes” below) and were compared with the 2016 and 2018 controls. The 2016 intern class had completed a year of training but had not received formalized POCUS didactics (historical control), whereas the 2018 intern class was assessed at the beginning of their year (contemporaneous control; Figure). To make comparisons based on intern experience, baseline data for the 2017 intern class were compared with the 2018 intern class, whereas end-of-study data for the 2017 interns were compared with the 2016 interns.
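The 1:1 allocation described above can be sketched in a few lines of Python. This is an illustration only, with hypothetical intern IDs and seed; the study does not specify its randomization procedure beyond the resulting 24 vs 23 split.

```python
import random

def randomize_one_to_one(ids, seed=2017):
    # Shuffle the cohort and split it in half. With an odd cohort
    # (n = 47), one arm receives the extra participant, mirroring
    # the 24 vs 23 split reported in the study.
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = ids[:]
    rng.shuffle(shuffled)
    midpoint = (len(shuffled) + 1) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical intern identifiers 1..47.
hud_arm, no_hud_arm = randomize_one_to_one(list(range(1, 48)))
print(len(hud_arm), len(no_hud_arm))  # prints: 24 23
```

Seeding the generator makes the allocation auditable after the fact, which matters when an allocation list must be reconstructed or verified.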

Outcomes

The primary outcome was the difference in assessment scores at the end of the study period between interns randomized to receive a HUD and those who were not. Secondary outcomes included differences in HUD usage rates, lecture attendance, and assessment scores. To assess whether repeated lecture exposure resulted in greater amounts of learning, this study evaluated for assessment score improvements after each lecture block. Finally, trainee attitudes toward POCUS and their confidence in their interpretative ability were measured at the beginning and end of the study period.

Curriculum Implementation

The lectures were administered as once-weekly, 1-hour didactics to interns rotating on the inpatient wards rotation. This rotation is 4 weeks long, and each intern experiences the rotation two to four times per year. Each lecture contained two parts: (1) 20-30 minutes of didactics via Microsoft PowerPoint™ and (2) 30-40 minutes of supervised practice using HUDs on standardized patients. Four lectures were given each month: (1) introduction to POCUS and ultrasound physics, (2) thoracic/lung ultrasound, (3) echocardiography, and (4) abdominal POCUS. The lectures consisted of contrasting cases of normal/abnormal videos and clinical vignettes. These four lectures were repeated each month as new interns rotated on service. Some interns therefore experienced the same content multiple times; this was intentional, in order to assess their rates of learning over time. Lecture contents were based on previously published guidelines and expert consensus for teaching POCUS in internal medicine.13,24-26 Content from the Accreditation Council for Graduate Medical Education (ACGME) and the American College of Emergency Physicians (ACEP) was also incorporated because these organizations had published relevant guidelines for teaching POCUS.13,26 The lectures were further refined through review of previously described POCUS-relevant curricula.27-32

Handheld Ultrasound Devices

This study used the Philips Lumify™, a United States Food and Drug Administration–approved device. Interns randomized to HUDs received their own device at the start of the rotation; use of the device outside of the course was at their discretion. All devices were approved for patient use and were encrypted in compliance with our information security office. For privacy reasons, the researchers did not review saved patient images. Interns were encouraged to share their findings with supervising physicians during rounds, but actual oversight was not measured. Interns not randomized to HUDs could access a single community device shared among all residents and fellows in the hospital. Interns reported the average number of POCUS examinations performed each week via a survey sent during the last week of the rotation.

Assessment Design and Implementation

Assessments evaluating trainee knowledge were administered before, during, and after the study period (Figure). For the 2017 cohort, assessments were also administered at the start and end of the ward month to track knowledge acquisition. Assessment contents were selected from POCUS guidelines for internal medicine and an adaptation of the ACGME and ACEP guidelines.13,24,26 Additional content was obtained from major society POCUS tutorials and deidentified images collected by the study authors.13,24,33 In keeping with previously described methodology, each image was shown for approximately 12 seconds, followed by 5 additional seconds for the learner to answer the question.32 Final assessment contents were determined by the authors using the Delphi method.34 A sample assessment can be found in the Appendix Material.

Surveys

Surveys were administered alongside the assessments to the 2016-2018 intern classes. These surveys assessed trainee attitudes toward POCUS and were based on previously validated assessments.27,28,30 Attitudes were measured using 5-point Likert scales.

Statistical Analysis

For the primary outcome, we performed generalized binomial mixed-effects regressions using the survey periods, randomization group, and the interaction of the two as independent variables, after adjusting for attendance and controlling for intra-intern correlations. A bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. The Wilcoxon signed-rank test was used to determine score significance for dependent score variables (R Statistical Programming Language, Vienna, Austria).

RESULTS

Baseline Characteristics

There were 149 interns who participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control, 68.0%; 2017 preintervention, 97.9%; 2017 postintervention, 89.4%; and 2018 control, 100%. The 2017 interns reported similar amounts of prior POCUS exposure in medical school between randomization arms (Table 1).

Primary Outcome: Assessment Scores (HUD vs no HUD)

There were no significant differences in assessment scores at the end of the study between interns randomized to personal HUD access and those with no-HUD access (Table 1). HUD interns reported performing POCUS assessments on patients a mean of 6.8 (standard deviation [SD], 2.2) times per week vs 6.4 (SD, 2.9) times per week in the no-HUD arm (P = .66). Mean lecture attendance was 75.0% and did not significantly differ between arms (Table 1).

Secondary Outcomes

Impact of Repeating Lectures

The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.

Intervention Performance vs Controls

The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [0.41-0.61] vs median postintervention score 0.84 [0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).

Survey Results

Notably, the 2017 intern class at the end of the intervention did not have significantly different assessment scores for several disease-specific domains compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group seldom cited a lack of confidence in their abilities as a barrier to performing POCUS examinations (17.6%), compared with the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).

DISCUSSION

Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but few investigations have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretative ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time on direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access improves trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40

This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35

Despite having no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree, based on their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items on which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite scores similar to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common in ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35

There are several limitations to this study. It was performed at a single institution with limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings among medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.

In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Since only a minority of a trainee’s time is spent on direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.

References

1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
29. Townsend NT, Kendall J, Barnett C, Robinson T. An effective curriculum for focused assessment diagnostic echocardiography: Establishing the learning curve in surgical residents. J Surg Educ. 2016;73(2):190-196. https://doi.org/10.1016/j.jsurg.2015.10.009.
30. Hoppmann RA, Rao VV, Bell F, et al. The evolution of an integrated ultrasound curriculum (iUSC) for medical students: 9-year experience. Crit Ultrasound J. 2015;7(1):18. https://doi.org/10.1186/s13089-015-0035-3.
31. Skalski JH, Elrashidi M, Reed DA, McDonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound–guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7(1):95-97. https://doi.org/10.4300/JGME-D-14-00178.1.
32. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud A-S. Focused cardiac ultrasound training: How much is enough? J Emerg Med. 2013;44(4):818-822. https://doi.org/10.1016/j.jemermed.2012.07.092.
33. Schmidt GA, Schraufnagel D. Introduction to ATS seminars: Intensive care ultrasound. Ann Am Thorac Soc. 2013;10(5):538-539. https://doi.org/10.1513/AnnalsATS.201306-203ED.
34. Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and structured assessment of lung ultrasound competence. A multispecialty Delphi consensus and construct validity study. Ann Am Thorac Soc. 2017;14(4):555-560. https://doi.org/10.1513/AnnalsATS.201611-894OC.
35. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: A position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):117-125. https://doi.org/10.12788/jhm.2917.
36. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part i: General ultrasonography. Crit Care Med. 2015;43(11):2479-2502. https://doi.org/10.1097/CCM.0000000000001216.
37. Martin LD, Howell EE, Ziegelstein RC, et al. Hand-carried ultrasound performed by hospitalists: Does it improve the cardiac physical examination? Am J Med. 2009;122(1):35-41. https://doi.org/10.1016/j.amjmed.2008.07.022.
38. Desai SV, Asch DA, Bellini LM, et al. Education outcomes in a duty-hour flexibility trial in internal medicine. N Engl J Med. 2018;378(16):1494-1508. https://doi.org/10.1056/NEJMoa1800965.
39. Baltarowich OH, Di Salvo DN, Scoutt LM, et al. National ultrasound curriculum for medical students. Ultrasound Q. 2014;30(1):13-19. https://doi.org/10.1097/RUQ.0000000000000066.
40. Beal EW, Sigmond BR, Sage-Silski L, Lahey S, Nguyen V, Bahner DP. Point-of-care ultrasound in general surgery residency training: A proposal for milestones in graduate medical education ultrasound. J Ultrasound Med. 2017;36(12):2577-2584. https://doi.org/10.1002/jum.14298.
41. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. https://doi.org/10.1037//0022-3514.77.6.1121.

 

 

References

1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
29. Townsend NT, Kendall J, Barnett C, Robinson T. An effective curriculum for focused assessment diagnostic echocardiography: Establishing the learning curve in surgical residents. J Surg Educ. 2016;73(2):190-196. https://doi.org/10.1016/j.jsurg.2015.10.009.
30. Hoppmann RA, Rao VV, Bell F, et al. The evolution of an integrated ultrasound curriculum (iUSC) for medical students: 9-year experience. Crit Ultrasound J. 2015;7(1):18. https://doi.org/10.1186/s13089-015-0035-3.
31. Skalski JH, Elrashidi M, Reed DA, McDonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound–guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7(1):95-97. https://doi.org/10.4300/JGME-D-14-00178.1.
32. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud A-S. Focused cardiac ultrasound training: How much is enough? J Emerg Med. 2013;44(4):818-822. https://doi.org/10.1016/j.jemermed.2012.07.092.
33. Schmidt GA, Schraufnagel D. Introduction to ATS seminars: Intensive care ultrasound. Ann Am Thorac Soc. 2013;10(5):538-539. https://doi.org/10.1513/AnnalsATS.201306-203ED.
34. Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and structured assessment of lung ultrasound competence. A multispecialty Delphi consensus and construct validity study. Ann Am Thorac Soc. 2017;14(4):555-560. https://doi.org/10.1513/AnnalsATS.201611-894OC.
35. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: A position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):117-125. https://doi.org/10.12788/jhm.2917.
36. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part i: General ultrasonography. Crit Care Med. 2015;43(11):2479-2502. https://doi.org/10.1097/CCM.0000000000001216.
37. Martin LD, Howell EE, Ziegelstein RC, et al. Hand-carried ultrasound performed by hospitalists: Does it improve the cardiac physical examination? Am J Med. 2009;122(1):35-41. https://doi.org/10.1016/j.amjmed.2008.07.022.
38. Desai SV, Asch DA, Bellini LM, et al. Education outcomes in a duty-hour flexibility trial in internal medicine. N Engl J Med. 2018;378(16):1494-1508. https://doi.org/10.1056/NEJMoa1800965.
39. Baltarowich OH, Di Salvo DN, Scoutt LM, et al. National ultrasound curriculum for medical students. Ultrasound Q. 2014;30(1):13-19. https://doi.org/10.1097/RUQ.0000000000000066.
40. Beal EW, Sigmond BR, Sage-Silski L, Lahey S, Nguyen V, Bahner DP. Point-of-care ultrasound in general surgery residency training: A proposal for milestones in graduate medical education ultrasound. J Ultrasound Med. 2017;36(12):2577-2584. https://doi.org/10.1002/jum.14298.
41. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. https://doi.org/10.1037//0022-3514.77.6.1121.

 

 

Issue
Journal of Hospital Medicine 15(3)
Page Number
154-159. Published Online First February 19, 2020
Article Source

© 2020 Society of Hospital Medicine

Correspondence Location
Andre Kumar, MD; E-mail: [email protected]; Telephone: 650-723-2300

Surgical Comanagement by Hospitalists: Continued Improvement Over 5 Years


In surgical comanagement (SCM), surgeons and hospitalists share responsibility for the care of surgical patients. While SCM has been increasingly utilized, many of the reported models are modifications of the consultation model, in which a group of rotating hospitalists, internists, or geriatricians cares for surgical patients, often after medical complications have occurred.1-4

In August 2012, we implemented SCM in Orthopedic and Neurosurgery services at our institution.5 This model is unique because the same Internal Medicine hospitalists are dedicated year round to the same surgical service. SCM hospitalists see patients on their assigned surgical service only; they do not see patients on the Internal Medicine service. After the first year of implementing SCM, we conducted a propensity score–weighted study with 17,057 discharges in the pre-SCM group (January 2009 to July 2012) and 5,533 discharges in the post-SCM group (September 2012 to September 2013).5 In this study, SCM was associated with a decrease in medical complications, length of stay (LOS), medical consultations, 30-day readmissions, and cost.5

Since SCM requires ongoing investment by institutions, we now report a follow-up study to explore whether the improvements in patient outcomes with SCM continued. In this study, we evaluate whether there was a decrease in medical complications, LOS, number of medical consultations, rapid response team calls, and code blues and an increase in patient satisfaction with SCM in Orthopedic and Neurosurgery services between 2012 and 2018.

METHODS

We included 26,380 discharges from Orthopedic and Neurosurgery services between September 1, 2012, and June 30, 2018, at our academic medical center. We excluded patients discharged in August 2012 as we transitioned to the SCM model. Our Institutional Review Board exempted this study from further review.

SCM Structure

SCM structure was detailed in a prior article.5 We have 3.0 clinical full-time equivalents on the Orthopedic surgery SCM service and 1.2 on the Neurosurgery SCM service. On weekdays, during the day (8 am to 5 pm), there are two SCM hospitalists on Orthopedic surgery service and one on Neurosurgery service. One SCM hospitalist is on call every week and takes after-hours calls from both surgical services and sees patients on both services on the weekend.

During the day, SCM hospitalists receive the first call for medical issues. After 5 pm and on weekends and holidays, surgical services take all calls first and reach out to the on-call SCM hospitalist for any medical issues for which they need assistance. The surgery service is the primary team and completes the discharge summaries. SCM hospitalists write medical orders as needed. Medical students, physician assistant students, medicine housestaff, and geriatric medicine fellows rotate through SCM. SCM hospitalists communicate directly with the surgical service, not through the learners. There are no advanced practice providers on the SCM service. Surgery housestaff attend the multidisciplinary team care rounds with the case manager, social worker, rehabilitation services, and pharmacy, with SCM hospitalists present ad hoc for selected patients. SCM hospitalists often see sick patients with the surgery service at the bedside, and they work together with the surgery service on order sets, quality improvement projects, and scholarly work.

SCM hospitalists screen the entire patient list on their assigned surgery service each day. After screening the patient list, SCM hospitalists formally see select patients with preventable or active medical conditions and write notes in the patient’s chart. There are no set criteria to determine which patients are seen by SCM because surgery can destabilize stable medical conditions, and new, unexpected medical complications may occur. Additionally, in our prior study, we reported that SCM reduced medical complications and LOS regardless of age or patient acuity.5

 

 

Outcomes

Our primary outcome was the proportion of patients with ≥1 medical complication (sepsis, pneumonia, urinary tract infection, delirium, acute kidney injury, atrial fibrillation, or ileus). Our secondary outcomes included mean LOS, the proportion of patients with ≥2 medical consultations, rapid response team calls, code blues, and top-box patient satisfaction score. Though cost is an important consideration in implementing SCM, limited financial data were available. However, since LOS is a key component in calculating direct costs,6 we estimated the cost savings per discharge using the mean direct cost per day and the difference in mean LOS between the pre- and post-SCM groups.5

We defined medical complications using International Classification of Diseases, Ninth or Tenth Revision (ICD-9/ICD-10) codes that were flagged as “not present on admission” (Appendix 1). For patient satisfaction, we used three questions from the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey: Did doctors treat you with courtesy and respect, listen carefully to you, and explain things in a way you could understand?
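The complication definition above amounts to a screening rule over discharge-level diagnosis data: a qualifying ICD code counts only if it was not present on admission. The sketch below illustrates that rule; the code set and data layout are hypothetical placeholders, not the study’s actual Appendix 1 specification.

```python
# Sketch: flag discharges with >=1 medical complication, defined as a
# qualifying ICD code that was NOT present on admission (POA).
# This code set is illustrative only, not the study's Appendix 1 list.
COMPLICATION_CODES = {
    "A41.9",   # sepsis, unspecified
    "J18.9",   # pneumonia, unspecified
    "N39.0",   # urinary tract infection
    "F05",     # delirium
    "N17.9",   # acute kidney injury
    "I48.91",  # atrial fibrillation, unspecified
    "K56.7",   # ileus, unspecified
}

def has_medical_complication(diagnoses):
    """diagnoses: iterable of (icd_code, present_on_admission) for one discharge."""
    return any(code in COMPLICATION_CODES and not poa for code, poa in diagnoses)

# Atrial fibrillation arising after admission counts; chronic diabetes does not.
print(has_medical_complication([("I48.91", False), ("E11.9", True)]))  # True
print(has_medical_complication([("E11.9", True)]))                     # False
```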

Statistical Analysis

We used regression analysis to assess trends in patient characteristics by year (Appendix 2). Logistic regression with a logit link was used to assess the yearly change in our binary outcomes (the proportion of patients with ≥1 medical complication, those with ≥2 medical consultations, rapid response team calls, code blues, and top-box patient satisfaction score), and odds ratios were reported. Gamma regression with an identity link was performed for our continuous outcome (LOS), and the beta coefficient was reported to estimate the yearly change in LOS on its original scale. The regression analyses for outcomes were adjusted for age, primary insurance, race, Charlson comorbidity score, general or regional anesthesia, surgical service, and duration of surgery. SAS 9.4 was used for analysis.

RESULTS

Patient characteristics are shown in Table 1. Overall, 62.8% of patients were discharged from the Orthopedic surgery service, 72.5% underwent elective surgery, and 88.8% received general anesthesia. Between 2012 and 2018, there were significant increases in the median age of patients (from 60 to 63 years), the mean Charlson comorbidity score (from 1.07 to 1.46), and the median case mix index, a measure of patient acuity (from 2.10 to 2.36; Appendix 2).

Comparing pre-SCM unadjusted rates reported in our prior study (January 2009 to July 2012) to post-SCM (September 2012 to June 2018; Appendix 3), patients with ≥1 medical complication decreased from 10.1% to 6.1%, LOS (mean ± standard deviation) changed from 5.4 ± 2.2 days to 4.6 ± 5.8 days, patients with ≥2 medical consultations decreased from 19.4% to 9.2%, rapid response team calls changed from 1% to 0.9%, code blues changed from 0.3% to 0.2%, and patients with top-box patient satisfaction score increased from 86.4% to 94.2%.5

In the adjusted analysis from 2012 to 2018, the odds of patients with ≥1 medical complication decreased by 3.8% per year (P = .01), estimated LOS decreased by 0.3 days per year (P < .0001), and the odds of rapid response team calls decreased by 12.2% per year (P = .001; Table 2). Changes over time in the odds of patients with ≥2 medical consultations, code blues, or top-box patient satisfaction score were not statistically significant (Table 2). Based on the LOS reduction pre- to post-SCM, there were estimated average direct cost savings of $3,424 per discharge between 2012 and 2018.
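The cost estimate above reduces to simple arithmetic, per the method stated in Outcomes: savings per discharge equal the pre- to post-SCM difference in mean LOS multiplied by the mean direct cost per day. The article does not report that per-day cost; the value below is back-calculated from the reported figures ($3,424 / 0.8 days) and should be read as illustrative.

```python
# Savings per discharge = (pre-SCM mean LOS - post-SCM mean LOS)
#                         x mean direct cost per day.
los_pre_days, los_post_days = 5.4, 4.6  # reported mean LOS, days
mean_direct_cost_per_day = 4280         # hypothetical: back-calculated, not reported
savings_per_discharge = (los_pre_days - los_post_days) * mean_direct_cost_per_day
print(round(savings_per_discharge))     # 3424
```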

 

 

DISCUSSION

Since the implementation of SCM on the Orthopedic and Neurosurgery services at our institution, there has been a decrease in medical complications, LOS, and rapid response team calls. To our knowledge, this is one of the largest studies evaluating the benefits of SCM, spanning 5.8 years. Consistent with our prior studies of this SCM model of care,5,7 other studies have reported a decrease in medical complications,8-10 LOS,11-13 and cost of care14 with SCM.

Although the changes in the unadjusted rates of outcomes over the years appeared small, even as our patient population became older and sicker, there were significant changes in several of our outcomes in the adjusted analysis. We believe that SCM hospitalists have developed a skill set and an understanding of these surgical patients over time and can manage more medically complex patients without an increase in medical complications or LOS. We attribute this to our unique SCM model, in which the same hospitalists stay year round on the same surgical service. SCM hospitalists have built trusting relationships with the surgical team, with greater involvement in decision making, care planning, and patient selection. With minimal turnover in the SCM group and with ongoing learning, SCM hospitalists can anticipate fluid or pain medication requirements after specific surgeries and the surgery-specific medical complications. SCM hospitalists are available on the patient units to provide timely intervention in case of medical deterioration; answer questions from patients, families, or nursing while the surgical teams may be in the operating room; and coordinate with other medical consultants or outpatient providers as needed.

This study has several limitations. This is a single-center study at an academic institution, limited to two surgical services. We did not have a control group, and multiple hospital-wide interventions may have affected these outcomes. This is an observational study in which unobserved variables may bias the results. We used ICD codes to identify medical complications, which relies on the quality of physician documentation. While our response rate of 21.1% for HCAHPS was comparable to the national average of 26.7%, it may not reliably represent our patient population.15 Lastly, we had limited financial data.

CONCLUSION

With the move toward value-based payment and increasing medical complexity of surgical patients, SCM by hospitalists may deliver high-quality care.

References

1. Auerbach AD, Wachter RM, Cheng HQ, et al. Comanagement of surgical patients between neurosurgeons and hospitalists. Arch Intern Med. 2010;170(22):2004-2010. https://doi.org/10.1001/archinternmed.2010.432.
2. Ruiz ME, Merino RÁ, Rodríguez R, Sánchez GM, Alonso A, Barbero M. Effect of comanagement with internal medicine on hospital stay of patients admitted to the service of otolaryngology. Acta Otorrinolaringol Esp. 2015;66(5):264-268. https://doi.org/10.1016/j.otorri.2014.09.010.
3. Tadros RO, Faries PL, Malik R, et al. The effect of a hospitalist comanagement service on vascular surgery inpatients. J Vasc Surg. 2015;61(6):1550-1555. https://doi.org/10.1016/j.jvs.2015.01.006.
4. Gregersen M, Mørch MM, Hougaard K, Damsgaard EM. Geriatric intervention in elderly patients with hip fracture in an orthopedic ward. J Inj Violence Res. 2012;4(2):45-51. https://doi.org/10.5249/jivr.v4i2.96.
5. Rohatgi N, Loftus P, Grujic O, Cullen M, Hopkins J, Ahuja N. Surgical comanagement by hospitalists improves patient outcomes: A propensity score analysis. Ann Surg. 2016;264(2):275-282. https://doi.org/10.1097/SLA.0000000000001629.
6. Polverejan E, Gardiner JC, Bradley CJ, Holmes-Rovner M, Rovner D. Estimating mean hospital cost as a function of length of stay and patient characteristics. Health Econ. 2003;12(11):935-947. https://doi.org/10.1002/hec.774.
7. Rohatgi N, Wei PH, Grujic O, Ahuja N. Surgical comanagement by hospitalists in colorectal surgery. J Am Coll Surg. 2018;227(4):404-410. https://doi.org/10.1016/j.jamcollsurg.2018.06.011.
8. Huddleston JM, Long KH, Naessens JM, et al. Medical and surgical comanagement after elective hip and knee arthroplasty: A randomized, controlled trial. Ann Intern Med. 2004;141(1):28-38. https://doi.org/10.7326/0003-4819-141-1-200407060-00012.
9. Swart E, Vasudeva E, Makhni EC, Macaulay W, Bozic KJ. Dedicated perioperative hip fracture comanagement programs are cost-effective in high-volume centers: An economic analysis. Clin Orthop Relat Res. 2016;474(1):222-233. https://doi.org/10.1007/s11999-015-4494-4.
10. Iberti CT, Briones A, Gabriel E, Dunn AS. Hospitalist-vascular surgery comanagement: Effects on complications and mortality. Hosp Pract. 2016;44(5):233-236. https://doi.org/10.1080/21548331.2016.1259543.
11. Kammerlander C, Roth T, Friedman SM, et al. Ortho-geriatric service--A literature review comparing different models. Osteoporos Int. 2010;21(Suppl 4):S637-S646. https://doi.org/10.1007/s00198-010-1396-x.
12. Bracey DN, Kiymaz TC, Holst DC, et al. An orthopedic-hospitalist comanaged hip fracture service reduces inpatient length of stay. Geriatr Orthop Surg Rehabil. 2016;7(4):171-177. https://doi.org/10.1177/2151458516661383.
13. Duplantier NL, Briski DC, Luce LT, Meyer MS, Ochsner JL, Chimento GF. The effects of a hospitalist comanagement model for joint arthroplasty patients in a teaching facility. J Arthroplasty. 2016;31(3):567-572. https://doi.org/10.1016/j.arth.2015.10.010.
14. Roy A, Heckman MG, Roy V. Associations between the hospitalist model of care and quality-of-care-related outcomes in patients undergoing hip fracture surgery. Mayo Clin Proc. 2006;81(1):28-31. https://doi.org/10.4065/81.1.28.
15. Godden E, Paseka A, Gnida J, Inguanzo J. The impact of response rate on Hospital Consumer Assessment of Healthcare Providers and System (HCAHPS) dimension scores. Patient Exp J. 2019;6(1):105-114. https://doi.org/10.35680/2372-0247.1357.

Author and Disclosure Information

1Division of Hospital Medicine, Department of Medicine, Stanford University School of Medicine, Stanford, California; 2Quantitative Sciences Unit, Division of Biomedical Informatics Research, Department of Medicine, Stanford University School of Medicine, Stanford, California.

Disclosures

The authors have nothing to disclose.

Issue
Journal of Hospital Medicine 15(4)
Page Number
232-235. Published Online First February 19, 2020

In the adjusted analysis from 2012 to 2018, the odds of patients with ≥1 medical complication decreased by 3.8% per year (P = .01), estimated LOS decreased by 0.3 days per year (P < .0001), and the odds of rapid response team calls decreased by 12.2% per year (P = .001; Table 2). Changes over time in the odds of patients with ≥2 medical consultations, code blues, or top-box patient satisfaction score were not statistically significant (Table 2). Based on the LOS reduction pre- to post-SCM, there were estimated average direct cost savings of $3,424 per discharge between 2012 and 2018.

 

 

DISCUSSION

Since the implementation of SCM on Orthopedic and Neurosurgery services at our institution, there was a decrease in medical complications, LOS, and rapid response team calls. To our knowledge, this is one of the largest studies evaluating the benefits of SCM over 5.8 years. Similar to our prior studies on this SCM model of care,5,7 other studies have reported a decrease in medical complications,8-10 LOS,11-13 and cost of care14 with SCM.

While the changes in the unadjusted rates of outcomes over the years appeared to be small, while our patient population became older and sicker, there were significant changes in several of our outcomes in the adjusted analysis. We believe that SCM hospitalists have developed a skill set and understanding of these surgical patients over time and can manage more medically complex patients without an increase in medical complications or LOS. We attribute this to our unique SCM model in which the same hospitalists stay year round on the same surgical service. SCM hospitalists have built trusting relationships with the surgical team with greater involvement in decision making, care planning, and patient selection. With minimal turnover in the SCM group and with ongoing learning, SCM hospitalists can anticipate fluid or pain medication requirements after specific surgeries and the surgery-specific medical complications. SCM hospitalists are available on the patient units to provide timely intervention in case of medical deterioration; answer any questions from patients, families, or nursing while the surgical teams may be in the operating room; and coordinate with other medical consultants or outpatient providers as needed.

This study has several limitations. This is a single-center study at an academic institution, limited to two surgical services. We did not have a control group and multiple hospital-­wide interventions may have affected these outcomes. This is an observational study in which unobserved variables may bias the results. We used ICD codes to identify medical complications, which relies on the quality of physician documentation. While our response rate of 21.1% for HCAHPS was comparable to the national average of 26.7%, it may not reliably represent our patient population.15 Lastly, we had limited financial data.

CONCLUSION

With the move toward value-based payment and increasing medical complexity of surgical patients, SCM by hospitalists may deliver high-quality care.

In surgical comanagement (SCM), surgeons and hospitalists share responsibility for the care of surgical patients. While SCM has been increasingly utilized, many of the reported models are modifications of the consultation model, in which a group of rotating hospitalists, internists, or geriatricians care for surgical patients, often after medical complications have already occurred.1-4

In August 2012, we implemented SCM in Orthopedic and Neurosurgery services at our institution.5 This model is unique because the same Internal Medicine hospitalists are dedicated year round to the same surgical service. SCM hospitalists see patients on their assigned surgical service only; they do not see patients on the Internal Medicine service. After the first year of implementing SCM, we conducted a propensity score–weighted study with 17,057 discharges in the pre-SCM group (January 2009 to July 2012) and 5,533 discharges in the post-SCM group (September 2012 to September 2013).5 In this study, SCM was associated with a decrease in medical complications, length of stay (LOS), medical consultations, 30-day readmissions, and cost.5

Since SCM requires ongoing investment by institutions, we now report a follow-up study to explore whether improvements in patient outcomes continued with SCM. In this study, we evaluate whether there was a decrease in medical complications, LOS, number of medical consultations, rapid response team calls, and code blues, and an increase in patient satisfaction, with SCM on the Orthopedic and Neurosurgery services between 2012 and 2018.

METHODS

We included 26,380 discharges from Orthopedic and Neurosurgery services between September 1, 2012, and June 30, 2018, at our academic medical center. We excluded patients discharged in August 2012 as we transitioned to the SCM model. Our Institutional Review Board exempted this study from further review.

SCM Structure

The SCM structure was detailed in a prior article.5 We have 3.0 clinical full-time equivalents on the Orthopedic surgery SCM service and 1.2 on the Neurosurgery SCM service. On weekdays, during the day (8 am to 5 pm), there are two SCM hospitalists on the Orthopedic surgery service and one on the Neurosurgery service. One SCM hospitalist is on call each week, taking after-hours calls from both surgical services and seeing patients on both services on the weekend.

During the day, SCM hospitalists receive the first call for medical issues. After 5 pm and on weekends and holidays, the surgical services take all calls first and reach out to the on-call SCM hospitalist for any medical issues with which they need assistance. The surgery service is the primary team and writes the discharge summaries; SCM hospitalists write any medical orders as needed. Medical students, physician assistant students, medicine housestaff, and geriatric medicine fellows rotate through SCM. SCM hospitalists communicate directly with the surgical service, not through the learners. There are no advanced practice providers on the SCM service. Surgery housestaff attend the multidisciplinary team care rounds with the case manager, social worker, rehabilitation services, and pharmacy, with ad hoc presence of SCM hospitalists for selected patients. SCM hospitalists often see sick patients with the surgery service at the bedside, and they work with the surgery service on order sets, quality improvement projects, and scholarly work.

SCM hospitalists screen the entire patient list on their assigned surgery service each day. After screening the list, they formally see select patients with preventable or active medical conditions and write notes in the patient’s chart. There are no set criteria to determine which patients are seen by SCM, because surgery can decompensate stable medical conditions and new, unexpected medical complications may occur. Additionally, in our prior study, we reported that SCM reduced medical complications and LOS regardless of age or patient acuity.5

 

 

Outcomes

Our primary outcome was proportion of patients with ≥1 medical complication (sepsis, pneumonia, urinary tract infection, delirium, acute kidney injury, atrial fibrillation, or ileus). Our secondary outcomes included mean LOS, proportion of patients with ≥2 medical consultations, rapid response team calls, code blues, and top-box patient satisfaction score. Though cost is an important consideration in implementing SCM, limited financial data were available. However, since LOS is a key component in calculating direct costs,6 we estimated the cost savings per discharge using mean direct cost per day and the difference in mean LOS between pre- and post-SCM groups.5
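The cost estimate described above is a simple product of the per-day direct cost and the reduction in mean LOS. A minimal sketch follows; the LOS means (5.4 vs 4.6 days) are the unadjusted values reported in this study, but the direct cost per day is an assumed placeholder, not a figure reported here.

```python
def estimated_savings_per_discharge(cost_per_day, mean_los_pre, mean_los_post):
    """Estimate direct cost savings per discharge from a reduction in mean
    length of stay (LOS), per the approach described in Outcomes."""
    return cost_per_day * (mean_los_pre - mean_los_post)

# LOS means are from this study; the $4,280/day direct cost is an
# illustrative assumption only.
savings = estimated_savings_per_discharge(cost_per_day=4280.0,
                                          mean_los_pre=5.4,
                                          mean_los_post=4.6)
```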

We defined medical complications using International Classification of Diseases, Ninth or Tenth Revision (ICD-9 or ICD-10) codes that were coded as “not present on admission” (Appendix 1). For patient satisfaction, we used three questions from the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey: Did doctors treat you with courtesy and respect, listen carefully to you, and explain things in a way you could understand?

Statistical Analysis

We used regression analysis to assess trends in patient characteristics by year (Appendix 2). Logistic regression with a logit link was used to assess the yearly change in our binary outcomes (proportion of patients with ≥1 medical complication, patients with ≥2 medical consultations, rapid response team calls, code blues, and top-box patient satisfaction score), and odds ratios were reported. Gamma regression with an identity link was performed for our continuous outcome (LOS), and the beta coefficient was reported to estimate the yearly change in LOS on its original scale (days). The regression analyses for outcomes were adjusted for age, primary insurance, race, Charlson comorbidity score, general or regional anesthesia, surgical service, and duration of surgery. SAS 9.4 was used for all analyses.

RESULTS

Patient characteristics are shown in Table 1. Overall, 62.8% of patients were discharged from the Orthopedic surgery service, 72.5% underwent elective surgery, and 88.8% received general anesthesia. Between 2012 and 2018, the median age of patients increased significantly (from 60 to 63 years), the mean Charlson comorbidity score increased from 1.07 to 1.46, and the median case mix index, a measure of patient acuity, increased from 2.10 to 2.36 (Appendix 2).

Comparing pre-SCM unadjusted rates reported in our prior study (January 2009 to July 2012) to post-SCM (September 2012 to June 2018; Appendix 3), patients with ≥1 medical complication decreased from 10.1% to 6.1%, LOS (mean ± standard deviation) changed from 5.4 ± 2.2 days to 4.6 ± 5.8 days, patients with ≥2 medical consultations decreased from 19.4% to 9.2%, rapid response team calls changed from 1% to 0.9%, code blues changed from 0.3% to 0.2%, and patients with top-box patient satisfaction score increased from 86.4% to 94.2%.5

In the adjusted analysis from 2012 to 2018, the odds of patients with ≥1 medical complication decreased by 3.8% per year (P = .01), estimated LOS decreased by 0.3 days per year (P < .0001), and the odds of rapid response team calls decreased by 12.2% per year (P = .001; Table 2). Changes over time in the odds of patients with ≥2 medical consultations, code blues, or top-box patient satisfaction score were not statistically significant (Table 2). Based on the LOS reduction pre- to post-SCM, there were estimated average direct cost savings of $3,424 per discharge between 2012 and 2018.
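A "percent decrease in odds per year" is the complement of the yearly odds ratio from the logistic model, and compounding that ratio gives the cumulative change across the study period. A small sketch follows; note the yearly odds ratio of 0.962 is implied by the reported 3.8% figure, not stated directly in Table 2.

```python
def percent_decrease_in_odds(odds_ratio):
    """Convert a yearly odds ratio into a percent decrease in odds."""
    return (1.0 - odds_ratio) * 100.0

yearly_or = 0.962                 # implied by the reported 3.8%/year decrease
per_year = percent_decrease_in_odds(yearly_or)   # about 3.8
cumulative_or = yearly_or ** 6    # compounded over the ~6 years, 2012-2018
```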

 

 

DISCUSSION

Since the implementation of SCM on the Orthopedic and Neurosurgery services at our institution, there was a decrease in medical complications, LOS, and rapid response team calls. To our knowledge, this is one of the largest studies evaluating the benefits of SCM, spanning 5.8 years. Consistent with our prior studies of this SCM model of care,5,7 other studies have reported a decrease in medical complications,8-10 LOS,11-13 and cost of care14 with SCM.

Although the changes in the unadjusted rates of outcomes over the years appeared small, even as our patient population became older and sicker, there were significant changes in several of our outcomes in the adjusted analysis. We believe that SCM hospitalists have developed a skill set and an understanding of these surgical patients over time and can manage more medically complex patients without an increase in medical complications or LOS. We attribute this to our unique SCM model, in which the same hospitalists stay year round on the same surgical service. SCM hospitalists have built trusting relationships with the surgical teams, with greater involvement in decision making, care planning, and patient selection. With minimal turnover in the SCM group and with ongoing learning, SCM hospitalists can anticipate fluid or pain medication requirements after specific surgeries as well as surgery-specific medical complications. SCM hospitalists are available on the patient units to provide timely intervention in case of medical deterioration; answer questions from patients, families, or nursing while the surgical teams may be in the operating room; and coordinate with other medical consultants or outpatient providers as needed.

This study has several limitations. It is a single-center study at an academic institution, limited to two surgical services. We did not have a control group, and multiple hospital-wide interventions may have affected these outcomes. It is an observational study in which unobserved variables may bias the results. We used ICD codes to identify medical complications, an approach that relies on the quality of physician documentation. While our response rate of 21.1% for HCAHPS was comparable to the national average of 26.7%, it may not reliably represent our patient population.15 Lastly, we had limited financial data.

CONCLUSION

With the move toward value-based payment and increasing medical complexity of surgical patients, SCM by hospitalists may deliver high-quality care.

References

1. Auerbach AD, Wachter RM, Cheng HQ, et al. Comanagement of surgical patients between neurosurgeons and hospitalists. Arch Intern Med. 2010;170(22):2004-2010. https://doi.org/10.1001/archinternmed.2010.432
2. Ruiz ME, Merino RÁ, Rodríguez R, Sánchez GM, Alonso A, Barbero M. Effect of comanagement with internal medicine on hospital stay of patients admitted to the service of otolaryngology. Acta Otorrinolaringol Esp. 2015;66(5):264-268. https://doi.org/10.1016/j.otorri.2014.09.010
3. Tadros RO, Faries PL, Malik R, et al. The effect of a hospitalist comanagement service on vascular surgery inpatients. J Vasc Surg. 2015;61(6):1550-1555. https://doi.org/10.1016/j.jvs.2015.01.006
4. Gregersen M, Mørch MM, Hougaard K, Damsgaard EM. Geriatric intervention in elderly patients with hip fracture in an orthopedic ward. J Inj Violence Res. 2012;4(2):45-51. https://doi.org/10.5249/jivr.v4i2.96
5. Rohatgi N, Loftus P, Grujic O, Cullen M, Hopkins J, Ahuja N. Surgical comanagement by hospitalists improves patient outcomes: A propensity score analysis. Ann Surg. 2016;264(2):275-282. https://doi.org/10.1097/SLA.0000000000001629
6. Polverejan E, Gardiner JC, Bradley CJ, Holmes-Rovner M, Rovner D. Estimating mean hospital cost as a function of length of stay and patient characteristics. Health Econ. 2003;12(11):935-947. https://doi.org/10.1002/hec.774
7. Rohatgi N, Wei PH, Grujic O, Ahuja N. Surgical comanagement by hospitalists in colorectal surgery. J Am Coll Surg. 2018;227(4):404-410. https://doi.org/10.1016/j.jamcollsurg.2018.06.011
8. Huddleston JM, Long KH, Naessens JM, et al. Medical and surgical comanagement after elective hip and knee arthroplasty: A randomized, controlled trial. Ann Intern Med. 2004;141(1):28-38. https://doi.org/10.7326/0003-4819-141-1-200407060-00012
9. Swart E, Vasudeva E, Makhni EC, Macaulay W, Bozic KJ. Dedicated perioperative hip fracture comanagement programs are cost-effective in high-volume centers: An economic analysis. Clin Orthop Relat Res. 2016;474(1):222-233. https://doi.org/10.1007/s11999-015-4494-4
10. Iberti CT, Briones A, Gabriel E, Dunn AS. Hospitalist-vascular surgery comanagement: Effects on complications and mortality. Hosp Pract. 2016;44(5):233-236. https://doi.org/10.1080/21548331.2016.1259543
11. Kammerlander C, Roth T, Friedman SM, et al. Ortho-geriatric service--A literature review comparing different models. Osteoporos Int. 2010;21(Suppl 4):S637-S646. https://doi.org/10.1007/s00198-010-1396-x
12. Bracey DN, Kiymaz TC, Holst DC, et al. An orthopedic-hospitalist comanaged hip fracture service reduces inpatient length of stay. Geriatr Orthop Surg Rehabil. 2016;7(4):171-177. https://doi.org/10.1177/2151458516661383
13. Duplantier NL, Briski DC, Luce LT, Meyer MS, Ochsner JL, Chimento GF. The effects of a hospitalist comanagement model for joint arthroplasty patients in a teaching facility. J Arthroplasty. 2016;31(3):567-572. https://doi.org/10.1016/j.arth.2015.10.010
14. Roy A, Heckman MG, Roy V. Associations between the hospitalist model of care and quality-of-care-related outcomes in patients undergoing hip fracture surgery. Mayo Clin Proc. 2006;81(1):28-31. https://doi.org/10.4065/81.1.28
15. Godden E, Paseka A, Gnida J, Inguanzo J. The impact of response rate on Hospital Consumer Assessment of Healthcare Providers and System (HCAHPS) dimension scores. Patient Exp J. 2019;6(1):105-114. https://doi.org/10.35680/2372-0247.1357


Issue
Journal of Hospital Medicine 15(4)
Page Number
232-235. Published Online First February 19, 2020
Article Source
© 2020 Society of Hospital Medicine
Correspondence Location
Nidhi Rohatgi, MD, MS; Email: [email protected]; Telephone: 650-725-4890; Twitter: @nrohatgi2

State of Research in Adult Hospital Medicine: Results of a National Survey


Almost all specialties in internal medicine have a sound scientific research base through which clinical practice is informed.1 For the field of Hospital Medicine (HM), this evidence has largely comprised research generated in fields outside the specialty. The need to develop, invest in, and grow investigators in hospital-based medicine remains unmet as HM and its footprint in hospital systems continue to grow.2,3

Despite this fact, little is known about the current state of research in HM. A 2014 survey of the members of the Society of Hospital Medicine (SHM) found that research output across the field of HM, as measured on the basis of peer-reviewed publications, was growing.4 Since then, however, the numbers of individuals engaged in research activities, their backgrounds and training, publication output, and funding sources have not been quantified. Similarly, little is known about which institutions support the development of junior investigators (ie, HM research fellowships), how these programs are funded, and whether or not matriculants enter the field as investigators. These gaps must be measured, evaluated, and ideally addressed through strategic policy and funding initiatives to advance the state of science within HM.

Members of the SHM Research Committee developed, designed, and deployed a survey to improve the understanding of the state of research in HM. In this study, we aimed to establish the baseline of research in HM to enable the measurement of progress through periodic waves of data collection. Specifically, we sought to quantify and describe the characteristics of existing research programs, the sources and types of funding, the number and background of faculty, and the availability of resources for training researchers in HM.

 

 

METHODS

Study Setting and Participants

Given that no defined list, database, or external resource that identifies research programs and contacts in HM exists, we began by creating a strategy to identify and sample adult HM programs and their leaders engaged in research activity. We iteratively developed a two-step approach to maximize inclusivity. First, we partnered with SHM to identify programs and leaders actively engaging in research activities. SHM is the largest professional organization within HM and maintains an extensive membership database that includes the titles, e-mail addresses, and affiliations of hospitalists in the United States, including academic and nonacademic sites. This list was manually scanned, and the leaders of academic and research programs in adult HM were identified by examining their titles (eg, Division Chief, Research Lead, etc.) and academic affiliations. During this step, members of the committee noticed that certain key individuals were either missing, no longer occupying their role/title, or had been replaced by others. Therefore, we performed a second step and asked the members of the SHM Research Committee to identify academic and research leaders by using current personal contacts, publication history, and social networks. We asked members to identify individuals and programs that had received grant funding, were actively presenting research at SHM (or other major national venues), and/or were producing peer-reviewed publications related to HM. These programs were purposefully chosen (ie, over HM programs known for clinical activities) to create an enriched sample of those engaged in research in HM. The research committee performed the “second pass” to ensure that established investigators who may not be accurately captured within the SHM database were included to maximize yield for the survey. Finally, these two sources were merged to ensure the absence of duplicate contacts and the identification of a primary respondent for each affiliate. As a result, a convenience sample of 100 programs and corresponding individuals was compiled for the purposes of this survey.

Survey Development

A workgroup within the SHM Research Committee was tasked to create a survey that would achieve four distinct goals: (1) identify institutions currently engaging in hospital-based research; (2) define the characteristics, including sources of research funding, training opportunities, criteria for promotion, and grant support, of research programs within institutions; (3) understand the prevalence of research fellowship programs, including size, training curricula, and funding sources; and (4) evaluate the productivity and funding sources of HM investigators at each site.

Survey questions that target each of these domains were drafted by the workgroup. Questions were pretested with colleagues outside the workgroup focused on this project (ie, from the main research committee). The instrument was refined and edited to improve the readability and clarity of questions on the basis of the feedback obtained through the iterative process. The revised instrument was then programmed into an online survey administration tool (SurveyMonkey®) to facilitate electronic dissemination. Finally, the members of the workgroup tested the online survey to ensure functionality. No identifiable information was collected from respondents, and no monetary incentive was offered for the completion of the survey. An invitation to participate in the survey was sent via e-mail to each of the program contacts identified.

 

 

Statistical Analysis

Descriptive statistics, including proportions, means, and percentages, were used to tabulate results. All analyses were conducted using Stata 13 MP/SE (StataCorp, College Station, Texas).

Ethical and Regulatory Considerations

The study was reviewed and deemed exempt from regulation by the University of Michigan Institutional Review Board (HUM000138628).

RESULTS

General Characteristics of Research Programs and Faculty

Out of 100 program contacts, 28 (representing 1,586 faculty members) responded and were included in the survey (program response rate = 28%). When comparing programs that did respond with those that did not, a greater proportion of programs in university settings were noted among respondents (79% vs 21%). Respondents represented programs from all regions of the United States, with most representing university-based (79%), university-affiliated (14%) or Veterans Health Administration (VHA; 11%) programs. Most respondents were in leadership roles, including division chiefs (32%), research directors/leads (21%), section chiefs (18%), and related titles, such as program director. Respondents indicated that the total number of faculty members in their programs (including nonclinicians and advance practice providers) varied from eight to 152 (mean [SD] = 57 [36]) members, with physicians representing the majority of faculty members (Table 1).

Among the 1,586 faculty members within the 28 programs, respondents identified 192 faculty members (12%) as currently receiving extra- or intramural support for research activities. Of these faculty, over half (58%) received support for <25% of their effort from intra- or extramural sources, while 28 (15%) and 52 (27%) faculty members received support for 25%-50% or >50% of their effort, respectively. The number of funded investigators across programs ranged from 0 to 28 faculty members. Compared with the 192 funded investigators, respondents indicated that a larger number of faculty in their programs (n = 656, or 41%) were involved in local quality improvement (QI) efforts. Of the 656 faculty members involved in QI efforts, 241 (37%) were internally funded and received protected time/effort for their work.
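The reported proportions follow directly from the counts in this paragraph, as a quick arithmetic check shows:

```python
# Counts are from the survey results above; the ~12% and ~41% figures
# are simple proportions of the 1,586 total faculty members.
total_faculty = 1586
funded_investigators = 192
qi_involved = 656

pct_funded = 100 * funded_investigators / total_faculty   # about 12%
pct_qi = 100 * qi_involved / total_faculty                # about 41%
```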

Key Attributes of Research Programs

In the evaluation of the amount of total grant funding, respondents from 17 programs indicated that they received <$500,000 in annual extra- and intramural funding, and those from three programs stated that they received $500,000 to $999,999. Five respondents indicated that their programs currently received $1 million to $5 million in grant funding, and three reported >$5 million in research support. The sources of research funding included several institutes within the National Institutes of Health (NIH, 12 programs), the Agency for Healthcare Research and Quality (AHRQ, four programs), foundations (four programs), and internal grants (six programs). Additionally, six programs indicated “other” sources of funding, including the VHA, the Patient-Centered Outcomes Research Institute (PCORI), the Centers for Medicare & Medicaid Services, the Centers for Disease Control and Prevention (CDC), and industry sources.

A range of grants, including career development awards (11 programs); small grants, such as R21 and R03s (eight programs); R-level grants, including VA merit awards (five programs); program series grants, such as P and U grants (five programs), and foundation grants (eight programs), were reported as types of awards. Respondents from 16 programs indicated that they provided internal pilot grants. Amounts for such grants ranged from <$50,000 (14 programs) to $50,000-$100,000 (two programs).

 

 

Research Fellowship Programs/Training Programs

Only five of the 28 surveyed programs indicated that they currently had a research training or fellowship program for developing hospitalist investigators. The age of these programs varied from <1 year to 10 years. Three of the five programs stated that they had two fellows per year, and two stated they had spots for one trainee annually. All respondents indicated that fellows received training on study design, research methods, quantitative (eg, large database and secondary analyses) and qualitative data analysis. In addition, two programs included training in systematic review and meta-analyses, and three included focused courses on healthcare policy. Four of the five programs included training in QI tools, such as LEAN and Six Sigma. Funding for four of the five fellowship programs came from internal sources (eg, department and CTSA). However, two programs added they received some support from extramural funding and philanthropy. Following training, respondents from programs indicated that the majority of their graduates (60%) went on to hybrid research/QI roles (50/50 research/clinical effort), whereas 40% obtained dedicated research investigator (80/20) positions (Table 2).

The 23 institutions without research training programs cited that the most important barrier for establishing such programs was lack of funding (12 programs) and the lack of a pipeline of hospitalists seeking such training (six programs). However, 15 programs indicated that opportunities for hospitalists to gain research training in the form of courses were available internally (eg, courses in the department or medical school) or externally (eg, School of Public Health). Seven programs indicated that they were planning to start a HM research fellowship within the next five years.

Research Faculty

Among the 28 respondents, 15 stated that they have faculty members who conduct research as their main professional activity (ie, >50% effort). The number of faculty members in each program in such roles varied from one to 10. Respondents indicated that faculty members in this category were most often midcareer assistant or associate professors with few full professors. All programs indicated that scholarship in the form of peer-reviewed publications was required for the promotion of faculty. Faculty members who performed research as their main activity had all received formal fellowship training and consequently had dual degrees (MD with MPH or MD, with MSc being the two most common combinations). With respect to clinical activities, most respondents indicated that research faculty spent 10% to 49% of their effort on clinical work. However, five respondents indicated that research faculty had <10% effort on clinical duties (Table 3).

Eleven respondents (39%) identified the main focus of faculty as health service research, where four (14%) identified their main focus as clinical trials. Regardless of funding status, all respondents stated that their faculty were interested in studying quality and process improvement efforts (eg, transitions or readmissions, n = 19), patient safety initiatives (eg, hospital-acquired complications, n = 17), and disease-specific areas (eg, thrombosis, n = 15).

In terms of research output, 12 respondents stated that their research/QI faculty collectively published 11-50 peer-reviewed papers during the academic year, and 10 programs indicated that their faculty published 0-10 papers per year. Only three programs reported that their faculty collectively published 50-99 peer-reviewed papers per year. With respect to abstract presentations at national conferences, 13 programs indicated that they presented 0-10 abstracts, and 12 indicated that they presented 11-50.

 

 

DISCUSSION

In this first survey quantifying research activities in HM, respondents from 28 programs shared important insights into research activities at their institutions. Although our sample size was small, substantial variation in the size, composition, and structure of research programs in HM among respondents was observed. For example, few respondents indicated the availability of training programs for research in HM at their institutions. Similarly, among faculty who focused mainly on research, variation in funding streams and effort protection was observed. A preponderance of midcareer faculty with a range of funding sources, including NIH, AHRQ, VHA, CMS, and CDC was reported. Collectively, these data not only provide a unique glimpse into the state of research in HM but also help establish a baseline of the status of the field at large.

Some findings of our study are intuitive given our sampling strategy and the types of programs that responded. For example, the fact that most respondents for research programs represented university-based or affiliated institutions is expected given the tripartite academic mission. However, even within our sample of highly motivated programs, some findings are surprising and merit further exploration. For example, the observation that some respondents identified HM investigators within their program with <25% in intra- or extramural funding was unexpected. On the other extreme, we were surprised to find that three programs reported >$5 million in research funding. Understanding whether specific factors, such as the availability of experienced mentors within and outside departments or assistance from support staff (eg, statisticians and project managers), are associated with success and funding within these programs are important questions to answer. By focusing on these issues, we will be well poised as a field to understand what works, what does not work, and why.

Likewise, the finding that few programs within our sample offer formal training in the form of fellowships to research investigators represents an improvement opportunity. A pipeline for growing investigators is critical for the specialty that is HM. Notably, this call is not new; rather, previous investigators have highlighted the importance of developing academically oriented hospitalists for the future of the field.5 The implementation of faculty scholarship development programs has improved the scholarly output, mentoring activities, and succession planning of academics within HM.6,7 Conversely, lack of adequate mentorship and support for academic activities remains a challenge and as a factor associated with the failure to produce academic work.8 Without a cadre of investigators asking critical questions related to care delivery, the legitimacy of our field may be threatened.

While extrapolating to the field is difficult given the small number of our respondents, highlighting the progress that has been made is important. For example, while misalignment between funding and clinical and research mission persists, our survey found that several programs have been successful in securing extramural funding for their investigators. Additionally, internal funding for QI work appears to be increasing, with hospitalists receiving dedicated effort for much of this work. Innovation in how best to support and develop these types of efforts have also emerged. For example, the University of Michigan Specialist Hospitalist Allied Research Program offers dedicated effort and funding for hospitalists tackling projects germane to HM (eg, ordering of blood cultures for febrile inpatients) that overlap with subspecialists (eg, infectious diseases).9 Thus, hospitalists are linked with other specialties in the development of research agendas and academic products. Similarly, the launch of the HOMERUN network, a coalition of investigators who bridge health systems to study problems central to HM, has helped usher in a new era of research opportunities in the specialty.10 Fundamentally, the culture of HM has begun to place an emphasis on academic and scholarly productivity in addition to clinical prowess.11-13 Increased support and funding for training programs geared toward innovation and research in HM is needed to continue this mission. The Society for General Internal Medicine, American College of Physicians, and SHM have important roles to play as the largest professional organizations for generalists in this respect. Support for research, QI, and investigators in HM remains an urgent and largely unmet need.

Our study has limitations. First, our response rate was low at 28% but is consistent with the response rates of other surveys of physician groups.14 Caution in making inferences to the field at large is necessary given the potential for selection and nonresponse bias. However, we expect that respondents are likely biased toward programs actively conducting research and engaged in QI, thus better reflecting the state of these activities in HM. Second, given that we did not ask for any identifying information, we have no way of establishing the accuracy of the data provided by respondents. However, we have no reason to believe that responses would be altered in a systematic fashion. Future studies that link our findings to publicly available data (eg, databases of active grants and funding) might be useful. Third, while our survey instrument was created and internally validated by hospitalist researchers, its lack of external validation could limit findings. Finally, our results vary on the basis of how respondents answered questions related to effort and time allocation given that these measures differ across programs.

In summary, the findings from this study highlight substantial variations in the number, training, and funding of research faculty across HM programs. Understanding the factors behind the success of some programs and the failures of others appears important in informing and growing the research in the field. Future studies that aim to expand survey participation, raise the awareness of the state of research in HM, and identify barriers and facilitators to academic success in HM are needed.

 

 

Disclosures

Dr. Chopra discloses grant funding from the Agency for Healthcare Research and Quality (AHRQ), VA Health Services and Research Department, and Centers for Disease Control. Dr. Jones discloses grant funding from AHRQ. All other authors disclose no conflicts of interest.

References

1. International Working Party to Promote and Revitalise Academic Medicine. Academic medicine: the evidence base. BMJ. 2004;329(7469):789-792.
2. Flanders SA, Saint S, McMahon LF, Howell JD. Where should hospitalists sit within the academic medical center? J Gen Intern Med. 2008;23(8):1269-1272.
3. Flanders SA, Centor B, Weber V, McGinn T, Desalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636-641.
4. Dang Do AN, Munchhof AM, Terry C, Emmett T, Kara A. Research and publication trends in hospital medicine. J Hosp Med. 2014;9(3):148-154.
5. Harrison R, Hunter AJ, Sharpe B, Auerbach AD. Survey of US academic hospitalist leaders about mentorship and academic activities in hospitalist groups. J Hosp Med. 2011;6(1):5-9.
6. Sehgal NL, Sharpe BA, Auerbach AA, Wachter RM. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6(3):161-166.
7. Nagarur A, O’Neill RM, Lawton D, Greenwald JL. Supporting faculty development in hospital medicine: design and implementation of a personalized structured mentoring program. J Hosp Med. 2018;13(2):96-99.
8. Reid MB, Misky GJ, Harrison RA, Sharpe B, Auerbach A, Glasheen JJ. Mentorship, productivity, and promotion among academic hospitalists. J Gen Intern Med. 2012;27(1):23-27.
9. Flanders SA, Kaufman SR, Nallamothu BK, Saint S. The University of Michigan Specialist-Hospitalist Allied Research Program: jumpstarting hospital medicine research. J Hosp Med. 2008;3(4):308-313.
10. Auerbach AD, Patel MS, Metlay JP, et al. The Hospital Medicine Reengineering Network (HOMERuN): a learning organization focused on improving hospital care. Acad Med. 2014;89(3):415-420.
11. Souba WW. Academic medicine’s core values: what do they mean? J Surg Res. 2003;115(2):171-173.
12. Bonsall J, Chopra V. Building an academic pipeline: a combined society of hospital medicine committee initiative. J Hosp Med. 2016;11(10):735-736.
13. Sweigart JR, Tad YD, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12(3):173-176.
14. Cunningham CT, Quan H, Hemmelgarn B, et al. Exploring physician specialist response rates to web-based surveys. BMC Med Res Methodol. 2015;15(1):32.

Journal of Hospital Medicine 14(4), 207-211

Almost all specialties in internal medicine have a sound scientific research base through which clinical practice is informed.1 For the field of Hospital Medicine (HM), this evidence has largely comprised research generated from fields outside of the specialty. The need to develop, invest in, and grow investigators in hospital-based medicine remains unmet as HM and its footprint in hospital systems continue to grow.2,3

Despite this fact, little is known about the current state of research in HM. A 2014 survey of the members of the Society of Hospital Medicine (SHM) found that research output across the field of HM, as measured on the basis of peer-reviewed publications, was growing.4 Since then, however, the numbers of individuals engaged in research activities, their backgrounds and training, publication output, and funding sources have not been quantified. Similarly, little is known about which institutions support the development of junior investigators (ie, HM research fellowships), how these programs are funded, and whether matriculants enter the field as investigators. These gaps must be measured, evaluated, and ideally addressed through strategic policy and funding initiatives to advance the state of science within HM.

Members of the SHM Research Committee developed, designed, and deployed a survey to improve the understanding of the state of research in HM. In this study, we aimed to establish the baseline of research in HM to enable the measurement of progress through periodic waves of data collection. Specifically, we sought to quantify and describe the characteristics of existing research programs, the sources and types of funding, the number and background of faculty, and the availability of resources for training researchers in HM.


METHODS

Study Setting and Participants

Given that no defined list, database, or external resource that identifies research programs and contacts in HM exists, we began by creating a strategy to identify and sample adult HM programs and their leaders engaged in research activity. We iteratively developed a two-step approach to maximize inclusivity. First, we partnered with SHM to identify programs and leaders actively engaging in research activities. SHM is the largest professional organization within HM and maintains an extensive membership database that includes the titles, e-mail addresses, and affiliations of hospitalists in the United States, including academic and nonacademic sites. This list was manually scanned, and the leaders of academic and research programs in adult HM were identified by examining their titles (eg, Division Chief, Research Lead, etc.) and academic affiliations. During this step, members of the committee noticed that certain key individuals were either missing, no longer occupying their role/title, or had been replaced by others. Therefore, we performed a second step and asked the members of the SHM Research Committee to identify academic and research leaders by using current personal contacts, publication history, and social networks. We asked members to identify individuals and programs that had received grant funding, were actively presenting research at SHM (or other major national venues), and/or were producing peer-reviewed publications related to HM. These programs were purposefully chosen (ie, over HM programs known for clinical activities) to create an enriched sample of those engaged in research in HM. The research committee performed the “second pass” to ensure that established investigators who may not be accurately captured within the SHM database were included to maximize yield for the survey. Finally, these two sources were merged to ensure the absence of duplicate contacts and the identification of a primary respondent for each affiliate. 
As a result, a convenience sample of 100 programs and corresponding individuals was compiled for the purposes of this survey.
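
The two-step identification process described above reduces, in effect, to merging two contact lists and keeping a single primary respondent per program. A minimal sketch of that deduplication logic (all program and contact names below are hypothetical placeholders, not study data):

```python
# Merge SHM-derived contacts with committee-nominated contacts,
# keeping the first entry seen for each program as its primary respondent.
# All values are illustrative placeholders, not study data.

def merge_contacts(shm_list, committee_list):
    """Combine two contact lists, deduplicating on the program name."""
    merged = {}
    for contact in shm_list + committee_list:
        program = contact["program"]
        if program not in merged:  # first occurrence wins
            merged[program] = contact
    return list(merged.values())

shm = [{"program": "Hospital A", "name": "Chief A"}]
committee = [
    {"program": "Hospital A", "name": "Lead A2"},  # duplicate program
    {"program": "Hospital B", "name": "Chief B"},
]
sample = merge_contacts(shm, committee)
print(len(sample))  # 2 unique programs, one respondent each
```

In this sketch the SHM database entry takes precedence; the committee's "second pass" only adds programs the database missed.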

Survey Development

A workgroup within the SHM Research Committee was tasked to create a survey that would achieve four distinct goals: (1) identify institutions currently engaging in hospital-based research; (2) define the characteristics, including sources of research funding, training opportunities, criteria for promotion, and grant support, of research programs within institutions; (3) understand the prevalence of research fellowship programs, including size, training curricula, and funding sources; and (4) evaluate the productivity and funding sources of HM investigators at each site.

Survey questions that target each of these domains were drafted by the workgroup. Questions were pretested with colleagues outside the workgroup focused on this project (ie, from the main research committee). The instrument was refined and edited to improve the readability and clarity of questions on the basis of the feedback obtained through the iterative process. The revised instrument was then programmed into an online survey administration tool (SurveyMonkey®) to facilitate electronic dissemination. Finally, the members of the workgroup tested the online survey to ensure functionality. No identifiable information was collected from respondents, and no monetary incentive was offered for the completion of the survey. An invitation to participate in the survey was sent via e-mail to each of the program contacts identified.


Statistical Analysis

Descriptive statistics, including proportions, means, and percentages, were used to tabulate results. All analyses were conducted using Stata 13 MP/SE (StataCorp, College Station, Texas).
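
For readers without Stata, the same descriptive tabulations (means, standard deviations, and proportions) can be reproduced with any statistics library. The sketch below uses Python's standard library on invented illustrative values, not the study data:

```python
import statistics

# Hypothetical faculty counts per responding program (illustrative only)
faculty_counts = [8, 25, 40, 57, 60, 90, 152]

mean = statistics.mean(faculty_counts)
sd = statistics.stdev(faculty_counts)  # sample standard deviation

# Illustrative flags for whether each program is university based
university = [True, True, False, True, True, False, True]
prop_university = sum(university) / len(university)

print(f"mean={mean:.1f}, sd={sd:.1f}, university={prop_university:.0%}")
```

Note that `statistics.stdev` computes the sample standard deviation (n − 1 denominator); `statistics.pstdev` would give the population version.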

Ethical and Regulatory Considerations

The study was reviewed and deemed exempt from regulation by the University of Michigan Institutional Review Board (HUM000138628).

RESULTS

General Characteristics of Research Programs and Faculty

Out of 100 program contacts, 28 (representing 1,586 faculty members) responded and were included in the survey (program response rate = 28%). When comparing programs that responded with those that did not, a greater proportion of university-based programs was noted among respondents (79% vs 21%). Respondents represented programs from all regions of the United States, with most representing university-based (79%), university-affiliated (14%), or Veterans Health Administration (VHA; 11%) programs. Most respondents were in leadership roles, including division chiefs (32%), research directors/leads (21%), section chiefs (18%), and related titles, such as program director. Respondents indicated that the total number of faculty members in their programs (including nonclinicians and advanced practice providers) varied from eight to 152 (mean [SD] = 57 [36]) members, with physicians representing the majority of faculty members (Table 1).

Among the 1,586 faculty members within the 28 programs, respondents identified 192 faculty members (12%) as currently receiving extra- or intramural support for research activities. Of these faculty, over half (58%) received <25% of effort from intra- or extramural sources, and 28 (15%) and 52 (27%) faculty members received 25%-50% or >50% of support for their effort, respectively. The number of investigators who received funding across programs ranged from 0 to 28 faculty members. Compared with the 192 funded investigators, respondents indicated that a larger number of faculty in their programs (n = 656 or 41%) were involved in local quality improvement (QI) efforts. Of the 656 faculty members involved in QI efforts, 241 individuals (37%) were internally funded and received protected time/effort for their work.
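
As a quick arithmetic check, the percentages reported above follow directly from the counts given in the text:

```python
# Counts reported in the survey results above
total_faculty = 1586
research_funded = 192
qi_involved = 656
qi_funded = 241

# Each reported percentage is the rounded ratio of the corresponding counts
assert round(100 * research_funded / total_faculty) == 12  # 12% research funded
assert round(100 * qi_involved / total_faculty) == 41      # 41% involved in QI
assert round(100 * qi_funded / qi_involved) == 37          # 37% of QI faculty funded
print("reported percentages are consistent with the counts")
```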

Key Attributes of Research Programs

In the evaluation of the amount of total grant funding, respondents from 17 programs indicated that they received <$500,000 in annual extra- and intramural funding, and those from three programs stated that they received $500,000 to $999,999 in funding. Five respondents indicated that their programs currently received $1 million to $5 million in grant funding, and three reported >$5 million in research support. The sources of research funding included several divisions within the National Institutes of Health (NIH, 12 programs), Agency for Healthcare Research and Quality (AHRQ, four programs), foundations (four programs), and internal grants (six programs). Additionally, six programs indicated “other” sources of funding that included the VHA, Patient-Centered Outcomes Research Institute (PCORI), Centers for Medicare and Medicaid Services, Centers for Disease Control (CDC), and industry sources.

A range of award types was reported, including career development awards (11 programs); small grants, such as R21 and R03s (eight programs); R-level grants, including VA merit awards (five programs); program series grants, such as P and U grants (five programs); and foundation grants (eight programs). Respondents from 16 programs indicated that they provided internal pilot grants. Amounts for such grants ranged from <$50,000 (14 programs) to $50,000-$100,000 (two programs).


Research Fellowship Programs/Training Programs

Only five of the 28 surveyed programs indicated that they currently had a research training or fellowship program for developing hospitalist investigators. The age of these programs varied from <1 year to 10 years. Three of the five programs stated that they had two fellows per year, and two stated that they had spots for one trainee annually. All respondents indicated that fellows received training on study design, research methods, and quantitative (eg, large database and secondary analyses) and qualitative data analysis. In addition, two programs included training in systematic review and meta-analyses, and three included focused courses on healthcare policy. Four of the five programs included training in QI tools, such as LEAN and Six Sigma. Funding for four of the five fellowship programs came from internal sources (eg, department and CTSA). However, two programs added that they received some support from extramural funding and philanthropy. Following training, respondents indicated that the majority of their graduates (60%) went on to hybrid research/QI roles (50/50 research/clinical effort), whereas 40% obtained dedicated research investigator (80/20) positions (Table 2).

The 23 institutions without research training programs cited lack of funding (12 programs) and lack of a pipeline of hospitalists seeking such training (six programs) as the most important barriers to establishing such programs. However, 15 programs indicated that opportunities for hospitalists to gain research training in the form of courses were available internally (eg, courses in the department or medical school) or externally (eg, School of Public Health). Seven programs indicated that they were planning to start an HM research fellowship within the next five years.

Research Faculty

Among the 28 respondents, 15 stated that they have faculty members who conduct research as their main professional activity (ie, >50% effort). The number of faculty members in each program in such roles varied from one to 10. Respondents indicated that faculty members in this category were most often midcareer assistant or associate professors, with few full professors. All programs indicated that scholarship in the form of peer-reviewed publications was required for the promotion of faculty. Faculty members who performed research as their main activity had all received formal fellowship training and consequently had dual degrees (MD with MPH and MD with MSc being the two most common combinations). With respect to clinical activities, most respondents indicated that research faculty spent 10% to 49% of their effort on clinical work. However, five respondents indicated that research faculty had <10% effort on clinical duties (Table 3).

Eleven respondents (39%) identified the main focus of faculty as health services research, whereas four (14%) identified their main focus as clinical trials. Regardless of funding status, all respondents stated that their faculty were interested in studying quality and process improvement efforts (eg, transitions or readmissions, n = 19), patient safety initiatives (eg, hospital-acquired complications, n = 17), and disease-specific areas (eg, thrombosis, n = 15).

In terms of research output, 12 respondents stated that their research/QI faculty collectively published 11-50 peer-reviewed papers during the academic year, and 10 programs indicated that their faculty published 0-10 papers per year. Only three programs reported that their faculty collectively published 50-99 peer-reviewed papers per year. With respect to abstract presentations at national conferences, 13 programs indicated that they presented 0-10 abstracts, and 12 indicated that they presented 11-50.


DISCUSSION

In this first survey quantifying research activities in HM, respondents from 28 programs shared important insights into research activities at their institutions. Although our sample size was small, substantial variation in the size, composition, and structure of research programs in HM was observed among respondents. For example, few respondents indicated the availability of training programs for research in HM at their institutions. Similarly, among faculty who focused mainly on research, variation in funding streams and effort protection was observed. A preponderance of midcareer faculty with a range of funding sources, including NIH, AHRQ, VHA, CMS, and CDC, was reported. Collectively, these data not only provide a unique glimpse into the state of research in HM but also help establish a baseline of the status of the field at large.

Some findings of our study are intuitive given our sampling strategy and the types of programs that responded. For example, the fact that most respondents for research programs represented university-based or affiliated institutions is expected given the tripartite academic mission. However, even within our sample of highly motivated programs, some findings are surprising and merit further exploration. For example, the observation that some respondents identified HM investigators within their program with <25% in intra- or extramural funding was unexpected. On the other extreme, we were surprised to find that three programs reported >$5 million in research funding. Understanding whether specific factors, such as the availability of experienced mentors within and outside departments or assistance from support staff (eg, statisticians and project managers), are associated with success and funding within these programs is an important question to answer. By focusing on these issues, we will be well poised as a field to understand what works, what does not work, and why.

Likewise, the finding that few programs within our sample offer formal training in the form of fellowships to research investigators represents an improvement opportunity. A pipeline for growing investigators is critical for a specialty such as HM. Notably, this call is not new; rather, previous investigators have highlighted the importance of developing academically oriented hospitalists for the future of the field.5 The implementation of faculty scholarship development programs has improved the scholarly output, mentoring activities, and succession planning of academics within HM.6,7 Conversely, lack of adequate mentorship and support for academic activities remains a challenge and is a factor associated with the failure to produce academic work.8 Without a cadre of investigators asking critical questions related to care delivery, the legitimacy of our field may be threatened.

While extrapolating to the field is difficult given the small number of our respondents, highlighting the progress that has been made is important. For example, while misalignment between funding and clinical and research missions persists, our survey found that several programs have been successful in securing extramural funding for their investigators. Additionally, internal funding for QI work appears to be increasing, with hospitalists receiving dedicated effort for much of this work. Innovation in how best to support and develop these types of efforts has also emerged. For example, the University of Michigan Specialist-Hospitalist Allied Research Program offers dedicated effort and funding for hospitalists tackling projects germane to HM (eg, ordering of blood cultures for febrile inpatients) that overlap with subspecialists (eg, infectious diseases).9 Thus, hospitalists are linked with other specialties in the development of research agendas and academic products. Similarly, the launch of the HOMERuN network, a coalition of investigators who bridge health systems to study problems central to HM, has helped usher in a new era of research opportunities in the specialty.10 Fundamentally, the culture of HM has begun to place an emphasis on academic and scholarly productivity in addition to clinical prowess.11-13 Increased support and funding for training programs geared toward innovation and research in HM are needed to continue this mission. The Society of General Internal Medicine, American College of Physicians, and SHM, as the largest professional organizations for generalists, have important roles to play in this respect. Support for research, QI, and investigators in HM remains an urgent and largely unmet need.

Our study has limitations. First, our response rate was low at 28% but is consistent with the response rates of other surveys of physician groups.14 Caution in making inferences to the field at large is necessary given the potential for selection and nonresponse bias. However, we expect that respondents are likely biased toward programs actively conducting research and engaged in QI, thus better reflecting the state of these activities in HM. Second, given that we did not ask for any identifying information, we have no way of establishing the accuracy of the data provided by respondents. However, we have no reason to believe that responses would be altered in a systematic fashion. Future studies that link our findings to publicly available data (eg, databases of active grants and funding) might be useful. Third, while our survey instrument was created and internally validated by hospitalist researchers, its lack of external validation could limit our findings. Finally, our results may vary on the basis of how respondents answered questions related to effort and time allocation given that these measures differ across programs.

In summary, the findings from this study highlight substantial variations in the number, training, and funding of research faculty across HM programs. Understanding the factors behind the success of some programs and the failures of others appears important in informing and growing research in the field. Future studies that aim to expand survey participation, raise awareness of the state of research in HM, and identify barriers and facilitators to academic success in HM are needed.


Disclosures

Dr. Chopra discloses grant funding from the Agency for Healthcare Research and Quality (AHRQ), VA Health Services and Research Department, and Centers for Disease Control. Dr. Jones discloses grant funding from AHRQ. All other authors disclose no conflicts of interest.

Almost all specialties in internal medicine have a sound scientific research base through which clinical practice is informed.1 For the field of Hospital Medicine (HM), this evidence has largely comprised research generated from fields outside the specialty. The need to develop, invest in, and grow investigators in hospital-based medicine remains unmet as HM and its footprint in hospital systems continue to expand.2,3

Despite this fact, little is known about the current state of research in HM. A 2014 survey of the members of the Society of Hospital Medicine (SHM) found that research output across the field of HM, as measured on the basis of peer-reviewed publications, was growing.4 Since then, however, the number of individuals engaged in research activities, their backgrounds and training, their publication output, and their funding sources have not been quantified. Similarly, little is known about which institutions support the development of junior investigators (ie, HM research fellowships), how these programs are funded, and whether or not matriculants enter the field as investigators. These gaps must be measured, evaluated, and ideally addressed through strategic policy and funding initiatives to advance the state of science within HM.

Members of the SHM Research Committee developed, designed, and deployed a survey to improve the understanding of the state of research in HM. In this study, we aimed to establish the baseline of research in HM to enable the measurement of progress through periodic waves of data collection. Specifically, we sought to quantify and describe the characteristics of existing research programs, the sources and types of funding, the number and background of faculty, and the availability of resources for training researchers in HM.

 

 

METHODS

Study Setting and Participants

Given that no defined list, database, or external resource that identifies research programs and contacts in HM exists, we began by creating a strategy to identify and sample adult HM programs and their leaders engaged in research activity. We iteratively developed a two-step approach to maximize inclusivity. First, we partnered with SHM to identify programs and leaders actively engaging in research activities. SHM is the largest professional organization within HM and maintains an extensive membership database that includes the titles, e-mail addresses, and affiliations of hospitalists in the United States, including academic and nonacademic sites. This list was manually scanned, and the leaders of academic and research programs in adult HM were identified by examining their titles (eg, Division Chief, Research Lead, etc.) and academic affiliations. During this step, members of the committee noticed that certain key individuals were missing, no longer occupied their role/title, or had been replaced by others. Therefore, we performed a second step and asked the members of the SHM Research Committee to identify academic and research leaders by using current personal contacts, publication history, and social networks. We asked members to identify individuals and programs that had received grant funding, were actively presenting research at SHM (or other major national venues), and/or were producing peer-reviewed publications related to HM. These programs were purposefully chosen (ie, over HM programs known for clinical activities) to create an enriched sample of those engaged in research in HM. The research committee performed this “second pass” to ensure that established investigators who may not be accurately captured within the SHM database were included, thereby maximizing yield for the survey. Finally, these two sources were merged to ensure the absence of duplicate contacts and the identification of a primary respondent for each affiliate.
As a result, a convenience sample of 100 programs and corresponding individuals was compiled for the purposes of this survey.

Survey Development

A workgroup within the SHM Research Committee was tasked to create a survey that would achieve four distinct goals: (1) identify institutions currently engaging in hospital-based research; (2) define the characteristics, including sources of research funding, training opportunities, criteria for promotion, and grant support, of research programs within institutions; (3) understand the prevalence of research fellowship programs, including size, training curricula, and funding sources; and (4) evaluate the productivity and funding sources of HM investigators at each site.

Survey questions targeting each of these domains were drafted by the workgroup. Questions were pretested with colleagues outside the workgroup (ie, members of the main research committee). The instrument was refined and edited to improve the readability and clarity of questions on the basis of feedback obtained through this iterative process. The revised instrument was then programmed into an online survey administration tool (SurveyMonkey®) to facilitate electronic dissemination. Finally, the members of the workgroup tested the online survey to ensure functionality. No identifiable information was collected from respondents, and no monetary incentive was offered for completion of the survey. An invitation to participate in the survey was sent via e-mail to each of the program contacts identified.

 

 

Statistical Analysis

Descriptive statistics, including proportions, means, and percentages, were used to tabulate results. All analyses were conducted using Stata 13 MP/SE (StataCorp, College Station, Texas).

Ethical and Regulatory Considerations

The study was reviewed and deemed exempt from regulation by the University of Michigan Institutional Review Board (HUM000138628).

RESULTS

General Characteristics of Research Programs and Faculty

Out of 100 program contacts, 28 (representing 1,586 faculty members) responded and were included in the survey (program response rate = 28%). When comparing programs that did respond with those that did not, a greater proportion of programs in university settings were noted among respondents (79% vs 21%). Respondents represented programs from all regions of the United States, with most representing university-based (79%), university-affiliated (14%), or Veterans Health Administration (VHA; 11%) programs. Most respondents were in leadership roles, including division chiefs (32%), research directors/leads (21%), section chiefs (18%), and related titles, such as program director. Respondents indicated that the total number of faculty members in their programs (including nonclinicians and advanced practice providers) varied from eight to 152 (mean [SD] = 57 [36]) members, with physicians representing the majority of faculty members (Table 1).

Among the 1,586 faculty members within the 28 programs, respondents identified 192 faculty members (12%) as currently receiving extra- or intramural support for research activities. Of these faculty, over half (58%) received support for <25% of their effort, while 28 (15%) and 52 (27%) faculty members received support for 25%-50% or >50% of their effort, respectively. The number of investigators who received funding across programs ranged from 0 to 28 faculty members. Compared with the 192 funded investigators, respondents indicated that a larger number of faculty in their programs (n = 656, or 41%) were involved in local quality improvement (QI) efforts. Of the 656 faculty members involved in QI efforts, 241 individuals (37%) were internally funded and received protected time/effort for their work.

Key Attributes of Research Programs

With respect to total grant funding, respondents from 17 programs indicated that they received <$500,000 in annual extra- and intramural funding, and those from three programs stated that they received $500,000 to $999,999 in funding. Five respondents indicated that their programs currently received $1 million to $5 million in grant funding, and three reported >$5 million in research support. The sources of research funding included several institutes within the National Institutes of Health (NIH; 12 programs), the Agency for Healthcare Research and Quality (AHRQ; four programs), foundations (four programs), and internal grants (six programs). Additionally, six programs indicated “other” sources of funding, including the VHA, the Patient-Centered Outcomes Research Institute (PCORI), the Centers for Medicare and Medicaid Services (CMS), the Centers for Disease Control and Prevention (CDC), and industry sources.

Respondents reported a range of award types, including career development awards (11 programs); small grants, such as R21s and R03s (eight programs); R-level grants, including VA merit awards (five programs); program series grants, such as P and U grants (five programs); and foundation grants (eight programs). Respondents from 16 programs indicated that they provided internal pilot grants, with amounts ranging from <$50,000 (14 programs) to $50,000-$100,000 (two programs).

 

 

Research Fellowship Programs/Training Programs

Only five of the 28 surveyed programs indicated that they currently had a research training or fellowship program for developing hospitalist investigators. The age of these programs varied from <1 year to 10 years. Three of the five programs stated that they had two fellows per year, and two stated that they had one trainee slot annually. All respondents indicated that fellows received training in study design, research methods, and quantitative (eg, large database and secondary analyses) and qualitative data analysis. In addition, two programs included training in systematic reviews and meta-analyses, and three included focused courses on healthcare policy. Four of the five programs included training in QI tools, such as Lean and Six Sigma. Funding for four of the five fellowship programs came from internal sources (eg, departmental funds and Clinical and Translational Science Awards [CTSAs]). However, two programs added that they received some support from extramural funding and philanthropy. Following training, respondents indicated that the majority of their graduates (60%) went on to hybrid research/QI roles (50/50 research/clinical effort), whereas 40% obtained dedicated research investigator (80/20) positions (Table 2).

The 23 institutions without research training programs cited lack of funding (12 programs) and lack of a pipeline of hospitalists seeking such training (six programs) as the most important barriers to establishing such programs. However, 15 programs indicated that opportunities for hospitalists to gain research training in the form of courses were available internally (eg, courses in the department or medical school) or externally (eg, a school of public health). Seven programs indicated that they were planning to start an HM research fellowship within the next five years.

Research Faculty

Among the 28 respondents, 15 stated that they have faculty members who conduct research as their main professional activity (ie, >50% effort). The number of faculty members in such roles varied from one to 10 per program. Respondents indicated that faculty members in this category were most often midcareer assistant or associate professors, with few full professors. All programs indicated that scholarship in the form of peer-reviewed publications was required for the promotion of faculty. Faculty members who performed research as their main activity had all received formal fellowship training and consequently held dual degrees (MD with MPH or MD with MSc being the two most common combinations). With respect to clinical activities, most respondents indicated that research faculty spent 10% to 49% of their effort on clinical work. However, five respondents indicated that research faculty spent <10% of their effort on clinical duties (Table 3).

Eleven respondents (39%) identified the main focus of faculty as health services research, whereas four (14%) identified their main focus as clinical trials. Regardless of funding status, all respondents stated that their faculty were interested in studying quality and process improvement efforts (eg, transitions or readmissions, n = 19), patient safety initiatives (eg, hospital-acquired complications, n = 17), and disease-specific areas (eg, thrombosis, n = 15).

In terms of research output, 12 respondents stated that their research/QI faculty collectively published 11-50 peer-reviewed papers during the academic year, and 10 programs indicated that their faculty published 0-10 papers per year. Only three programs reported that their faculty collectively published 50-99 peer-reviewed papers per year. With respect to abstract presentations at national conferences, 13 programs indicated that they presented 0-10 abstracts, and 12 indicated that they presented 11-50.

 

 

DISCUSSION

In this first survey quantifying research activities in HM, respondents from 28 programs shared important insights into research activities at their institutions. Although our sample size was small, we observed substantial variation in the size, composition, and structure of research programs in HM among respondents. For example, few respondents indicated the availability of training programs for research in HM at their institutions. Similarly, among faculty who focused mainly on research, variation in funding streams and effort protection was observed. A preponderance of midcareer faculty with a range of funding sources, including the NIH, AHRQ, VHA, CMS, and CDC, was reported. Collectively, these data not only provide a unique glimpse into the state of research in HM but also help establish a baseline of the status of the field at large.

Some findings of our study are intuitive given our sampling strategy and the types of programs that responded. For example, the fact that most respondents for research programs represented university-based or university-affiliated institutions is expected given the tripartite academic mission. However, even within our sample of highly motivated programs, some findings are surprising and merit further exploration. For example, the observation that some respondents identified HM investigators within their program with <25% intra- or extramural funding support was unexpected. On the other extreme, we were surprised to find that three programs reported >$5 million in research funding. Understanding whether specific factors, such as the availability of experienced mentors within and outside departments or assistance from support staff (eg, statisticians and project managers), are associated with success and funding within these programs is an important question to answer. By focusing on these issues, we will be well poised as a field to understand what works, what does not, and why.

Likewise, the finding that few programs within our sample offer formal training in the form of fellowships to research investigators represents an improvement opportunity. A pipeline for growing investigators is critical for a specialty such as HM. Notably, this call is not new; previous investigators have highlighted the importance of developing academically oriented hospitalists for the future of the field.5 The implementation of faculty scholarship development programs has improved the scholarly output, mentoring activities, and succession planning of academics within HM.6,7 Conversely, lack of adequate mentorship and support for academic activities remains a challenge and is a factor associated with failure to produce academic work.8 Without a cadre of investigators asking critical questions related to care delivery, the legitimacy of our field may be threatened.

While extrapolating to the field is difficult given the small number of respondents, highlighting the progress that has been made is important. For example, while misalignment between funding and clinical and research missions persists, our survey found that several programs have been successful in securing extramural funding for their investigators. Additionally, internal funding for QI work appears to be increasing, with hospitalists receiving dedicated effort for much of this work. Innovations in how best to support and develop these types of efforts have also emerged. For example, the University of Michigan Specialist-Hospitalist Allied Research Program offers dedicated effort and funding for hospitalists tackling projects germane to HM (eg, ordering of blood cultures for febrile inpatients) that overlap with subspecialties (eg, infectious diseases).9 Thus, hospitalists are linked with other specialties in the development of research agendas and academic products. Similarly, the launch of the HOMERUN network, a coalition of investigators who bridge health systems to study problems central to HM, has helped usher in a new era of research opportunities in the specialty.10 Fundamentally, the culture of HM has begun to place an emphasis on academic and scholarly productivity in addition to clinical prowess.11-13 Increased support and funding for training programs geared toward innovation and research in HM are needed to continue this mission. As the largest professional organizations for generalists, the Society of General Internal Medicine, the American College of Physicians, and SHM have important roles to play in this respect. Support for research, QI, and investigators in HM remains an urgent and largely unmet need.

Our study has limitations. First, our response rate was low at 28% but is consistent with the response rates of other surveys of physician groups.14 Caution in making inferences to the field at large is necessary given the potential for selection and nonresponse bias. However, we expect that respondents are likely biased toward programs actively conducting research and engaged in QI, thus better reflecting the state of these activities in HM. Second, given that we did not ask for any identifying information, we have no way of establishing the accuracy of the data provided by respondents. However, we have no reason to believe that responses would be altered in a systematic fashion. Future studies that link our findings to publicly available data (eg, databases of active grants and funding) might be useful. Third, while our survey instrument was created and internally validated by hospitalist researchers, its lack of external validation could limit findings. Finally, our results depend on how respondents interpreted questions related to effort and time allocation, given that these measures differ across programs.

In summary, the findings from this study highlight substantial variations in the number, training, and funding of research faculty across HM programs. Understanding the factors behind the success of some programs and the failures of others is important for informing and growing research in the field. Future studies that expand survey participation, raise awareness of the state of research in HM, and identify barriers and facilitators to academic success in HM are needed.

 

 

Disclosures

Dr. Chopra discloses grant funding from the Agency for Healthcare Research and Quality (AHRQ), VA Health Services and Research Department, and Centers for Disease Control. Dr. Jones discloses grant funding from AHRQ. All other authors disclose no conflicts of interest.

References

1. International Working Party to Promote and Revitalise Academic Medicine. Academic medicine: the evidence base. BMJ. 2004;329(7469):789-792.
2. Flanders SA, Saint S, McMahon LF, Howell JD. Where should hospitalists sit within the academic medical center? J Gen Intern Med. 2008;23(8):1269-1272.
3. Flanders SA, Centor B, Weber V, McGinn T, Desalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636-641.
4. Dang Do AN, Munchhof AM, Terry C, Emmett T, Kara A. Research and publication trends in hospital medicine. J Hosp Med. 2014;9(3):148-154.
5. Harrison R, Hunter AJ, Sharpe B, Auerbach AD. Survey of US academic hospitalist leaders about mentorship and academic activities in hospitalist groups. J Hosp Med. 2011;6(1):5-9.
6. Sehgal NL, Sharpe BA, Auerbach AA, Wachter RM. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6(3):161-166.
7. Nagarur A, O’Neill RM, Lawton D, Greenwald JL. Supporting faculty development in hospital medicine: design and implementation of a personalized structured mentoring program. J Hosp Med. 2018;13(2):96-99.
8. Reid MB, Misky GJ, Harrison RA, Sharpe B, Auerbach A, Glasheen JJ. Mentorship, productivity, and promotion among academic hospitalists. J Gen Intern Med. 2012;27(1):23-27.
9. Flanders SA, Kaufman SR, Nallamothu BK, Saint S. The University of Michigan Specialist-Hospitalist Allied Research Program: jumpstarting hospital medicine research. J Hosp Med. 2008;3(4):308-313.
10. Auerbach AD, Patel MS, Metlay JP, et al. The Hospital Medicine Reengineering Network (HOMERuN): a learning organization focused on improving hospital care. Acad Med. 2014;89(3):415-420.
11. Souba WW. Academic medicine’s core values: what do they mean? J Surg Res. 2003;115(2):171-173.
12. Bonsall J, Chopra V. Building an academic pipeline: a combined society of hospital medicine committee initiative. J Hosp Med. 2016;11(10):735-736.
13. Sweigart JR, Tad YD, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12(3):173-176.
14. Cunningham CT, Quan H, Hemmelgarn B, et al. Exploring physician specialist response rates to web-based surveys. BMC Med Res Methodol. 2015;15(1):32.


Issue
Journal of Hospital Medicine 14(4)
Page Number
207-211

© 2019 Society of Hospital Medicine

Correspondence Location
Vineet Chopra MD, MSc; E-mail: [email protected]; Telephone: 734-936-4000; Twitter: @vineet_chopra

Lean-Based Redesign of Multidisciplinary Rounds on General Medicine Service


Given that multiple disciplines are often involved in caring for patients admitted to the hospital, timely communication, collaboration, and coordination among those disciplines are necessary for safe and effective patient care.1 With the focus on improving patient satisfaction and throughput in hospitals, it is also important to predict the discharge date more accurately and to allow time for patients and their families to prepare for discharge.2-4

Multidisciplinary rounds (MDR) are defined as structured daily communication among key members of the patient’s care team (eg, nurses, physicians, case managers, social workers, pharmacists, and rehabilitation services). MDR have been shown to be a useful strategy for ensuring that all members of the care team are updated on the patient’s plan of care.5 During MDR, a brief “check-in” covering the patient’s plan of care, pending needs, and barriers to discharge allows all team members, patients, and families to coordinate care effectively and to plan and prepare for discharge.

Multiple studies have reported increased collaboration and improved communication between disciplines with the use of such multidisciplinary rounding.2,5-7 Additionally, MDR have been shown to improve patient outcomes8 and reduce adverse events,9 length of stay (LOS),6,8 cost of care,8 and readmissions.1

We redesigned MDR on the general medicine wards at our institution in October 2014 using Lean management techniques. Lean is defined as a set of philosophies and methods that aim to transform thinking, behavior, and culture in each process, with the goal of maximizing value for patients and providers, adding efficiency, and reducing waste and waits.10

In this study, we evaluate whether this new model of MDR was associated with a decrease in LOS. We also evaluate whether it was associated with an increase in discharges before noon, documentation of the estimated discharge date (EDD) in our electronic health record (EHR), and patient satisfaction.

METHODS

Setting, Design, and Patients

The study was conducted on the teaching general medicine service at our institution, an urban, 484-bed academic hospital. The general medicine service has patients on 4 inpatient units (total of 95 beds) and is managed by 5 teaching service teams.

We performed a pre-post study. The preperiod (in which the old model of MDR was followed) included 4,000 patients discharged between September 1, 2013, and October 22, 2014. The postperiod (in which the new model of MDR was followed) included 2,085 patients discharged between October 23, 2014, and April 30, 2015. We excluded 139 patients who died in the hospital prior to discharge, as well as patients on the nonteaching and/or private practice service.

All data were provided by our institution’s Digital Solutions Department. Our institutional review board issued a letter of determination exempting this study from further review because it was deemed to be a quality improvement initiative.

Use of Lean Management to Redesign our MDR

Our institution has incorporated the Lean management system to continually add value to services through the elimination of waste, thus simultaneously optimizing the quality of patient care, cost, and patient satisfaction.11 Lean, derived from the Toyota Production System, has long been used in manufacturing and in recent decades has spread to healthcare.12 We leveraged the following 3 key Lean techniques to redesign our MDR: (1) value stream management (VSM), (2) rapid process improvement workshops (RPIW), and (3) active daily management (ADM), as detailed in supplementary Appendix 1.

Interventions

Our interventions, comparing the old model of MDR with the new model, are shown in Table 1. The purpose of these interventions was to (1) increase provider engagement and input in discharge planning, (2) improve early identification of patient discharge needs, (3) clearly define roles and responsibilities for each team member, and (4) provide visual feedback regarding the patient care plan for all members of the care team, even those not present at MDR.

Outcomes

The primary outcome was mean LOS. The secondary outcomes were (1) discharges before noon, (2) recording of the EDD in our EHR within 24 hours of admission (as time stamped on our EHR), and (3) patient satisfaction.

 

 

Data for patient satisfaction were obtained using the Press Ganey survey. We used data on patient satisfaction scores for the following 2 relevant questions on this survey: (1) extent to which the patient felt ready to be discharged and (2) how well staff worked together to care for the patient. Proportions of the “top-box” (“very good”) were used for the analysis. These survey data were available on 467 patients (11.7%) in the preperiod and 188 patients (9.0%) in the postperiod.

Data Analysis

Absolute differences in mean LOS (in days) or changes in percentage, with their corresponding 95% confidence intervals (CIs), were calculated for all outcome measures across the pre- and postperiods. Two-tailed t tests were used to calculate P values for continuous variables. LOS was truncated at 30 days to minimize the influence of outliers. A multiple regression model was also run to assess change in mean LOS adjusted for the patient’s case mix index (CMI), a measure of patient acuity (Table 3). CMI is a relative value assigned to a diagnosis-related group of patients in a medical care environment and is used in determining the allocation of resources to care for and/or treat the patients in the group.
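For the binary outcomes (eg, discharge before noon), the pre-post change in percentage and its 95% CI reduce to a difference of two proportions. The sketch below is illustrative only: it uses standard-library Python rather than the Stata/R used in the study, a Wald (normal-approximation) interval rather than the study's exact method, and hypothetical counts, not the study's raw data.

```python
import math

def diff_of_proportions(x_pre, n_pre, x_post, n_post, z=1.96):
    """Post-minus-pre difference of two proportions with a Wald 95% CI."""
    p_pre, p_post = x_pre / n_pre, x_post / n_post
    diff = p_post - p_pre
    # Standard error of the difference of two independent proportions
    se = math.sqrt(p_pre * (1 - p_pre) / n_pre + p_post * (1 - p_post) / n_post)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts for illustration (not the study's raw data):
# 400/4000 discharges before noon in the preperiod vs 290/2085 after.
diff, (lo, hi) = diff_of_proportions(400, 4000, 290, 2085)
print(f"change = {diff:+.1%}, 95% CI ({lo:+.1%} to {hi:+.1%})")
```

Note that the interval width is driven largely by the smaller postperiod sample.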

A sensitivity analysis was conducted on a second cohort that included a subset of patients from the preperiod between November 1, 2013, and April 30, 2014, and a subset of patients from the postperiod between November 1, 2014, and April 1, 2015, to control for the calendar period (supplementary Appendix 2).

All analyses were conducted in R version 3.3.0, with the linear mixed-effects model lme4 statistical package.13,14

RESULTS

Table 2 shows patient characteristics in the pre- and postperiods. There were no significant differences in age, sex, race and/or ethnicity, language, or CMI between patients in the pre- and postperiods. Discharge volume was higher by 1.3 patients per day in the postperiod than in the preperiod (P < .001).

Table 3 shows the differences in the outcomes between the pre- and postperiods. There was no change in the LOS or LOS adjusted for CMI. There was a 3.9% increase in discharges before noon in the postperiod compared with the preperiod (95% CI, 2.4% to 5.3%; P < .001). There was a 9.9% increase in the percentage of patients for whom the EDD was recorded in our EHR within 24 hours of admission (95% CI, 7.4% to 12.4%; P < .001). There was no change in the “top-box” patient satisfaction scores.

There were only marginal differences in the results between the entire cohort and a second subset cohort used for sensitivity analysis (supplementary Appendix 2).

DISCUSSION

In our study, there was no change in the mean LOS with the new model of MDR. There was an increase in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod when the Lean-based new model of MDR was utilized. There was no change in patient satisfaction. With no change in staffing, we were able to accommodate the increase in the discharge volume in the postperiod.

We believe our results are attributable to several factors, including clearly defined roles and responsibilities for all participants of MDR, the inclusion of more experienced general medicine attending physician (compared with housestaff), Lean management techniques to identify gaps in the patient’s journey from emergency department to discharge using VSM, the development of appropriate workflows and standard work on how the multidisciplinary teams would work together at RPIWs, and ADM to ensure sustainability and engagement among frontline members and institutional leaders. In order to sustain this, we planned to continue monitoring data in daily, weekly, and monthly forums with senior physician and administrative leaders. Planning for additional interventions is underway, including moving MDR to the bedside, instituting an afternoon “check-in” that would enable more detailed action planning, and addressing barriers in a timely manner for patients ready to discharge the following day.

Our study has a few limitations. First, this is an observational study that cannot determine causation. Second, this is a single-center study conducted on patients only on the general medicine teaching service. Third, there were several concurrent interventions implemented at our institution to improve LOS, throughput, and patient satisfaction in addition to MDR, thus making it difficult to isolate the impact of our intervention. Fourth, in the new model of MDR, rounds took place only 5 days per week, thereby possibly limiting the potential impact on our outcomes. Fifth, while we showed improvements in the discharges before noon and recording of EDD in the post period, we were not able to achieve our target of 25% discharges before noon or 100% recording of EDD in this time period. We believe the limited amount of time between the pre- and postperiods to allow for adoption and learning of the processes might have contributed to the underestimation of the impact of the new model of MDR, thereby limiting our ability to achieve our targets. Sixth, the response rate on the Press Ganey survey was low, and we did not directly survey patients or families for their satisfaction with MDR.

Our study has several strengths. To our knowledge, this is the first study to embed Lean management techniques in the design of MDR in the inpatient setting. While several studies have demonstrated improvements in discharges before noon through the implementation of MDR, they have not incorporated Lean management techniques, which we believe are critical to ensure the sustainability of results.1,3,5,6,8,15 Second, while it was not measured, there was a high level of provider engagement in the process in the new model of MDR. Third, because the MDR were conducted at the nurse’s station on each inpatient unit in the new model instead of in a conference room, it was well attended by all members of the multidisciplinary team. Fourth, the presence of a visibility board allowed for all team members to have easy access to visual feedback throughout the day, even if they were not present at the MDR. Fifth, we believe that there was also more accurate estimation of the date and time of discharge in the new model of MDR because the discussion was facilitated by the case manager, who is experienced in identifying barriers to discharge (compared with the housestaff in the old model of MDR), and included the more experienced attending physician. Finally, the consistent presence of a multidisciplinary team at MDR allowed for the incorporation of everyone’s concerns at one time, thereby limiting the need for paging multiple disciplines throughout the day, which led to quicker resolution of issues and assignment of pending tasks.

In conclusion, our study shows no change in the mean LOS when the Lean-based model of MDR was utilized. Our study demonstrates an increase in discharges before noon and in recording of EDD on our EHR within 24 hours of admission in the post period when the Lean-based model of MDR was utilized. There was no change in patient satisfaction. While this study was conducted at an academic medical center on the general medicine wards, we believe our new model of MDR, which leveraged Lean management techniques, may successfully impact patient flow in all inpatient clinical services and nonteaching hospitals.

 

 

Disclosure

The authors report no financial conflicts of interest and have nothing to disclose.

Files
References

1. Townsend-Gervis M, Cornell P, Vardaman JM. Interdisciplinary Rounds and Structured Communication Reduce Re-Admissions and Improve Some Patient Outcomes. West J Nurs Res. 2014;36(7):917-928. PubMed
2. Vazirani S, Hays RD, Shapiro MF, Cowan M. Effect of a multidisciplinary intervention on communication and collaboration among physicians and nurses. Am J Crit Care. 2005;14(1):71-77. PubMed
3. Wertheimer B, Jacobs RE, Bailey M, et al. Discharge before noon: an achievable hospital goal. J Hosp Med. 2014;9(4):210-214. PubMed
4. Wertheimer B, Jacobs RE, Iturrate E, Bailey M, Hochman K. Discharge before noon: Effect on throughput and sustainability. J Hosp Med. 2015;10(10):664-669. PubMed
5. Halm MA, Gagner S, Goering M, Sabo J, Smith M, Zaccagnini M. Interdisciplinary rounds: impact on patients, families, and staff. Clin Nurse Spec. 2003;17(3):133-142. PubMed
6. O’Mahony S, Mazur E, Charney P, Wang Y, Fine J. Use of multidisciplinary rounds to simultaneously improve quality outcomes, enhance resident education, and shorten length of stay. J Gen Intern Med. 2007;22(8):1073-1079. PubMed
7. Reimer N, Herbener L. Round and round we go: rounding strategies to impact exemplary professional practice. Clin J Oncol Nurs. 2014;18(6):654-660. PubMed
8. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(8 Suppl):AS4-AS12. PubMed
9. Baggs JG, Ryan SA, Phelps CE, Richeson JF, Johnson JE. The association between interdisciplinary collaboration and patient outcomes in a medical intensive care unit. Heart Lung. 1992;21(1):18-24. PubMed
10. Lawal AK, Rotter T, Kinsman L, et al. Lean management in health care: definition, concepts, methodology and effects reported (systematic review protocol). Syst Rev. 2014;3:103. PubMed
11. Liker JK. Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer. New York, Chicago, San Francisco, Athens, London, Madrid, Mexico City, Milan, New Delhi, Singapore, Sydney, Toronto: McGraw-Hill Education; 2004. 
12. Kane M, Chui K, Rimicci J, et al. Lean Manufacturing Improves Emergency Department Throughput and Patient Satisfaction. J Nurs Adm. 2015;45(9):429-434. PubMed
13. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2016. http://www.R-project.org/. Accessed November 7, 2017.
14. Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Softw. 2015;67(1):1-48. 
15. O’Leary KJ, Buck R, Fligiel HM, et al. Structured interdisciplinary rounds in a medical teaching unit: improving patient safety. Arch Intern Med. 2011;171(7):678-684. PubMed

Article PDF
Issue
Journal of Hospital Medicine 13(7)
Publications
Topics
Page Number
482-485. Published online first February 2, 2018
Sections
Files
Files
Article PDF
Article PDF
Related Articles

Given that multiple disciplines are often involved in caring for hospitalized patients, timely communication, collaboration, and coordination among those disciplines are necessary for safe and effective patient care.1 With the growing focus on improving patient satisfaction and throughput in hospitals, it is also important to predict the discharge date more accurately and to allow time for patients and their families to prepare for discharge.2-4

Multidisciplinary rounds (MDR) are defined as structured daily communication among key members of the patient’s care team (eg, nurses, physicians, case managers, social workers, pharmacists, and rehabilitation services). MDR have been shown to be a useful strategy for ensuring that all members of the care team are updated on the patient’s plan of care.5 During MDR, a brief “check-in” discussing the patient’s plan of care, pending needs, and barriers to discharge allows all team members, patients, and families to effectively coordinate care and plan and prepare for discharge.

Multiple studies have reported increased collaboration and improved communication between disciplines with the use of such multidisciplinary rounding.2,5-7 Additionally, MDR have been shown to improve patient outcomes8 and reduce adverse events,9 length of stay (LOS),6,8 cost of care,8 and readmissions.1

We redesigned MDR on the general medicine wards at our institution in October 2014 by using Lean management techniques. Lean is defined as a set of philosophies and methods that aim to create transformation in thinking, behavior, and culture in each process, with the goal of maximizing the value for the patients and providers, adding efficiency, and reducing waste and waits.10

In this study, we evaluate whether this new model of MDR was associated with a decrease in the LOS. We also evaluate whether this new model of MDR was associated with an increase in discharges before noon, documentation of estimated discharge date (EDD) in our electronic health record (EHR), and patient satisfaction.

METHODS

Setting, Design, and Patients

The study was conducted on the teaching general medicine service at our institution, an urban, 484-bed academic hospital. The general medicine service has patients on 4 inpatient units (total of 95 beds) and is managed by 5 teaching service teams.

We performed a pre-post study. The preperiod (in which the old model of MDR was followed) included 4000 patients discharged between September 1, 2013, and October 22, 2014. The postperiod (in which the new model of MDR was followed) included 2085 patients discharged between October 23, 2014, and April 30, 2015. We excluded 139 patients who died in the hospital prior to discharge, as well as patients on the nonteaching and/or private practice services.

All data were provided by our institution’s Digital Solutions Department. Our institutional review board issued a letter of determination exempting this study from further review because it was deemed to be a quality improvement initiative.

Use of Lean Management to Redesign our MDR

Our institution has incorporated the Lean management system to continually add value to services through the elimination of waste, thus simultaneously optimizing the quality of patient care, cost, and patient satisfaction.11 Lean, derived from the Toyota Production System, has long been used in manufacturing and in recent decades has spread to healthcare.12 We leveraged the following 3 key Lean techniques to redesign our MDR: (1) value stream management (VSM), (2) rapid process improvement workshops (RPIW), and (3) active daily management (ADM), as detailed in supplementary Appendix 1.

Interventions

Our interventions, comparing the old model of MDR with the new model, are shown in Table 1. The purpose of these interventions was to (1) increase provider engagement and input in discharge planning, (2) improve early identification of patient discharge needs, (3) clearly define roles and responsibilities for each team member, and (4) provide visual feedback regarding the patient care plan to all members of the care team, even those not present at MDR.

Outcomes

The primary outcome was mean LOS. The secondary outcomes were (1) discharges before noon, (2) recording of the EDD in our EHR within 24 hours of admission (as time stamped on our EHR), and (3) patient satisfaction.

 

 

Data for patient satisfaction were obtained using the Press Ganey survey. We used data on patient satisfaction scores for the following 2 relevant questions on this survey: (1) extent to which the patient felt ready to be discharged and (2) how well staff worked together to care for the patient. Proportions of the “top-box” (“very good”) were used for the analysis. These survey data were available on 467 patients (11.7%) in the preperiod and 188 patients (9.0%) in the postperiod.

Data Analysis

Absolute differences in mean LOS (in days) or changes in percentage, with their corresponding 95% confidence intervals (CIs), were calculated for all outcome measures between the pre- and postperiods. Two-tailed t tests were used to calculate P values for continuous variables. LOS was truncated at 30 days to minimize the influence of outliers. A multiple regression model was also run to assess the change in mean LOS adjusted for the patient’s case mix index (CMI), a measure of patient acuity (Table 3). CMI is a relative value assigned to a diagnosis-related group of patients in a medical care environment and is used in determining the allocation of resources to care for and/or treat the patients in the group.
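The basic pre-post comparisons described above can be sketched in code. The helpers below are an illustrative Python sketch, not the authors' actual analysis (which was done in R): they truncate LOS at 30 days, compute an absolute change in a proportion with a normal-approximation 95% CI, and form a Welch t statistic for a continuous outcome. All function names and inputs are hypothetical.

```python
import math

def truncate_los(los_days, cap=30):
    # Cap LOS at 30 days to limit the influence of outliers, as in the study.
    return [min(x, cap) for x in los_days]

def diff_in_proportions(success_pre, n_pre, success_post, n_post, z=1.96):
    # Absolute change in a proportion (post minus pre) with a
    # normal-approximation 95% CI.
    p1, p2 = success_pre / n_pre, success_post / n_post
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n_pre + p2 * (1 - p2) / n_post)
    return diff, (diff - z * se, diff + z * se)

def welch_t(x, y):
    # Two-sample Welch t statistic for continuous outcomes such as mean LOS.
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (my - mx) / math.sqrt(vx / nx + vy / ny)
```

With the study's cohort sizes (4000 pre, 2085 post), `diff_in_proportions` would yield the kind of percentage-point difference and CI reported for discharges before noon.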

A sensitivity analysis was conducted on a second cohort that included a subset of patients from the preperiod between November 1, 2013, and April 30, 2014, and a subset of patients from the postperiod between November 1, 2014, and April 1, 2015, to control for the calendar period (supplementary Appendix 2).
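Constructing the calendar-matched sensitivity cohort amounts to filtering discharges to the overlapping November-April windows. A minimal Python sketch follows; the date windows come from the text, while the record format and function names are hypothetical.

```python
from datetime import date

# Pre/post windows matched on calendar months, per the sensitivity analysis.
PRE_WINDOW = (date(2013, 11, 1), date(2014, 4, 30))
POST_WINDOW = (date(2014, 11, 1), date(2015, 4, 1))

def in_window(discharge_date, window):
    start, end = window
    return start <= discharge_date <= end

def sensitivity_cohort(discharges):
    # Split (discharge_date, period) records into calendar-matched subsets.
    pre = [d for d, p in discharges if p == "pre" and in_window(d, PRE_WINDOW)]
    post = [d for d, p in discharges if p == "post" and in_window(d, POST_WINDOW)]
    return pre, post
```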

All analyses were conducted in R version 3.3.0, using the lme4 package for linear mixed-effects models.13,14
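As a rough illustration of what "adjusted for CMI" means operationally, the sketch below fits an ordinary least-squares model of LOS on a postperiod indicator and CMI by solving the normal equations. This is plain OLS in Python, not the linear mixed-effects model the authors fit with lme4; names and data are hypothetical.

```python
def solve3(a, b):
    # Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting.
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def cmi_adjusted_effect(los, post, cmi):
    # OLS of LOS on a postperiod indicator and CMI.
    # Returns [intercept, post_effect, cmi_effect]; post_effect is the
    # CMI-adjusted change in mean LOS.
    X = [[1.0, float(p), float(c)] for p, c in zip(post, cmi)]
    n = len(X)
    # Normal equations: (X'X) beta = X'y
    xtx = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
           for i in range(3)]
    xty = [sum(X[k][i] * los[k] for k in range(n)) for i in range(3)]
    return solve3(xtx, xty)
```

On data generated exactly as `los = 2 + 0.5*post + 3*cmi`, the fit recovers those coefficients, so the `post_effect` term isolates the period difference net of acuity.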

RESULTS

Table 2 shows patient characteristics in the pre- and postperiods. There were no significant differences in age, sex, race and/or ethnicity, language, or CMI between patients in the pre- and postperiods. Discharge volume was higher by 1.3 patients per day in the postperiod compared with the preperiod (P < .001).

Table 3 shows the differences in the outcomes between the pre- and postperiods. There was no change in the LOS or LOS adjusted for CMI. There was a 3.9% increase in discharges before noon in the postperiod compared with the preperiod (95% CI, 2.4% to 5.3%; P < .001). There was a 9.9% increase in the percentage of patients for whom the EDD was recorded in our EHR within 24 hours of admission (95% CI, 7.4% to 12.4%; P < .001). There was no change in the “top-box” patient satisfaction scores.

There were only marginal differences in the results between the entire cohort and a second subset cohort used for sensitivity analysis (supplementary Appendix 2).

DISCUSSION

In our study, there was no change in the mean LOS with the new model of MDR. There was an increase in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod, when the Lean-based new model of MDR was utilized. There was no change in patient satisfaction. With no change in staffing, we were able to accommodate the increased discharge volume in the postperiod.

We believe our results are attributable to several factors: clearly defined roles and responsibilities for all participants in MDR; the inclusion of a more experienced general medicine attending physician (compared with housestaff); Lean management techniques to identify gaps in the patient’s journey from emergency department to discharge using VSM; the development, at RPIWs, of appropriate workflows and standard work for how the multidisciplinary teams would work together; and ADM to ensure sustainability and engagement among frontline members and institutional leaders. To sustain these gains, we planned to continue monitoring data in daily, weekly, and monthly forums with senior physician and administrative leaders. Planning for additional interventions is underway, including moving MDR to the bedside, instituting an afternoon “check-in” to enable more detailed action planning, and addressing barriers in a timely manner for patients ready to discharge the following day.

Our study has a few limitations. First, this is an observational study that cannot determine causation. Second, this is a single-center study conducted only on patients on the general medicine teaching service. Third, several concurrent interventions to improve LOS, throughput, and patient satisfaction were implemented at our institution in addition to MDR, making it difficult to isolate the impact of our intervention. Fourth, in the new model of MDR, rounds took place only 5 days per week, possibly limiting the potential impact on our outcomes. Fifth, while we showed improvements in discharges before noon and recording of the EDD in the postperiod, we did not achieve our targets of 25% discharges before noon or 100% recording of the EDD in this time period. We believe the limited time between the pre- and postperiods for adoption and learning of the new processes may have led to an underestimation of the impact of the new model of MDR, limiting our ability to achieve our targets. Sixth, the response rate on the Press Ganey survey was low, and we did not directly survey patients or families for their satisfaction with MDR.

Our study has several strengths. First, to our knowledge, this is the first study to embed Lean management techniques in the design of MDR in the inpatient setting. While several studies have demonstrated improvements in discharges before noon through the implementation of MDR, they have not incorporated Lean management techniques, which we believe are critical to ensuring the sustainability of results.1,3,5,6,8,15 Second, while it was not measured, there was a high level of provider engagement in the new model of MDR. Third, because MDR in the new model were conducted at the nurses’ station on each inpatient unit instead of in a conference room, they were well attended by all members of the multidisciplinary team. Fourth, the visibility board gave all team members easy access to visual feedback throughout the day, even if they were not present at MDR. Fifth, we believe the date and time of discharge were estimated more accurately in the new model of MDR because the discussion was facilitated by the case manager, who is experienced in identifying barriers to discharge (compared with the housestaff in the old model of MDR), and included the more experienced attending physician. Finally, the consistent presence of a multidisciplinary team at MDR allowed everyone’s concerns to be incorporated at one time, limiting the need to page multiple disciplines throughout the day and leading to quicker resolution of issues and assignment of pending tasks.

In conclusion, our study shows no change in the mean LOS when the Lean-based model of MDR was utilized. It demonstrates an increase in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod. There was no change in patient satisfaction. While this study was conducted on the general medicine wards of an academic medical center, we believe our new model of MDR, which leveraged Lean management techniques, may successfully improve patient flow in other inpatient clinical services and in nonteaching hospitals.

 

 

Disclosure

The authors report no financial conflicts of interest and have nothing to disclose.


References

1. Townsend-Gervis M, Cornell P, Vardaman JM. Interdisciplinary rounds and structured communication reduce re-admissions and improve some patient outcomes. West J Nurs Res. 2014;36(7):917-928.
2. Vazirani S, Hays RD, Shapiro MF, Cowan M. Effect of a multidisciplinary intervention on communication and collaboration among physicians and nurses. Am J Crit Care. 2005;14(1):71-77.
3. Wertheimer B, Jacobs RE, Bailey M, et al. Discharge before noon: an achievable hospital goal. J Hosp Med. 2014;9(4):210-214.
4. Wertheimer B, Jacobs RE, Iturrate E, Bailey M, Hochman K. Discharge before noon: effect on throughput and sustainability. J Hosp Med. 2015;10(10):664-669.
5. Halm MA, Gagner S, Goering M, Sabo J, Smith M, Zaccagnini M. Interdisciplinary rounds: impact on patients, families, and staff. Clin Nurse Spec. 2003;17(3):133-142.
6. O’Mahony S, Mazur E, Charney P, Wang Y, Fine J. Use of multidisciplinary rounds to simultaneously improve quality outcomes, enhance resident education, and shorten length of stay. J Gen Intern Med. 2007;22(8):1073-1079.
7. Reimer N, Herbener L. Round and round we go: rounding strategies to impact exemplary professional practice. Clin J Oncol Nurs. 2014;18(6):654-660.
8. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(8 Suppl):AS4-AS12.
9. Baggs JG, Ryan SA, Phelps CE, Richeson JF, Johnson JE. The association between interdisciplinary collaboration and patient outcomes in a medical intensive care unit. Heart Lung. 1992;21(1):18-24.
10. Lawal AK, Rotter T, Kinsman L, et al. Lean management in health care: definition, concepts, methodology and effects reported (systematic review protocol). Syst Rev. 2014;3:103.
11. Liker JK. The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer. New York: McGraw-Hill Education; 2004.
12. Kane M, Chui K, Rimicci J, et al. Lean manufacturing improves emergency department throughput and patient satisfaction. J Nurs Adm. 2015;45(9):429-434.
13. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2016. http://www.R-project.org/. Accessed November 7, 2017.
14. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1-48.
15. O’Leary KJ, Buck R, Fligiel HM, et al. Structured interdisciplinary rounds in a medical teaching unit: improving patient safety. Arch Intern Med. 2011;171(7):678-684.


Issue
Journal of Hospital Medicine 13(7)
Page Number
482-485. Published online first February 2, 2018
Article Source

© 2018 Society of Hospital Medicine

Correspondence Location
"Nidhi Rohatgi, MD, MS", 1265 Welch Road, Mail code 5475, Stanford, CA 94305; Telephone: 650-498-4094; Fax: 650-723-8596; E-mail: [email protected]

Resident‐Created Hospitalist Curriculum

A resident‐created hospitalist curriculum for internal medicine housestaff

Hospital medicine has grown tremendously since its inception in the 1990s.[1, 2] This expansion has led to the firm establishment of hospitalists in medical education, quality improvement (QI), research, subspecialty comanagement, and administration.[3, 4, 5]

This growth has also created new challenges. The training needs for the next generation of hospitalists are changing given the expanded clinical duties expected of hospitalists.[6, 7, 8] Prior surveys have suggested that some graduates employed as hospitalists have reported feeling underprepared in the areas of surgical comanagement, neurology, geriatrics, palliative care, and navigating the interdisciplinary care system.[9, 10]

In keeping with national trends, the number of residents interested in hospital medicine at our institution has dramatically increased. As internal medicine residents interested in careers in hospital medicine, we felt that improving hospitalist training at our institution was imperative given the increasing scope of practice and job competitiveness.[11, 12] We therefore sought to design and implement a hospitalist curriculum within our residency. In this article, we describe the genesis of our program, our final product, and the challenges of creating a curriculum while being internal medicine residents.

METHODS

Needs Assessment

To improve hospitalist training at our institution, we first performed a needs assessment. We contacted recent hospitalist graduates and current faculty to identify aspects of their clinical duties that may have been underemphasized during their training. Next, we performed a literature search in PubMed using the combined terms of hospitalist, hospital medicine, residency, education, training gaps, or curriculum. Based on these efforts, we developed a resident survey that assessed their attitudes toward various components of a potential curriculum. The survey was sent to all categorical internal medicine residents at our institution in December 2014. The survey specified that the respondents only include those who were interested in careers in hospital medicine. Responses were measured using a 5‐point Likert scale (1 = least important to 5 = most important).

Curriculum Development

Our intention was to develop a well‐rounded program that utilized mentorship, research, and clinical experience to augment our learner's knowledge and skills for a successful, long‐term career in the increasingly competitive field of hospital medicine. When designing our curriculum, we accounted for our program's current rotational requirements and local culture. Several previously identified underemphasized areas within hospital medicine, such as palliative care and neurology, were already required rotations at our program.[3, 4, 5] Therefore, any proposed curricular changes would need to mold into program requirements while still providing a preparatory experience in hospital medicine beyond what our current rotations offered. We felt this could be accomplished by including rotations that could provide specific skills pertinent to hospital medicine, such as ultrasound diagnostics or QI.

Key Differences in Curriculum Requirements Between Our Internal Medicine Residency Program and the Hospitalist Curriculum

Rotation               | Non‐SHAPE         | SHAPE
ICU                    | At least 12 weeks | At least 16 weeks
Medical wards          | At least 16 weeks | At least 16 weeks
Ultrasound diagnostics | Elective          | Required
Quality improvement    | Elective          | Required
Surgical comanagement  | Elective          | Required
Medicine consult       | Elective          | Required
Neurology              | Required          | Required
Palliative care        | Required          | Required

NOTE: Abbreviations: ICU, intensive care unit; SHAPE, Stanford Hospitalist Advanced Practice and Education.

Meeting With Stakeholders

We presented our curriculum proposal to the chief of the Stanford Hospital Medicine Program. We identified her early in the process to be our primary mentor, and she proved instrumental in being an advocate. After several meetings with the hospitalist group to further develop our program, we presented it to the residency program leadership who helped us to finalize our program.

RESULTS

Needs Assessment

Twenty‐two of the 111 categorical residents in our program (19.8%) identified themselves as interested in hospital medicine and responded to the survey. The residents identified several areas of a potential hospitalist curriculum as important (defined as a 4 or 5 on the 5‐point Likert scale): mentorship (90.9% of residents; mean 4.6, standard deviation [SD] 0.7), opportunities to teach (86.3%; mean 4.4, SD 0.9), and the establishment of a formal hospitalist curriculum (85.7%; mean 4.2, SD 0.8). By the same definition, the rotations rated most beneficial were medicine consult/procedures team (95.5% of residents; mean 4.7, SD 0.6), point‐of‐care ultrasound diagnostics (90.8%; mean 4.7, SD 0.8), and a community hospitalist preceptorship (86.4%; mean 4.4, SD 1.0). Rotations deemed of lesser benefit included inpatient neurology (27.3% of residents; mean 3.2, SD 0.8) and palliative care (50.0%; mean 3.5, SD 1.0).
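The survey summaries above all follow one pattern: the percentage of residents rating an item 4 or 5, plus the mean and SD of the ratings. A minimal sketch of that calculation (with made-up ratings, not the study's raw responses) is:

```python
from statistics import mean, stdev

def summarize_likert(responses):
    """Summarize 5-point Likert responses the way the needs assessment
    reports them: percent rating the item 4 or 5, plus mean and SD."""
    pct_important = 100 * sum(1 for r in responses if r >= 4) / len(responses)
    return round(pct_important, 1), round(mean(responses), 1), round(stdev(responses), 1)

# Hypothetical ratings for one survey item (not the study's actual data)
ratings = [5, 5, 4, 4, 4, 3, 5, 4, 2, 5]
print(summarize_likert(ratings))  # (80.0, 4.1, 1.0)
```

Reporting both the threshold percentage and the mean/SD, as the survey does, guards against a few extreme ratings skewing the impression of overall support.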

The Final Product: A Hospitalist Training Curriculum

Based on the needs assessment and meetings with program leadership, we designed a hospitalist program and named it the Stanford Hospitalist Advanced Practice and Education (SHAPE) program. The program was based on 3 core principles: (1) clinical excellence: by training in hospitalist‐relevant clinical areas, (2) academic development: with required research, QI, and teaching, and (3) career mentorship.

Clinical Excellence By Training in Hospitalist‐Relevant Clinical Areas

The SHAPE curriculum builds on our institution's existing curriculum, adding required rotations to broaden residents' skill sets: ultrasound diagnostics, surgical comanagement, and QI (Box 1). Given that some hospitalists work in an open intensive care unit (ICU), we increased the amount of required ICU time to provide expanded procedural and critical care experience. The residents also receive 10 seminars focused on hospital medicine, including patient safety, QI, and career development (Box 1).

Box

The Stanford Hospitalist Advanced Practice and Education (SHAPE) program curriculum. Members of the program are required to complete the requirements listed before the end of their third year. Note that the clinical rotations are spread over the 3 years of residency.

Stanford Hospitalist Advanced Practice and Education Required Clinical Rotations

  • Medicine Consult (24 weeks)
  • Critical Care (16 weeks)
  • Ultrasound Diagnostics (2 weeks)
  • Quality Improvement (4 weeks)
  • Inpatient Neurology (2 weeks)
  • Palliative Care (2 weeks)
  • Surgical Comanagement (2 weeks)

Required Nonclinical Work

  • Quality improvement, clinical or educational project with a presentation at an academic conference or manuscript submission in a peer‐reviewed journal
  • Enrollment in the Stanford Faculty Development Center workshop on effective clinical teaching
  • Attendance at the hospitalist lecture series (10 lectures): patient safety, hospital efficiency, fundamentals of perioperative medicine, healthcare structure and changing reimbursement patterns, patient handoff, career development, prevention of burnout, inpatient nutrition, hospitalist research, and lean modeling in the hospital setting

Mentorship

  • Each participant is matched with 3 hospitalist mentors in order to provide comprehensive career and personal mentorship

Academic Development With Required Research and Teaching

SHAPE program residents are required to develop a QI, education, or clinical research project before graduation. They are required to present their work at a hospitalist conference or submit to a peer‐reviewed journal. They are also encouraged to attend the Society of Hospital Medicine annual meeting for their own career development.

SHAPE program residents also have increased opportunities to improve their teaching skills. The residents are enrolled in a clinical teaching workshop. Furthermore, the residents are responsible for leading regular lectures regarding common inpatient conditions for first‐ and second‐year medical students enrolled in a transitions‐of‐care elective.

Career Mentorship

Each resident is paired with 3 faculty hospitalists who have different areas of expertise (ie, clinical teaching, surgical comanagement, QI). They individually meet on a quarterly basis to discuss their career development and research projects. The SHAPE program will also host an annual resume‐development and career workshop.

SHAPE Resident Characteristics

In its first year, 13 of 25 residents (52%) interested in hospital medicine enrolled in the program. The SHAPE residents were predominantly second‐year residents (11 residents, 84.6%).

Among the 12 residents who did not enroll, there were 7 seniors (58.3%) who would soon be graduating and would not be eligible.

DISCUSSION

The training needs of aspiring hospitalists are changing as the scope of hospital medicine has expanded.[6] Residency programs can address these evolving needs by implementing a hospitalist curriculum that augments training and provides focused mentorship.[13, 14] An emphasis on resident leadership within these programs helps ensure housestaff buy‐in and satisfaction.

There were several key lessons we learned while designing our curriculum because of our unique role as residents and curriculum founders. This included the early engagement of departmental leadership as mentors. They assisted us in integrating our program within the existing internal medicine residency and the selection of electives. It was also imperative to secure adequate buy‐in from the academic hospitalists at our institution, as they would be our primary source of faculty mentors and lecturers.

A second challenge was balancing curriculum requirements and ensuring adequate buy‐in from our residents. The residents had fewer electives over their second and third years. However, this was balanced by the fact that the residents were given first preference on historically desirable rotations at our institution (including ultrasound, medicine consult, and QI). Furthermore, we purposefully included current resident opinions when performing our needs assessment to ensure adequate buy‐in. Surprisingly, the residents rated several key rotations, such as palliative care and inpatient neurology, as being of low importance in our needs assessment. Although this may seem counterintuitive, several of these rotations (ie, neurology and palliative care) are already required of all residents at our program. It may be that some residents feel comfortable in these areas based on their previous experiences. Alternatively, this result may represent a lack of knowledge on the residents' part of what skill sets are imperative for career hospitalists.[4, 6]

Finally, we recognize that our program was based on our local needs assessment. Other residency programs may already have similar curricula built into their rotation schedule. In those instances, a hospitalist curriculum that emphasizes scholarly advancement and mentorship may be more appropriate.

CONCLUSIONS AND FUTURE DIRECTIONS

At our institution, we have created a hospitalist program designed to train the next generation of hospitalists with improved clinical, research, and teaching skills. Our cohort of residents will be observed over the next year, and we will administer a follow‐up study to assess the effectiveness of the program.

Acknowledgements

The authors acknowledge Karina Delgado, program manager at Stanford's internal medicine residency, for providing data on recent graduate plans.

Disclosures: Andre Kumar, MD, and Andrea Smeraglio, MD, are cofirst authors. The authors report no conflicts of interest.

References
  1. Wachter RM. The hospitalist field turns 15: new opportunities and challenges. J Hosp Med. 2011;6(4):1013.
  2. Glasheen JJ, Epstein KR, Siegal E, Kutner JS, Prochazka AV. The spectrum of community‐based hospitalist practice: a call to tailor internal medicine residency training. Arch Intern Med. 2007;167:727-729.
  3. Pham HH, Devers KJ, Kuo S, Berenson R. Health care market trends and the evolution of hospitalist use and roles. J Gen Intern Med. 2005;20(2):101-107.
  4. Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Survey of the National Association of Inpatient Physicians. Ann Intern Med. 1999:343-349.
  5. Goldenberg J, Glasheen JJ. Hospitalist educators: future of inpatient internal medicine training. Mt Sinai J Med. 2008;75(5):430-435.
  6. Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med. 2008;23(7):1110-1115.
  7. Arora V, Guardiano S, Donaldson D, Storch I, Hemstreet P. Closing the gap between internal medicine training and practice: recommendations from recent graduates. Am J Med. 2005;118(6):680-685.
  8. Chaudhry SI, Lien C, Ehrlich J, et al. Curricular content of internal medicine residency programs: a nationwide report. Am J Med. 2014;127(12):1247-1254.
  9. Plauth WH, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254.
  10. Holmboe ES, Bowen JL, Green M, et al. Reforming internal medicine residency training: a report from the Society of General Internal Medicine's Task Force for Residency Reform. J Gen Intern Med. 2005;20(12):1165-1172.
  11. Goodman PH, Januska A. Clinical hospital medicine fellowships: perspectives of employers, hospitalists, and medicine residents. J Hosp Med. 2008;3(1):28-34.
  12. Flanders SA, Centor B, Weber V, McGinn T, DeSalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Hosp Med. 2009;4(4):240-246.
  13. Glasheen JJ, Goldenberg J, Nelson JR. Achieving hospital medicine's promise through internal medicine residency redesign. Mt Sinai J Med. 2008;75(5):436-441.
  14. Hauer KE, Flanders SA, Wachter RM. Training future hospitalists. West J Med. 1999;171(12):367-370.
Issue
Journal of Hospital Medicine - 11(9)
Page Number
646-649

Hospital medicine has grown tremendously since its inception in the 1990s.[1, 2] This expansion has led to the firm establishment of hospitalists in medical education, quality improvement (QI), research, subspecialty comanagement, and administration.[3, 4, 5]

This growth has also created new challenges. The training needs for the next generation of hospitalists are changing given the expanded clinical duties expected of hospitalists.[6, 7, 8] Prior surveys have suggested that some graduates employed as hospitalists have reported feeling underprepared in the areas of surgical comanagement, neurology, geriatrics, palliative care, and navigating the interdisciplinary care system.[9, 10]

In keeping with national trends, the number of residents interested in hospital medicine at our institution has dramatically increased. As internal medicine residents interested in careers in hospitalist medicine, we felt that improving hospitalist training at our institution was imperative given the increasing scope of practice and job competitiveness.[11, 12] We therefore sought to design and implement a hospitalist curriculum within our residency. In this article, we describe the genesis of our program, our final product, and the challenges of creating a curriculum while being internal medicine residents.

METHODS

Needs Assessment

To improve hospitalist training at our institution, we first performed a needs assessment. We contacted recent hospitalist graduates and current faculty to identify aspects of their clinical duties that may have been underemphasized during their training. Next, we performed a literature search in PubMed using the combined terms of hospitalist, hospital medicine, residency, education, training gaps, or curriculum. Based on these efforts, we developed a resident survey that assessed their attitudes toward various components of a potential curriculum. The survey was sent to all categorical internal medicine residents at our institution in December 2014. The survey specified that the respondents only include those who were interested in careers in hospital medicine. Responses were measured using a 5‐point Likert scale (1 = least important to 5 = most important).

Curriculum Development

Our intention was to develop a well‐rounded program that utilized mentorship, research, and clinical experience to augment our learner's knowledge and skills for a successful, long‐term career in the increasingly competitive field of hospital medicine. When designing our curriculum, we accounted for our program's current rotational requirements and local culture. Several previously identified underemphasized areas within hospital medicine, such as palliative care and neurology, were already required rotations at our program.[3, 4, 5] Therefore, any proposed curricular changes would need to mold into program requirements while still providing a preparatory experience in hospital medicine beyond what our current rotations offered. We felt this could be accomplished by including rotations that could provide specific skills pertinent to hospital medicine, such as ultrasound diagnostics or QI.

Key Differences in Curriculum Requirements Between Our Internal Medicine Residency Program and the Hospitalist Curriculum
Rotation Non‐SHAPE SHAPE
  • NOTE: Abbreviations: ICU, intensive care unit; SHAPE, Stanford Hospitalist Advanced Practice and Education.

ICU At least 12 weeks At least 16 weeks
Medical wards At least 16 weeks At least 16 weeks
Ultrasound diagnostics Elective Required
Quality improvement Elective Required
Surgical comanagement Elective Required
Medicine consult Elective Required
Neurology Required Required
Palliative care Required Required

Meeting With Stakeholders

We presented our curriculum proposal to the chief of the Stanford Hospital Medicine Program. We identified her early in the process to be our primary mentor, and she proved instrumental in being an advocate. After several meetings with the hospitalist group to further develop our program, we presented it to the residency program leadership who helped us to finalize our program.

RESULTS

Needs Assessment

Twenty‐two out of 111 categorical residents in our program (19.8%) identified themselves as interested in hospital medicine and responded to the survey. There were several areas of a potential hospitalist curriculum that the residents identified as important (defined as 4 or 5 on a 5‐point Likert scale). These areas included mentorship (90.9% of residents; mean 4.6, standard deviation [SD] 0.7), opportunities to teach (86.3%; mean 4.4, SD 0.9), and the establishment of a formal hospitalist curriculum (85.7%; mean 4.2, SD 0.8). The residents also identified several rotations that would be beneficial (defined as a 4 or 5 on a 5‐point Likert scale). These included medicine consult/procedures team (95.5% of residents; mean 4.7, SD 0.6), point‐of‐care ultrasound diagnostics (90.8%; mean 4.7, SD 0.8), and a community hospitalist preceptorship (86.4%; mean 4.4, SD 1.0). The residents also identified several rotations deemed to be of lesser benefit. These rotations included inpatient neurology (only 27.3% of residents; mean 3.2, SD 0.8) and palliative care (50.0%; mean 3.5, SD 1.0).

The Final Product: A Hospitalist Training Curriculum

Based on the needs assessment and meetings with program leadership, we designed a hospitalist program and named it the Stanford Hospitalist Advanced Practice and Education (SHAPE) program. The program was based on 3 core principles: (1) clinical excellence: by training in hospitalist‐relevant clinical areas, (2) academic development: with required research, QI, and teaching, and (3) career mentorship.

Clinical Excellence By Training in Hospitalist‐Relevant Clinical Areas

The SHAPE curriculum builds off of our institution's current curriculum with additional required rotations to improve the resident's skillsets. These included ultrasound diagnostics, surgical comanagement, and QI (Box 1). Given that some hospitalists work in an open intensive care unit (ICU), we increased the amount of required ICU time to provide expanded procedural and critical care experiences. The residents also receive 10 seminars focused on hospital medicine, including patient safety, QI, and career development (Box 1).

Box

The Stanford Hospitalist Advanced Practice and Education (SHAPE) program curriculum. Members of the program are required to complete the requirements listed before the end of their third year. Note that the clinical rotations are spread over the 3 years of residency.

Stanford Hospitalist Advanced Practice and Education Required Clinical Rotations

  • Medicine Consult (24 weeks)
  • Critical Care (16 weeks)
  • Ultrasound Diagnostics (2 weeks)
  • Quality Improvement (4 weeks)
  • Inpatient Neurology (2 weeks)
  • Palliative Care (2 weeks)
  • Surgical Comanagement (2 weeks)

Required Nonclinical Work

  • Quality improvement, clinical or educational project with a presentation at an academic conference or manuscript submission in a peer‐reviewed journal
  • Enrollment in the Stanford Faculty Development Center workshop on effective clinical teaching
  • Attendance at the hospitalist lecture series (10 lectures): patient safety, hospital efficiency, fundamentals of perioperative medicine, healthcare structure and changing reimbursement patterns, patient handoff, career development, prevention of burnout, inpatient nutrition, hospitalist research, and lean modeling in the hospital setting

Mentorship

  • Each participant is matched with 3 hospitalist mentors in order to provide comprehensive career and personal mentorship

Academic Development With Required Research and Teaching

SHAPE program residents are required to develop a QI, education, or clinical research project before graduation. They are required to present their work at a hospitalist conference or submit to a peer‐reviewed journal. They are also encouraged to attend the Society of Hospital Medicine annual meeting for their own career development.

SHAPE program residents also have increased opportunities to improve their teaching skills. The residents are enrolled in a clinical teaching workshop. Furthermore, the residents are responsible for leading regular lectures regarding common inpatient conditions for first‐ and second‐year medical students enrolled in a transitions‐of‐care elective.

Career Mentorship

Each resident is paired with 3 faculty hospitalists who have different areas of expertise (ie, clinical teaching, surgical comanagement, QI). They individually meet on a quarterly basis to discuss their career development and research projects. The SHAPE program will also host an annual resume‐development and career workshop.

SHAPE Resident Characteristics

In its first year, 13 of 25 residents (52%) interested in hospital medicine enrolled in the program. The SHAPE residents were predominantly second‐year residents (11 residents, 84.6%).

Among the 12 residents who did not enroll, there were 7 seniors (58.3%) who would soon be graduating and would not be eligible.

DISCUSSION

The training needs of aspiring hospitalists are changing as the scope of hospital medicine has expanded.[6] Residency programs can facilitate this by implementing a hospitalist curriculum that augments training and provides focused mentorship.[13, 14] An emphasis on resident leadership within these programs ensures positive housestaff buy‐in and satisfaction.


Hospital medicine has grown tremendously since its inception in the 1990s.[1, 2] This expansion has led to the firm establishment of hospitalists in medical education, quality improvement (QI), research, subspecialty comanagement, and administration.[3, 4, 5]

This growth has also created new challenges. The training needs for the next generation of hospitalists are changing given the expanded clinical duties expected of hospitalists.[6, 7, 8] Prior surveys have suggested that some graduates employed as hospitalists have reported feeling underprepared in the areas of surgical comanagement, neurology, geriatrics, palliative care, and navigating the interdisciplinary care system.[9, 10]

In keeping with national trends, the number of residents interested in hospital medicine at our institution has dramatically increased. As internal medicine residents interested in careers in hospitalist medicine, we felt that improving hospitalist training at our institution was imperative given the increasing scope of practice and job competitiveness.[11, 12] We therefore sought to design and implement a hospitalist curriculum within our residency. In this article, we describe the genesis of our program, our final product, and the challenges of creating a curriculum while being internal medicine residents.

METHODS

Needs Assessment

To improve hospitalist training at our institution, we first performed a needs assessment. We contacted recent hospitalist graduates and current faculty to identify aspects of their clinical duties that may have been underemphasized during their training. Next, we performed a literature search in PubMed using combinations of the terms hospitalist, hospital medicine, residency, education, training gaps, and curriculum. Based on these efforts, we developed a resident survey assessing attitudes toward various components of a potential curriculum. The survey was sent to all categorical internal medicine residents at our institution in December 2014 and asked that only residents interested in careers in hospital medicine respond. Responses were measured using a 5‐point Likert scale (1 = least important to 5 = most important).

Curriculum Development

Our intention was to develop a well‐rounded program that utilized mentorship, research, and clinical experience to augment our learners' knowledge and skills for a successful, long‐term career in the increasingly competitive field of hospital medicine. When designing our curriculum, we accounted for our program's current rotational requirements and local culture. Several previously identified underemphasized areas within hospital medicine, such as palliative care and neurology, were already required rotations at our program.[3, 4, 5] Therefore, any proposed curricular changes would need to fit within existing program requirements while still providing a preparatory experience in hospital medicine beyond what our current rotations offered. We felt this could be accomplished by including rotations that provide skills specific to hospital medicine, such as ultrasound diagnostics or QI.

Key Differences in Curriculum Requirements Between Our Internal Medicine Residency Program and the Hospitalist Curriculum

Rotation                  Non‐SHAPE            SHAPE
ICU                       At least 12 weeks    At least 16 weeks
Medical wards             At least 16 weeks    At least 16 weeks
Ultrasound diagnostics    Elective             Required
Quality improvement       Elective             Required
Surgical comanagement     Elective             Required
Medicine consult          Elective             Required
Neurology                 Required             Required
Palliative care           Required             Required

NOTE: Abbreviations: ICU, intensive care unit; SHAPE, Stanford Hospitalist Advanced Practice and Education.

Meeting With Stakeholders

We presented our curriculum proposal to the chief of the Stanford Hospital Medicine Program. We identified her early in the process to be our primary mentor, and she proved instrumental in being an advocate. After several meetings with the hospitalist group to further develop our program, we presented it to the residency program leadership who helped us to finalize our program.

RESULTS

Needs Assessment

Twenty‐two out of 111 categorical residents in our program (19.8%) identified themselves as interested in hospital medicine and responded to the survey. There were several areas of a potential hospitalist curriculum that the residents identified as important (defined as 4 or 5 on a 5‐point Likert scale). These areas included mentorship (90.9% of residents; mean 4.6, standard deviation [SD] 0.7), opportunities to teach (86.3%; mean 4.4, SD 0.9), and the establishment of a formal hospitalist curriculum (85.7%; mean 4.2, SD 0.8). The residents also identified several rotations that would be beneficial (defined as a 4 or 5 on a 5‐point Likert scale). These included medicine consult/procedures team (95.5% of residents; mean 4.7, SD 0.6), point‐of‐care ultrasound diagnostics (90.8%; mean 4.7, SD 0.8), and a community hospitalist preceptorship (86.4%; mean 4.4, SD 1.0). The residents also identified several rotations deemed to be of lesser benefit. These rotations included inpatient neurology (only 27.3% of residents; mean 3.2, SD 0.8) and palliative care (50.0%; mean 3.5, SD 1.0).
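The summary statistics reported above (proportion rating an item 4 or 5, mean, standard deviation) are straightforward to compute from raw Likert responses. A minimal sketch, using hypothetical response data since the study's raw responses are not reproduced here:

```python
from statistics import mean, stdev

def summarize_likert(responses):
    """Summarize 5-point Likert responses: % rating 4 or 5, mean, sample SD."""
    pct_important = 100 * sum(r >= 4 for r in responses) / len(responses)
    return round(pct_important, 1), round(mean(responses), 1), round(stdev(responses), 1)

# Hypothetical ratings from 22 residents (1 = least important, 5 = most important)
mentorship_ratings = [5, 5, 4, 5, 4, 4, 5, 5, 4, 5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 3, 5]
pct, avg, sd = summarize_likert(mentorship_ratings)
```

An item is counted as "important" when rated 4 or 5, matching the definition used in the survey analysis above.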

The Final Product: A Hospitalist Training Curriculum

Based on the needs assessment and meetings with program leadership, we designed a hospitalist program and named it the Stanford Hospitalist Advanced Practice and Education (SHAPE) program. The program was based on 3 core principles: (1) clinical excellence: by training in hospitalist‐relevant clinical areas, (2) academic development: with required research, QI, and teaching, and (3) career mentorship.

Clinical Excellence By Training in Hospitalist‐Relevant Clinical Areas

The SHAPE curriculum builds on our institution's existing curriculum, adding required rotations to broaden residents' skill sets. These include ultrasound diagnostics, surgical comanagement, and QI (Box 1). Given that some hospitalists work in an open intensive care unit (ICU), we increased the amount of required ICU time to provide expanded procedural and critical care experiences. The residents also receive 10 seminars focused on hospital medicine, including patient safety, QI, and career development (Box 1).

Box 1

The Stanford Hospitalist Advanced Practice and Education (SHAPE) program curriculum. Members of the program are required to complete the requirements listed before the end of their third year. Note that the clinical rotations are spread over the 3 years of residency.

Stanford Hospitalist Advanced Practice and Education Required Clinical Rotations

  • Medicine Consult (24 weeks)
  • Critical Care (16 weeks)
  • Ultrasound Diagnostics (2 weeks)
  • Quality Improvement (4 weeks)
  • Inpatient Neurology (2 weeks)
  • Palliative Care (2 weeks)
  • Surgical Comanagement (2 weeks)

Required Nonclinical Work

  • Quality improvement, clinical or educational project with a presentation at an academic conference or manuscript submission in a peer‐reviewed journal
  • Enrollment in the Stanford Faculty Development Center workshop on effective clinical teaching
  • Attendance at the hospitalist lecture series (10 lectures): patient safety, hospital efficiency, fundamentals of perioperative medicine, healthcare structure and changing reimbursement patterns, patient handoff, career development, prevention of burnout, inpatient nutrition, hospitalist research, and lean modeling in the hospital setting

Mentorship

  • Each participant is matched with 3 hospitalist mentors in order to provide comprehensive career and personal mentorship

Academic Development With Required Research and Teaching

SHAPE program residents are required to develop a QI, education, or clinical research project before graduation. They are required to present their work at a hospitalist conference or submit to a peer‐reviewed journal. They are also encouraged to attend the Society of Hospital Medicine annual meeting for their own career development.

SHAPE program residents also have increased opportunities to improve their teaching skills. The residents are enrolled in a clinical teaching workshop. Furthermore, the residents are responsible for leading regular lectures regarding common inpatient conditions for first‐ and second‐year medical students enrolled in a transitions‐of‐care elective.

Career Mentorship

Each resident is paired with 3 faculty hospitalists who have different areas of expertise (ie, clinical teaching, surgical comanagement, QI). Residents meet with each mentor individually on a quarterly basis to discuss their career development and research projects. The SHAPE program will also host an annual resume‐development and career workshop.

SHAPE Resident Characteristics

In its first year, 13 of 25 residents (52%) interested in hospital medicine enrolled in the program. The SHAPE residents were predominantly second‐year residents (11 residents, 84.6%).

Among the 12 residents who did not enroll, 7 (58.3%) were seniors who would soon graduate and were therefore not eligible.

DISCUSSION

The training needs of aspiring hospitalists are changing as the scope of hospital medicine has expanded.[6] Residency programs can facilitate this by implementing a hospitalist curriculum that augments training and provides focused mentorship.[13, 14] An emphasis on resident leadership within these programs ensures positive housestaff buy‐in and satisfaction.

We learned several key lessons while designing our curriculum in our dual role as residents and curriculum founders. Chief among them was the value of engaging departmental leadership early as mentors; they helped us integrate our program within the existing internal medicine residency and select electives. It was also imperative to secure adequate buy‐in from the academic hospitalists at our institution, as they would be our primary source of faculty mentors and lecturers.

A second challenge was balancing curriculum requirements against resident buy‐in. Residents in the program would have fewer electives during their second and third years. This was offset by giving them first preference for historically desirable rotations at our institution (including ultrasound, medicine consult, and QI). Furthermore, we purposefully included current resident opinions in our needs assessment to ensure adequate buy‐in. Surprisingly, residents rated several key rotations, such as palliative care and inpatient neurology, as being of low importance in our needs assessment. Although this may seem counterintuitive, several of these rotations (ie, neurology and palliative care) are already required of all residents in our program, and some residents may feel comfortable in these areas based on their previous experiences. Alternatively, this result may reflect a lack of awareness among residents of which skill sets are imperative for career hospitalists.[4, 6]

Finally, we recognize that our program was based on our local needs assessment. Other residency programs may already have similar curricula built into their rotation schedule. In those instances, a hospitalist curriculum that emphasizes scholarly advancement and mentorship may be more appropriate.

CONCLUSIONS AND FUTURE DIRECTIONS

At our institution, we have created a hospitalist program designed to train the next generation of hospitalists with improved clinical, research, and teaching skills. Our cohort of residents will be followed over the next year, and we will administer a follow‐up study to assess the effectiveness of the program.

Acknowledgements

The authors acknowledge Karina Delgado, program manager at Stanford's internal medicine residency, for providing data on recent graduate plans.

Disclosures: Andre Kumar, MD, and Andrea Smeraglio, MD, are cofirst authors. The authors report no conflicts of interest.

References
  1. Wachter RM. The hospitalist field turns 15: new opportunities and challenges. J Hosp Med. 2011;6(4):10-13.
  2. Glasheen JJ, Epstein KR, Siegal E, Kutner JS, Prochazka AV. The spectrum of community‐based hospitalist practice: a call to tailor internal medicine residency training. Arch Intern Med. 2007;167:727-729.
  3. Pham HH, Devers KJ, Kuo S, Berenson R. Health care market trends and the evolution of hospitalist use and roles. J Gen Intern Med. 2005;20(2):101-107.
  4. Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Survey of the National Association of Inpatient Physicians. Ann Intern Med. 1999:343-349.
  5. Goldenberg J, Glasheen JJ. Hospitalist educators: future of inpatient internal medicine training. Mt Sinai J Med. 2008;75(5):430-435.
  6. Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med. 2008;23(7):1110-1115.
  7. Arora V, Guardiano S, Donaldson D, Storch I, Hemstreet P. Closing the gap between internal medicine training and practice: recommendations from recent graduates. Am J Med. 2005;118(6):680-685.
  8. Chaudhry SI, Lien C, Ehrlich J, et al. Curricular content of internal medicine residency programs: a nationwide report. Am J Med. 2014;127(12):1247-1254.
  9. Plauth WH, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254.
  10. Holmboe ES, Bowen JL, Green M, et al. Reforming internal medicine residency training: a report from the Society of General Internal Medicine's Task Force for Residency Reform. J Gen Intern Med. 2005;20(12):1165-1172.
  11. Goodman PH, Januska A. Clinical hospital medicine fellowships: perspectives of employers, hospitalists, and medicine residents. J Hosp Med. 2008;3(1):28-34.
  12. Flanders SA, Centor B, Weber V, McGinn T, DeSalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the Academic Hospital Medicine Summit. J Hosp Med. 2009;4(4):240-246.
  13. Glasheen JJ, Goldenberg J, Nelson JR. Achieving hospital medicine's promise through internal medicine residency redesign. Mt Sinai J Med. 2008;75(5):436-441.
  14. Hauer KE, Flanders SA, Wachter RM. Training future hospitalists. West J Med. 1999;171:367-370.
Issue
Journal of Hospital Medicine - 11(9)
Page Number
646-649
Display Headline
A resident‐created hospitalist curriculum for internal medicine housestaff
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Andre Kumar, MD, Department of Medicine, Stanford University Hospital, 300 Pasteur Drive, Lane 154, Stanford, CA 94305‐5133; Telephone: 650‐723‐6661; Fax: 650‐498‐6205; E‐mail: [email protected]

Telemetry Use for LOS and Cost Reduction

Hospitalist intervention for appropriate use of telemetry reduces length of stay and cost

Inpatient hospital services are a major component of total US civilian noninstitutionalized healthcare expenses, accounting for 29.3% of spending in 2009[1] when the average cost per stay was $9700.[2] Telemetry monitoring, a widely used resource for the identification of life‐threatening arrhythmias, contributes to these costs. In 1998, Sivaram et al. estimated the cost per patient at $683; in 2010, Ivonye et al. published the cost difference between a telemetry bed and a nonmonitored bed in their inner‐city public teaching facility reached $800.[3, 4]

In 1991, the American College of Cardiology published guidelines for telemetry use, which were later revised by the American Heart Association in 2004.[5, 6] Notably, the guidelines are based on expert opinion and on research data in electrocardiography.[7] The guidelines divide patients into 3 classes based on clinical condition: recommending telemetry monitoring for almost all class I patients, stating possible benefit in class II patients, and discouraging cardiac monitoring for the low‐risk class III patients.[5, 6] The Choosing Wisely campaign, an initiative of the American Board of Internal Medicine and the Society of Hospital Medicine, highlights telemetry monitoring as 1 of the top 5 interventions that physicians and patients should question when determining tests and procedures.[8] Choosing Wisely suggests using a protocol to govern continuation of telemetry outside of the intensive care unit (ICU), as inappropriate monitoring increases care costs and may result in patient harm.[8] The Joint Commission's 2014 National Patient Safety Goals note that the noise and displayed information from numerous alarm signals tend to desensitize staff, causing them to miss or ignore alarms or even disable them.[9]

Few studies have examined implementation methods for improved telemetry bed utilization. One study evaluated the impact of a multispecialty telemetry policy enforced by an outside cardiologist and nurse team, noting improved cardiac monitoring bed utilization and decreased academic hospital closure, which had previously resulted in an inability to accept new patients or in procedure cancellations.[10] Another study provided an orientation handout discussed by the chief resident, along with telemetry indication reviews during multidisciplinary rounds 3 times a week.[11]

Our study is one of the first to demonstrate a model for a hospitalist‐led approach to guide appropriate telemetry use. We investigated the impact of a multipronged approach to guide telemetry usage: (1) a hospitalist‐led, daily review of bed utilization during attending rounds, (2) a hospitalist attending‐driven, trainee‐focused education module on telemetry utilization, (3) quarterly feedback on telemetry bed utilization rates, and (4) financial incentives. We analyzed pre‐ and post‐evaluation results from the education module to measure impact on knowledge, skills, and attitudes. Additionally, we evaluated the effect of the intervention on length of stay (LOS) and bed utilization costs, while monitoring case mix index (CMI) and overall mortality.

METHODS

Setting

This study took place at Stanford Hospital and Clinics, an academic teaching center in Stanford, California. Stanford Hospital is a 444‐bed, urban medical center with 114 telemetry/intermediate ICU beds and 66 ICU beds. The 264 medical‐surgical beds lack telemetry monitoring, which is available only in the intermediate and full ICUs. All patients on telemetry units receive both cardiac monitoring and increased nursing ratios. Transfer orders are placed in the electronic medical record to shift patients between care levels. Bed control attempts to transfer patients as soon as an open bed at the appropriate care level exists.

The study included all 5 housestaff inpatient general internal medicine wards teams (which excludes cardiology, pulmonary hypertension, hematology, oncology, and post‐transplant patients). Hospitalists and nonhospitalists attend on the wards for 1‐ to 2‐week blocks. Teaching teams are staffed by 1 to 2 medical students, 2 interns, 1 resident, and 1 attending. The university institutional review board notice of determination waived review for this study because it was classified as quality improvement.

Participants

Ten full‐ and part‐time hospitalist physicians participated in the standardized telemetry teaching. Fifty‐six of the approximately 80 medical students and housestaff on hospitalists' teams completed the educational evaluation. Both hospitalist and nonhospitalist teams participated in daily multidisciplinary rounds, focusing on barriers to discharge including telemetry use. Twelve nonhospitalists served on the wards during the intervention period. Hospitalists covered 72% of the internal medicine wards during the intervention period.

Study Design

We investigated the impact of a multipronged approach to guide telemetry usage from January 2013 to August 2013 (intervention period).

Hospitalist‐Led Daily Review of Bed Utilization

Hospitalists were encouraged to discuss the need of telemetry on daily attending rounds and review indications for telemetry while on service. Prior to starting a ward block, attendings were emailed the teaching module with a reminder to discuss the need for telemetry on attending rounds. Reminders to discuss telemetry utilization were also provided during every‐other‐week hospitalist meetings. Compliance of daily discussion was not tracked.

Hospitalist‐Driven, Trainee‐Focused, Education Module on Telemetry Utilization

The educational module was taught during teaching sessions only by the hospitalists. Trainees on nonhospitalist teams did not receive dedicated teaching about telemetry usage. The module was given to learners only once. The module was a 10‐slide, Microsoft PowerPoint (Microsoft Corp., Redmond, WA) presentation that reviewed the history of telemetry, the American College of Cardiology and the American Heart Association guidelines, the cost difference between telemetry and nonmonitored beds, and the perceived barriers to discontinuation. The presentation was accompanied by a pre‐ and post‐evaluation to elicit knowledge, skills, and attitudes of telemetry use (see Supporting Information, Appendix A, in the online version of this article). The pre‐ and post‐evaluations were created through consensus with a multidisciplinary, expert panel after reviewing the evidence‐based literature.

Quarterly Feedback on Telemetry Bed Utilization Rates

Hospital bed‐use and CMI data were obtained from the Stanford finance department for the intervention period and for the baseline period (the year prior to the study, January 1, 2012 to December 31, 2012). Hospital bed‐use data included the number of days patients were on telemetry units versus medical‐surgical (nontelemetry) units, differentiated by hospitalists and nonhospitalists. Cost savings were calculated by the Stanford finance department, which used Stanford‐specific, internal cost accounting data to determine the impact of the intervention. These data were reviewed at hospitalist meetings on a quarterly basis. We also obtained the University HealthSystem Consortium mortality index (observed to expected) for the general internal medicine service during the baseline and intervention periods.

To measure sustainment of telemetry reduction in the postintervention period, we measured telemetry LOS from September 2014 to March 2015 (extension period).

Financial Incentives

Hospitalists were provided a $2000 bonus at the end of fiscal year 2013 if the group showed a decrease in telemetry bed use in comparison to the baseline period.

Statistical Analysis of Clinical Outcome Measures

Continuous outcomes were tested using 2‐tailed t tests. Comparison of continuous outcome included differences in telemetry and nontelemetry LOS and CMI. Pairwise comparisons were made for various time periods. A P value of <0.05 was considered statistically significant. Statistical analyses were performed using Stata 12.0 software (StataCorp, College Station, TX).
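The pairwise comparisons described above can be sketched with SciPy. The length-of-stay values below are illustrative stand-ins, since patient-level data are not available, and the use of `ttest_ind`'s default pooled-variance (Student's) test is an assumption; the paper does not state whether a pooled or Welch test was used.

```python
from scipy import stats

def compare_los(group_a, group_b, alpha=0.05):
    """Two-tailed, two-sample t test on length-of-stay values (days)."""
    t_stat, p_value = stats.ttest_ind(group_a, group_b)  # pooled variance by default
    return t_stat, p_value, p_value < alpha

# Illustrative LOS samples (days); these are NOT the study's data
baseline_los = [2.5, 3.1, 2.8, 2.6, 3.0, 2.9]
intervention_los = [2.0, 2.2, 2.1, 2.3, 1.9, 2.4]
t, p, significant = compare_los(baseline_los, intervention_los)
```

A P value below 0.05 is flagged as significant, matching the threshold stated above.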

RESULTS

Clinical and Value Outcomes

Baseline (January 2012-December 2012) Versus Intervention Period (January 2013-August 2013)

LOS for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Notably, there was no significant difference in mean LOS between baseline and intervention periods for nontelemetry beds (2.84 days vs 2.72 days, P=0.32) for hospitalists. In comparison, for nonhospitalists, there was no difference in mean LOS for telemetry beds between baseline and intervention periods (2.75 days vs 2.46 days, P=0.33) and nontelemetry beds (2.64 days vs 2.89 days, P=0.26) (Table 1).

Bed Utilization Over Baseline, Intervention, and Extension Time Periods for Hospitalists and Nonhospitalists

                          Baseline   Intervention   P Value   Extension   P Value
Length of stay, d
  Hospitalists
    Telemetry beds          2.75        2.13         0.005      1.93        0.09
    Nontelemetry beds       2.84        2.72         0.324      2.44        0.21
  Nonhospitalists
    Telemetry beds          2.75        2.46         0.331      2.22        0.43
    Nontelemetry beds       2.64        2.89         0.261      2.26        0.05
Case mix index
  Hospitalists              1.44        1.45         0.68       1.40        0.21
  Nonhospitalists           1.46        1.40         0.53       1.53        0.18

NOTE: Length of stay (LOS) for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Nonhospitalists demonstrated no difference in mean LOS for telemetry beds between baseline and intervention periods (2.75 days vs 2.46 days, P=0.33). The results were sustained in the hospitalist group, with a telemetry LOS of 1.93 in the extension period. The mean case mix index managed by the hospitalist and nonhospitalist groups remained unchanged.

Costs of hospital stay were also reduced in the multipronged, hospitalist‐driven intervention group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists (Table 2).

Percent Change in Accommodation Costs Over Baseline to Intervention and Intervention to Extension Periods

                        Baseline to Intervention   Intervention to Extension
Hospitalists
  Telemetry beds                22.55%                      9.55%
  Nontelemetry beds              4.23%                     10.14%
Nonhospitalists
  Telemetry beds                10.55%                      9.89%
  Nontelemetry beds              9.47%                     21.84%

NOTE: Accommodation costs were reduced in the hospitalist group. Expenditures for telemetry beds were reduced by 22.55% over the intervention period for hospitalists.
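The table's values are period-over-period percent changes in accommodation costs. A minimal sketch of that calculation, using invented dollar amounts since the underlying cost data are not published:

```python
def pct_change(baseline_cost, period_cost):
    """Percent change between periods; a negative value indicates a reduction."""
    return round(100 * (period_cost - baseline_cost) / baseline_cost, 2)

# Invented per-period accommodation costs (NOT the study's actual figures)
telemetry_baseline = 1_000_000.0
telemetry_intervention = 775_000.0
change = pct_change(telemetry_baseline, telemetry_intervention)  # -22.5, ie, a 22.5% reduction
```

The text confirms the hospitalist telemetry figure represents a cost reduction of roughly this magnitude.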

The mean CMI of the patient cohort managed by the hospitalists in the baseline and intervention periods was not significantly different (1.44 vs 1.45, P=0.68). The mean CMI of the patients managed by the nonhospitalists in the baseline and intervention periods was also not significantly different (1.46 vs 1.40, P=0.53) (Table 1). The mortality index was not significantly different between the baseline and intervention periods (0.77 ± 0.22 vs 0.66 ± 0.23, P=0.54), nor between the intervention and extension periods (0.66 ± 0.23 vs 0.65 ± 0.15, P=0.95).

Intervention Period (January 2013-August 2013) Versus Extension Period (September 2014-March 2015)

The decreased telemetry LOS for hospitalists was sustained from the intervention period to the extension period, from 2.13 to 1.93 (P=0.09). There was no significant change in the nontelemetry LOS in the intervention period compared to the extension period (2.72 vs 2.44, P=0.21). There was no change in the telemetry LOS for nonhospitalists from the intervention period to the extension period (2.46 vs 2.22, P=0.43).

The mean CMI in the hospitalist group was not significantly different in the intervention period compared to the extension period (1.45 to 1.40, P=0.21). The mean CMI in the nonhospitalist group did not change from the intervention period to the extension period (1.40 vs 1.53, P=0.18) (Table 1).

Education Outcomes

Of the 56 participants completing the education module and survey, 28.6% were medical students, 53.6% were interns, 12.5% were second‐year residents, and 5.4% were third‐year residents. Several findings were seen at baseline via the pretest. In evaluating patterns of current telemetry use, 32.2% of participants reported evaluating the necessity of telemetry for patients on admission only, 26.3% during transitions of care, 5.1% after discharge plans were cemented, 33.1% on a daily basis, and 3.4% rarely. When asked which member of the care team was most likely to encourage appropriate telemetry use, 20.8% identified another resident, 13.9% nursing, 37.5% the attending physician, 20.8% themselves, 4.2% the team as a whole, and 2.8% no one.

Figure 1 shows premodule results regarding trainees' perceived percentage of patient encounters during which their team discussed the patient's need for telemetry.

Figure 1
Premodule, trainee‐perceived percentage of patient encounters for which the team discussed a patient's need for telemetry; N/R, no response.

In assessing perception of current telemetry utilization, 1.8% of participants thought 0% to 10% of patients were currently on telemetry, 19.6% thought 11% to 20%, 42.9% thought 21% to 30%, 30.4% thought 31% to 40%, and 3.6% thought 41% to 50%.

Two areas were assessed at both baseline and after the intervention: knowledge of the indications for telemetry use and of the costs related to telemetry use. We saw increased awareness of cost‐saving actions. To assess knowledge of the indications for proper telemetry use according to the American Heart Association guidelines, participants were presented with a list of 5 patients with different clinical indications for telemetry use and asked which patient required telemetry the most. Of the participants, 54.5% identified the correct answer in the pretest and 61.8% in the post‐test. To assess knowledge of the costs of telemetry relative to other patient care, participants were presented with a patient case and asked to identify the most and least cost‐saving actions to safely care for the patient. When asked to identify the most cost‐saving action, 20.3% identified the correct answer in the pretest and 61.0% in the post‐test; of those who answered incorrectly in the pretest, 51.1% answered correctly in the post‐test (P=0.002). When asked to identify the least cost‐saving action, 23.7% identified the correct answer in the pretest and 50.9% in the post‐test; of those who answered incorrectly in the pretest, 60.0% answered correctly in the post‐test (P=0.003).
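The pre/post comparisons above pair each trainee's pretest answer with their post-test answer, but the source does not name the statistical test used. One standard choice for paired binary outcomes is McNemar's exact test, which applies a two-sided binomial test to the discordant pairs; a sketch with hypothetical counts:

```python
from scipy.stats import binomtest

def mcnemar_exact(wrong_to_right, right_to_wrong):
    """Exact McNemar test: two-sided binomial test on the discordant pairs."""
    n_discordant = wrong_to_right + right_to_wrong
    return binomtest(wrong_to_right, n_discordant, p=0.5).pvalue

# Hypothetical discordant counts: 12 trainees improved, 2 worsened
p_value = mcnemar_exact(12, 2)
```

Concordant pairs (same answer before and after) carry no information about change and are excluded by design.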

In the post-test, when asked about the importance of appropriate telemetry usage in providing cost-conscious care and assuring appropriate hospital resource management, 76.8% of participants rated it very important, 21.4% somewhat important, and 1.8% not applicable. The most commonly perceived barriers impeding discontinuation of telemetry, as reported by participants in the post-test, were nursing desires and time. Figure 2 shows all perceived barriers.

Figure 2
Postmodule, trainee‐perceived barriers to discontinuation of telemetry.

DISCUSSION

To our knowledge, our study is one of the first to demonstrate reductions in telemetry LOS through a hospitalist-driven intervention for telemetry utilization. Others[10, 11] have studied the impact of an orientation handout by chief residents or a multispecialty telemetry policy enforced by an outside cardiologist and nurse team. Dressler et al. later sustained a 70% reduction in telemetry use without adversely affecting patient safety, as assessed through numbers of rapid response activations, codes, and deaths, by integrating the AHA guidelines into their electronic ordering system.[12] Our study has the advantage that the primary team, which knows the patient and clinical scenario best, drives the change during attending rounds. In an era in which cost consciousness intersects the practice of medicine, any intervention that demonstrates cost savings without an adverse impact on patient care and resource utilization must be emphasized. This is particularly important in academic institutions, where residents and medical students are learning to integrate the principles of patient safety and quality improvement into their clinical practice.[13] We showed sustained telemetry LOS reductions into the extension period after our intervention, which we believe reflects the integration of telemetry triage into our attending and resident rounding practices. Future work should include integration of telemetry triage into clinical decision support in the electronic medical record and into multidisciplinary rounds to disseminate telemetry triage hospital-wide in both academic and community settings.

Our study also revealed that nearly half of participants were not aware of the criteria for appropriate utilization of telemetry before our intervention; in the preintervention period, there were many anecdotal and objective findings of inappropriate utilization of telemetry, as well as prolonged continuation beyond the clinical need, in both the hospitalist and nonhospitalist groups. For the hospitalist group (ie, the group receiving guideline-based education on appropriate indications for telemetry utilization), there was an improvement in both appropriate usage and timely discontinuation of telemetry in the postintervention period, which we attribute in large part to adherence to the education provided to this group.

We were able to show increased knowledge of cost-saving actions among trainees with our educational module. We believe it is imperative to educate our providers (physicians, nurses, case managers, and students within these disciplines) on the appropriate indications for telemetry use, not only to help with cost savings and resource availability (ie, allowing telemetry beds to be available for the patients who need them most), but also to instill consistent expectations among our patients.

Additionally, we feel it is important to consider the impacts of inappropriate use of telemetry from the patient's perspective: it is physically restrictive and inconvenient, its alarms are disruptive, it can be a barrier to other treatments such as physical therapy, it may increase the time it takes to obtain imaging studies, a nurse may be required to accompany patients on telemetry, and it adds costs to the medical bill.

We believe our success is due to several strategies. First, at the start of the fiscal year when quality improvement metrics are established, this particular metric (improving the appropriate utilization and timely discontinuation of telemetry) was deemed important by all hospitalists, engendering group buy‐in prior to the intervention. Our hospitalists received a detailed and interactive tutorial session in person at the beginning of the study. This tutorial provided the hospitalists with a comprehensive understanding of the appropriate (and inappropriate) indications for telemetry monitoring, hence facilitating guideline‐directed utilization. Email reminders and the tutorial tool were provided each time a hospitalist attended on the wards, and hospitalists received a small financial incentive to comply with appropriate telemetry utilization.

Our study has several strengths. First, the time frame of our study was long enough (8 months) to allow consistent trends to emerge and to optimize exposure of housestaff and medical students to this quality-improvement initiative. Second, our cost savings came from 2 factors: direct reduction of inappropriate telemetry use and reduction in length of stay, highlighting the dual impact of appropriate telemetry utilization on cost. The overall reductions in telemetry utilization for the intervention group resulted both from reductions in initial placement on telemetry for patients who did not meet criteria for such monitoring and from timely discontinuation of telemetry during the patient's hospitalization. Third, our study demonstrates that physicians can be effective in driving appropriate telemetry usage by participating in the clinical decision making regarding necessity and by educating providers, trainees/students, and patients on appropriate indications. Finally, we showed sustainment of our intervention in the extension period, suggesting integration of telemetry triage into rounding practice.

Our study has limitations as well. First, our sample size was relatively small, and the study was conducted at a single academic center. Second, due to complexities in our faculty scheduling, we were unable to completely randomize patients to a hospitalist versus nonhospitalist team. However, we believe that despite the inability to randomize, our study does show the benefit of a hospitalist attending in reducing telemetry LOS, given that there was no change in nonhospitalist telemetry LOS despite all of the other hospital-wide interventions (multidisciplinary rounds, similar housestaff). Third, our study was limited in that the CMI was used as a proxy for patient complexity, and the mortality index was used as the overall marker of safety. Further studies should monitor the frequency and outcomes of arrhythmic events in patients transferred from telemetry monitoring to medical-surgical beds. Finally, as the intervention was multipronged, we are unable to determine which component led to the reductions in telemetry utilization. Each component, however, remains easily transferable to outside institutions. We demonstrated both a reduction in initiation of telemetry and timely discontinuation; however, due to the complexity of capturing this accurately, we were unable to numerically quantify these individual outcomes.

Additionally, there were approximately 10 nonhospitalist attendings who also staffed the wards during the intervention time period of our study; these attendings did not undergo the telemetry tutorial/orientation. This difference, along with the Hawthorne effect for the hospitalist attendings, also likely contributed to the difference in outcomes between the 2 attending cohorts in the intervention period.

CONCLUSIONS

Our results demonstrate that a multipronged hospitalist‐driven intervention to improve appropriate use of telemetry reduces telemetry LOS and cost. Hence, we believe that targeted, education‐driven interventions with monitoring of progress can have demonstrable impacts on changing practice. Physicians will need to make trade‐offs in clinical practice to balance efficient resource utilization with the patient's evolving condition in the inpatient setting, the complexities of clinical workflow, and the patient's expectations.[14] Appropriate telemetry utilization is a prime example of what needs to be done well in the future for high‐value care.

Acknowledgements

The authors acknowledge the hospitalists who participated in the intervention: Jeffrey Chi, William Daines, Sumbul Desai, Poonam Hosamani, John Kugler, Charles Liao, Errol Ozdalga, and Sang Hoon Woo. The authors also acknowledge Joan Hendershott in the Finance Department and Joseph Hopkins in the Quality Department.

Disclosures: All coauthors have seen and agree with the contents of the article; submission (aside from abstracts) was not under review by any other publication. The authors report no disclosures of financial support from, or equity positions in, manufacturers of drugs or products mentioned in the article.

References
  1. Kashihara D, Carper K. National health care expenses in the U.S. civilian noninstitutionalized population, 2009. Statistical brief 355. 2012. Agency for Healthcare Research and Quality, Rockville, MD.
  2. Pfuntner A, Wier L, Steiner C. Costs for hospital stays in the United States, 2010. Statistical brief 146. 2013. Agency for Healthcare Research and Quality, Rockville, MD.
  3. Sivaram CA, Summers JH, Ahmed N. Telemetry outside critical care units: patterns of utilization and influence on management decisions. Clin Cardiol. 1998;21(7):503-505.
  4. Ivonye C, Ohuabunwo C, Henriques-Forsythe M, et al. Evaluation of telemetry utilization, policy, and outcomes in an inner-city academic medical center. J Natl Med Assoc. 2010;102(7):598-604.
  5. Jaffe AS, Atkins JM, Field JM. Recommended guidelines for in-hospital cardiac monitoring of adults for detection of arrhythmia. Emergency Cardiac Care Committee members. J Am Coll Cardiol. 1991;18(6):1431-1433.
  6. Drew BJ, Califf RM, Funk M, et al. Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young: endorsed by the International Society of Computerized Electrocardiology and the American Association of Critical-Care Nurses. Circulation. 2004;110(17):2721-2746.
  7. Henriques-Forsythe MN, Ivonye CC, Jamched U, Kamuguisha LK, Olejeme KA, Onwuanyi AE. Is telemetry overused? Is it as helpful as thought? Cleve Clin J Med. 2009;76(6):368-372.
  8. Society of Hospital Medicine. Adult Hospital Medicine. Five things physicians and patients should question. Available at: http://www.choosingwisely.org/societies/society‐of‐hospital‐medicine‐adult. Published February 21, 2013. Accessed October 5, 2014.
  9. Joint Commission on Accreditation of Healthcare Organizations. The Joint Commission announces 2014 national patient safety goal. Jt Comm Perspect. 2013;33(7):1-4.
  10. Lee JC, Lamb P, Rand E, Ryan C, Rubel B. Optimizing telemetry utilization in an academic medical center. J Clin Outcomes Manage. 2008;15(9):435-440.
  11. Silverstein N, Silverman A. Improving utilization of telemetry in a university hospital. J Clin Outcomes Manage. 2005;12(10):519-522.
  12. Dressler R, Dryer MM, Coletti C, Mahoney D, Doorey AJ. Altering overuse of cardiac telemetry in non-intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174:1852-1854.
  13. Pines JM, Farmer SA, Akman JS. "Innovation" institutes in academic health centers: enhancing value through leadership, education, engagement, and scholarship. Acad Med. 2014;89(9):1204-1206.
  14. Sabbatini AK, Tilburt JC, Campbell EG, Sheeler RD, Egginton JS, Goold SD. Controlling health costs: physician responses to patient expectations for medical care. J Gen Intern Med. 2014;29(9):1234-1241.
Journal of Hospital Medicine - 10(9), 627-632.

Inpatient hospital services are a major component of total US civilian noninstitutionalized healthcare expenses, accounting for 29.3% of spending in 2009,[1] when the average cost per stay was $9700.[2] Telemetry monitoring, a widely used resource for the identification of life-threatening arrhythmias, contributes to these costs. In 1998, Sivaram et al. estimated the cost per telemetry patient at $683; in 2010, Ivonye et al. reported that the cost difference between a telemetry bed and a nonmonitored bed in their inner-city public teaching facility had reached $800.[3, 4]
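The cost figures above suggest a simple back-of-the-envelope estimate of the incremental cost of telemetry. The sketch below is purely illustrative: it assumes the ~$800 gap cited from Ivonye et al. behaves as a per-day bed-cost difference (bed charges typically accrue daily), and the 3-day stay is hypothetical.

```python
# Illustrative sketch only: incremental accommodation cost of a telemetry
# bed versus a nonmonitored bed. The $800 figure is the cited bed-cost
# difference (assumed per day here); the stay length is hypothetical.
TELEMETRY_PREMIUM_PER_DAY = 800  # USD; assumption based on Ivonye et al.

def incremental_telemetry_cost(days_on_telemetry: float) -> float:
    """Extra cost attributable to keeping a patient on a telemetry bed."""
    return days_on_telemetry * TELEMETRY_PREMIUM_PER_DAY

# A hypothetical 3-day telemetry stay adds roughly $2,400 over a
# medical-surgical bed; each avoided telemetry day saves about $800.
print(incremental_telemetry_cost(3))  # 2400
```

Under this assumption, even modest reductions in telemetry days per patient compound into substantial savings across a ward service.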

In 1991, the American College of Cardiology published guidelines for telemetry use, which were later revised by the American Heart Association in 2004.[5, 6] Notably, the guidelines are based on expert opinion and on research data in electrocardiography.[7] The guidelines divide patients into 3 classes based on clinical condition: recommending telemetry monitoring for almost all class I patients, stating possible benefit in class II patients, and discouraging cardiac monitoring for low-risk class III patients.[5, 6] The Choosing Wisely campaign, an initiative of the American Board of Internal Medicine and the Society of Hospital Medicine, highlights telemetry monitoring as 1 of the top 5 interventions that physicians and patients should question when determining tests and procedures.[8] Choosing Wisely suggests using a protocol to govern continuation of telemetry outside of the intensive care unit (ICU), as inappropriate monitoring increases care costs and may result in patient harm.[8] The Joint Commission's 2014 National Patient Safety Goals note that numerous alarm signals, and the resulting noise and displayed information, tend to desensitize staff and cause them to miss or ignore alarm signals or even disable them.[9]

Few studies have examined implementation methods for improving telemetry bed utilization. One study evaluated the impact of a multispecialty telemetry policy enforced by an outside cardiologist and nurse team, noting improved cardiac monitoring bed utilization and fewer academic hospital closures, which had previously resulted in the inability to accept new patients or in procedure cancellations.[10] Another study provided an orientation handout discussed by the chief resident, along with telemetry indication reviews during multidisciplinary rounds 3 times a week.[11]

Our study is one of the first to demonstrate a model for a hospitalist-led approach to guiding appropriate telemetry use. We investigated the impact of a multipronged approach to guide telemetry usage: (1) a hospitalist-led, daily review of bed utilization during attending rounds; (2) a hospitalist attending-driven, trainee-focused education module on telemetry utilization; (3) quarterly feedback on telemetry bed utilization rates; and (4) financial incentives. We analyzed pre- and post-evaluation results from the education module to measure impact on knowledge, skills, and attitudes. Additionally, we evaluated the effect of the intervention on length of stay (LOS) and bed utilization costs, while monitoring case mix index (CMI) and overall mortality.

METHODS

Setting

This study took place at Stanford Hospital and Clinics, an academic teaching center in Stanford, California. Stanford Hospital is a 444-bed, urban medical center with 114 telemetry/intermediate ICU beds and 66 ICU beds. The remaining 264 medical-surgical beds lack telemetry monitoring, which is available only in the intermediate and full ICUs. All patients on telemetry units receive both cardiac monitoring and increased nursing ratios. Transfer orders are placed in the electronic medical record to shift patients between care levels. Bed control attempts to transfer patients as soon as an open bed at the appropriate care level exists.

The study included all 5 housestaff inpatient general internal medicine ward teams (which exclude cardiology, pulmonary hypertension, hematology, oncology, and post-transplant patients). Hospitalists and nonhospitalists attend on the wards for 1- to 2-week blocks. Teaching teams are staffed by 1 to 2 medical students, 2 interns, 1 resident, and 1 attending. The university institutional review board's notice of determination waived review for this study because it was classified as quality improvement.

Participants

Ten full‐ and part‐time hospitalist physicians participated in the standardized telemetry teaching. Fifty‐six of the approximately 80 medical students and housestaff on hospitalists' teams completed the educational evaluation. Both hospitalist and nonhospitalist teams participated in daily multidisciplinary rounds, focusing on barriers to discharge including telemetry use. Twelve nonhospitalists served on the wards during the intervention period. Hospitalists covered 72% of the internal medicine wards during the intervention period.

Study Design

We investigated the impact of a multipronged approach to guide telemetry usage from January 2013 to August 2013 (intervention period).

Hospitalist‐Led Daily Review of Bed Utilization

Hospitalists were encouraged to discuss the need for telemetry on daily attending rounds and to review indications for telemetry while on service. Prior to starting a ward block, attendings were emailed the teaching module with a reminder to discuss the need for telemetry on attending rounds. Reminders to discuss telemetry utilization were also provided during every-other-week hospitalist meetings. Compliance with daily discussion was not tracked.

Hospitalist‐Driven, Trainee‐Focused, Education Module on Telemetry Utilization

The educational module was taught during teaching sessions only by the hospitalists. Trainees on nonhospitalist teams did not receive dedicated teaching about telemetry usage. The module was given to learners only once. The module was a 10‐slide, Microsoft PowerPoint (Microsoft Corp., Redmond, WA) presentation that reviewed the history of telemetry, the American College of Cardiology and the American Heart Association guidelines, the cost difference between telemetry and nonmonitored beds, and the perceived barriers to discontinuation. The presentation was accompanied by a pre‐ and post‐evaluation to elicit knowledge, skills, and attitudes of telemetry use (see Supporting Information, Appendix A, in the online version of this article). The pre‐ and post‐evaluations were created through consensus with a multidisciplinary, expert panel after reviewing the evidence‐based literature.

Quarterly Feedback on Telemetry Bed Utilization Rates

Hospital bed-use and CMI data were obtained from the Stanford finance department for the intervention period and for the baseline period, defined as the year prior to the study (January 1, 2012 to December 31, 2012). Hospital bed-use data included the number of days patients spent on telemetry units versus medical-surgical (nontelemetry) units, differentiated by hospitalists and nonhospitalists. Cost savings were calculated by the Stanford finance department, which used Stanford-specific, internal cost accounting data to determine the impact of the intervention. These data were reviewed at hospitalist meetings on a quarterly basis. We also obtained the University HealthSystem Consortium mortality index (observed to expected) for the general internal medicine service during the baseline and intervention periods.

To measure sustainment of telemetry reduction in the postintervention period, we measured telemetry LOS from September 2014 to March 2015 (extension period).

Financial Incentives

Hospitalists were provided a $2000 bonus at the end of fiscal year 2013 if the group showed a decrease in telemetry bed use in comparison to the baseline period.

Statistical Analysis of Clinical Outcome Measures

Continuous outcomes were tested using 2-tailed t tests. Comparisons of continuous outcomes included differences in telemetry and nontelemetry LOS and in CMI. Pairwise comparisons were made between time periods. A P value of <0.05 was considered statistically significant. Statistical analyses were performed using Stata 12.0 software (StataCorp, College Station, TX).
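As a concrete illustration of the comparison described above, a pooled-variance two-sample t statistic can be computed as follows. This is a minimal sketch, not the study's actual code (the analysis was performed in Stata), and the LOS samples shown are hypothetical.

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (for a 2-tailed test).

    Degrees of freedom are len(a) + len(b) - 2; the p-value would be
    looked up from the t distribution (done by Stata in the study).
    """
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var * (1 / na + 1 / nb))

# Hypothetical per-patient telemetry LOS samples (days), baseline vs intervention
baseline = [2.1, 3.4, 2.9, 2.6, 3.8, 2.2]
intervention = [1.9, 2.4, 2.1, 1.8, 2.6, 2.0]
t_stat = two_sample_t(baseline, intervention)  # positive: baseline mean is larger
```

In the study, this comparison was applied to mean telemetry and nontelemetry LOS and to CMI across the baseline, intervention, and extension periods.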

RESULTS

Clinical and Value Outcomes

Baseline (January 2012–December 2012) Versus Intervention Period (January 2013–August 2013)

LOS for telemetry beds was significantly reduced over the intervention period for hospitalists (2.75 days vs 2.13 days, P=0.005). Notably, there was no significant difference in mean LOS for hospitalists' nontelemetry beds between the baseline and intervention periods (2.84 days vs 2.72 days, P=0.32). In comparison, for nonhospitalists, there was no difference in mean LOS between the baseline and intervention periods for either telemetry beds (2.75 days vs 2.46 days, P=0.33) or nontelemetry beds (2.64 days vs 2.89 days, P=0.26) (Table 1).

Table 1. Bed Utilization Over Baseline, Intervention, and Extension Time Periods for Hospitalists and Nonhospitalists

                       Baseline   Intervention   P Value   Extension   P Value
Length of stay, d
 Hospitalists
  Telemetry beds         2.75         2.13        0.005       1.93       0.09
  Nontelemetry beds      2.84         2.72        0.324       2.44       0.21
 Nonhospitalists
  Telemetry beds         2.75         2.46        0.331       2.22       0.43
  Nontelemetry beds      2.64         2.89        0.261       2.26       0.05
Case mix index
 Hospitalists            1.44         1.45        0.68        1.40       0.21
 Nonhospitalists         1.46         1.40        0.53        1.53       0.18

NOTE: Length of stay (LOS) for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Nonhospitalists demonstrated no difference in mean LOS for telemetry beds between the baseline and intervention periods (2.75 days vs 2.46 days, P=0.33). The results were sustained in the hospitalist group, with a telemetry LOS of 1.93 days in the extension period. The mean case mix index managed by the hospitalist and nonhospitalist groups remained unchanged.

Costs of hospital stay were also reduced in the multipronged, hospitalist‐driven intervention group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists (Table 2).

Table 2. Percent Change in Accommodation Costs Over Baseline to Intervention and Intervention to Extension Periods

                       Baseline to Intervention   Intervention to Extension
Hospitalists
  Telemetry beds               22.55%                      9.55%
  Nontelemetry beds             4.23%                     10.14%
Nonhospitalists
  Telemetry beds               10.55%                      9.89%
  Nontelemetry beds             9.47%                     21.84%

NOTE: Accommodation costs were reduced in the hospitalist group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists.
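The percent changes reported above follow the usual relative-change formula. The sketch below is illustrative only; it also shows that the hospitalists' drop in telemetry LOS from Table 1 (2.75 to 2.13 days) is itself a roughly 22.5% reduction, consistent with the reported telemetry accommodation-cost reduction, since bed costs scale with bed-days.

```python
def pct_change(old: float, new: float) -> float:
    """Relative change from an earlier to a later period, as a percentage."""
    return (new - old) / old * 100

# Hospitalist telemetry LOS, baseline -> intervention (days, from Table 1)
change = pct_change(2.75, 2.13)
print(round(change, 1))  # -22.5: a ~22.5% reduction in telemetry bed-days,
                         # mirroring the 22.55% drop in accommodation costs
```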

The mean CMI of the patient cohort managed by the hospitalists in the baseline and intervention periods was not significantly different (1.44 vs 1.45, P=0.68). The mean CMI of the patients managed by the nonhospitalists in the baseline and intervention periods was also not significantly different (1.46 vs 1.40, P=0.53) (Table 1). The mortality index was not significantly different between the baseline and intervention periods (0.77 ± 0.22 vs 0.66 ± 0.23, P=0.54), nor between the intervention and extension periods (0.66 ± 0.23 vs 0.65 ± 0.15, P=0.95).

Intervention Period (January 2013–August 2013) Versus Extension Period (September 2014–March 2015)

The decreased telemetry LOS for hospitalists was sustained from the intervention period to the extension period, from 2.13 to 1.93 (P=0.09). There was no significant change in the nontelemetry LOS in the intervention period compared to the extension period (2.72 vs 2.44, P=0.21). There was no change in the telemetry LOS for nonhospitalists from the intervention period to the extension period (2.46 vs 2.22, P=0.43).

The mean CMI in the hospitalist group was not significantly different in the intervention period compared to the extension period (1.45 to 1.40, P=0.21). The mean CMI in the nonhospitalist group did not change from the intervention period to the extension period (1.40 vs 1.53, P=0.18) (Table 1).


Acknowledgements

The authors acknowledge the hospitalists who participated in the intervention: Jeffrey Chi, Willliam Daines, Sumbul Desai, Poonam Hosamani, John Kugler, Charles Liao, Errol Ozdalga, and Sang Hoon Woo. The authors also acknowledge Joan Hendershott in the Finance Department and Joseph Hopkins in the Quality Department.

Disclosures: All coauthors have seen and agree with the contents of the article; submission (aside from abstracts) was not under review by any other publication. The authors report no disclosures of financial support from, or equity positions in, manufacturers of drugs or products mentioned in the article.

Inpatient hospital services are a major component of total US civilian noninstitutionalized healthcare expenses, accounting for 29.3% of spending in 2009,[1] and in 2010 the average cost per hospital stay was $9,700.[2] Telemetry monitoring, a widely used resource for the identification of life‐threatening arrhythmias, contributes to these costs. In 1998, Sivaram et al. estimated the cost per telemetry patient at $683; in 2010, Ivonye et al. reported that the cost difference between a telemetry bed and a nonmonitored bed in their inner‐city public teaching facility reached $800.[3, 4]

In 1991, the American College of Cardiology published guidelines for telemetry use, which were later revised by the American Heart Association in 2004.[5, 6] Notably, the guidelines are based on expert opinion and on research data in electrocardiography.[7] The guidelines divide patients into 3 classes based on clinical condition: recommending telemetry monitoring for almost all class I patients, stating possible benefit in class II patients, and discouraging cardiac monitoring for low‐risk class III patients.[5, 6] The Choosing Wisely campaign, an initiative of the American Board of Internal Medicine and the Society of Hospital Medicine, highlights telemetry monitoring as 1 of the top 5 interventions that physicians and patients should question when determining tests and procedures.[8] Choosing Wisely suggests using a protocol to govern continuation of telemetry outside of the intensive care unit (ICU), as inappropriate monitoring increases care costs and may result in patient harm.[8] The Joint Commission's 2014 National Patient Safety Goals note that numerous alarm signals, and the resulting noise and displayed information, tend to desensitize staff and cause them to miss, ignore, or even disable alarms.[9]

Few studies have examined implementation methods for improved telemetry bed utilization. One study evaluated the impact of a multispecialty telemetry policy with enforcement by an outside cardiologist and nurse team, noting improved cardiac monitoring bed utilization and decreased academic hospital closure, which previously resulted in inability to accept new patients or procedure cancellation.[10] Another study provided an orientation handout discussed by the chief resident and telemetry indication reviews during multidisciplinary rounds 3 times a week.[11]

Our study is one of the first to demonstrate a model for a hospitalist‐led approach to guide appropriate telemetry use. We investigated the impact of a multipronged approach to guide telemetry usage: (1) a hospitalist‐led, daily review of bed utilization during attending rounds, (2) a hospitalist attending‐driven, trainee‐focused education module on telemetry utilization, (3) quarterly feedback on telemetry bed utilization rates, and (4) financial incentives. We analyzed pre‐ and post‐evaluation results from the education module to measure impact on knowledge, skills, and attitudes. Additionally, we evaluated the effect of the intervention on length of stay (LOS) and bed utilization costs, while monitoring case mix index (CMI) and overall mortality.

METHODS

Setting

This study took place at Stanford Hospital and Clinics, an academic teaching center in Stanford, California. Stanford Hospital is a 444‐bed, urban medical center with 114 telemetry/intermediate ICU beds and 66 ICU beds. The 264 medical‐surgical beds lack telemetry monitoring, which is available only in the intermediate and full ICUs. All patients on telemetry units receive both cardiac monitoring and increased nursing ratios. Transfer orders are placed in the electronic medical record to move patients between care levels. Bed control attempts to transfer patients as soon as a bed at the appropriate care level opens.

The study included all 5 housestaff inpatient general internal medicine wards teams (which excludes cardiology, pulmonary hypertension, hematology, oncology, and post‐transplant patients). Hospitalists and nonhospitalists attend on the wards for 1‐ to 2‐week blocks. Teaching teams are staffed by 1 to 2 medical students, 2 interns, 1 resident, and 1 attending. The university institutional review board notice of determination waived review for this study because it was classified as quality improvement.

Participants

Ten full‐ and part‐time hospitalist physicians participated in the standardized telemetry teaching. Fifty‐six of the approximately 80 medical students and housestaff on hospitalists' teams completed the educational evaluation. Both hospitalist and nonhospitalist teams participated in daily multidisciplinary rounds, focusing on barriers to discharge including telemetry use. Twelve nonhospitalists served on the wards during the intervention period. Hospitalists covered 72% of the internal medicine wards during the intervention period.

Study Design

We investigated the impact of a multipronged approach to guide telemetry usage from January 2013 to August 2013 (intervention period).

Hospitalist‐Led Daily Review of Bed Utilization

Hospitalists were encouraged to discuss the need for telemetry on daily attending rounds and to review indications for telemetry while on service. Prior to starting a ward block, attendings were emailed the teaching module with a reminder to discuss the need for telemetry on attending rounds. Reminders to discuss telemetry utilization were also provided during every‐other‐week hospitalist meetings. Compliance with daily discussion was not tracked.

Hospitalist‐Driven, Trainee‐Focused, Education Module on Telemetry Utilization

The educational module was taught during teaching sessions by the hospitalists only. Trainees on nonhospitalist teams did not receive dedicated teaching about telemetry usage. The module was given to learners only once. It was a 10‐slide Microsoft PowerPoint (Microsoft Corp., Redmond, WA) presentation that reviewed the history of telemetry, the American College of Cardiology and American Heart Association guidelines, the cost difference between telemetry and nonmonitored beds, and perceived barriers to discontinuation. The presentation was accompanied by pre‐ and post‐evaluations to elicit knowledge, skills, and attitudes regarding telemetry use (see Supporting Information, Appendix A, in the online version of this article). The pre‐ and post‐evaluations were created by consensus of a multidisciplinary expert panel after review of the evidence‐based literature.

Quarterly Feedback on Telemetry Bed Utilization Rates

Hospital bed‐use and CMI data were obtained from the Stanford finance department for the intervention period and for the baseline period, the year prior to the study (January 1, 2012 to December 31, 2012). Hospital bed‐use data included the number of days patients were on telemetry units versus medical‐surgical (nontelemetry) units, differentiated by hospitalists and nonhospitalists. Cost savings were calculated by the Stanford finance department, which used Stanford‐specific, internal cost accounting data to determine the impact of the intervention. These data were reviewed at hospitalist meetings on a quarterly basis. We also obtained the University HealthSystem Consortium mortality index (observed to expected) for the general internal medicine service during the baseline and intervention periods.

To measure sustainment of telemetry reduction in the postintervention period, we measured telemetry LOS from September 2014 to March 2015 (extension period).

Financial Incentives

Hospitalists were provided a $2000 bonus at the end of fiscal year 2013 if the group showed a decrease in telemetry bed use in comparison to the baseline period.

Statistical Analysis of Clinical Outcome Measures

Continuous outcomes were tested using 2‐tailed t tests. Comparisons of continuous outcomes included differences in telemetry and nontelemetry LOS and in CMI. Pairwise comparisons were made across time periods. A P value of <0.05 was considered statistically significant. Statistical analyses were performed using Stata 12.0 software (StataCorp, College Station, TX).
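To make the comparison concrete, the snippet below sketches a two-sample t statistic of the kind used above. It uses Welch's unequal-variance form and invented LOS values purely for illustration; the study's own analysis was done in Stata, and these are not the study's patient-level data.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic: difference in sample means divided by
    the combined standard error, without assuming equal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)  # sample variance of b
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

# Illustrative (invented) telemetry LOS samples, in days
baseline = [2.5, 3.0, 2.75]
intervention = [2.0, 2.25, 2.15]
t = welch_t(baseline, intervention)  # positive t: baseline mean is larger
```

In practice the statistic would be referred to a t distribution (as Stata's `ttest` does) to obtain the two-tailed P value reported in the text.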

RESULTS

Clinical and Value Outcomes

Baseline Period (January 2012 to December 2012) Versus Intervention Period (January 2013 to August 2013)

LOS for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Notably, there was no significant difference in mean LOS between baseline and intervention periods for nontelemetry beds (2.84 days vs 2.72 days, P=0.32) for hospitalists. In comparison, for nonhospitalists, there was no difference in mean LOS for telemetry beds between baseline and intervention periods (2.75 days vs 2.46 days, P=0.33) and nontelemetry beds (2.64 days vs 2.89 days, P=0.26) (Table 1).

Table 1. Bed Utilization Over Baseline, Intervention, and Extension Time Periods for Hospitalists and Nonhospitalists

                        Baseline   Intervention   P Value   Extension   P Value
Length of stay, d
  Hospitalists
    Telemetry beds        2.75        2.13         0.005      1.93       0.09
    Nontelemetry beds     2.84        2.72         0.324      2.44       0.21
  Nonhospitalists
    Telemetry beds        2.75        2.46         0.331      2.22       0.43
    Nontelemetry beds     2.64        2.89         0.261      2.26       0.05
Case mix index
  Hospitalists            1.44        1.45         0.68       1.40       0.21
  Nonhospitalists         1.46        1.40         0.53       1.53       0.18

NOTE: Length of stay (LOS) for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Nonhospitalists demonstrated no difference in mean LOS for telemetry beds between baseline and intervention periods (2.75 days vs 2.46 days, P=0.33). The results were sustained in the hospitalist group, with a telemetry LOS of 1.93 days in the extension period. The mean case mix index managed by the hospitalist and nonhospitalist groups remained unchanged. Each P value compares the period with the preceding one.

Costs of hospital stay were also reduced in the multipronged, hospitalist‐driven intervention group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists (Table 2).

Table 2. Percent Change in Accommodation Costs, Baseline to Intervention and Intervention to Extension Periods

                      Baseline to Intervention   Intervention to Extension
Hospitalists
  Telemetry beds              22.55%                      9.55%
  Nontelemetry beds            4.23%                     10.14%
Nonhospitalists
  Telemetry beds              10.55%                      9.89%
  Nontelemetry beds            9.47%                     21.84%

NOTE: Accommodation costs were reduced in the hospitalist group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists.
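The percent changes in Table 2 are simple relative differences between period costs. As an illustrative sanity check (the dollar figures below are hypothetical, not the study's internal accounting data), a drop in per-stay telemetry accommodation cost from $683 to $529 corresponds to roughly the 22.5% hospitalist reduction reported:

```python
def pct_change(old, new):
    """Relative change from old to new, as a percentage.
    Negative values indicate a reduction (cost savings)."""
    return (new - old) / old * 100.0

# Hypothetical per-stay telemetry accommodation costs (illustrative only)
change = pct_change(683.0, 529.0)  # about -22.5%, comparable in magnitude
                                   # to the hospitalist telemetry-bed row
```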

The mean CMI of the patient cohort managed by the hospitalists in the baseline and intervention periods was not significantly different (1.44 vs 1.45, P=0.68). The mean CMI of the patients managed by the nonhospitalists in the baseline and intervention periods was also not significantly different (1.46 vs 1.40, P=0.53) (Table 1). The mortality index was not significantly different between the baseline and intervention periods (0.77 ± 0.22 vs 0.66 ± 0.23, P=0.54), nor between the intervention and extension periods (0.66 ± 0.23 vs 0.65 ± 0.15, P=0.95).

Intervention Period (January 2013 to August 2013) Versus Extension Period (September 2014 to March 2015)

The decreased telemetry LOS for hospitalists was sustained from the intervention period to the extension period, from 2.13 to 1.93 (P=0.09). There was no significant change in the nontelemetry LOS in the intervention period compared to the extension period (2.72 vs 2.44, P=0.21). There was no change in the telemetry LOS for nonhospitalists from the intervention period to the extension period (2.46 vs 2.22, P=0.43).

The mean CMI in the hospitalist group was not significantly different in the intervention period compared to the extension period (1.45 to 1.40, P=0.21). The mean CMI in the nonhospitalist group did not change from the intervention period to the extension period (1.40 vs 1.53, P=0.18) (Table 1).

Education Outcomes

Of the 56 participants completing the education module and survey, 28.6% were medical students, 53.6% were interns, 12.5% were second‐year residents, and 5.4% were third‐year residents. Several baseline findings emerged from the pretest. Regarding current patterns of telemetry use, 32.2% of participants reported evaluating the necessity of telemetry on admission only, 26.3% during transitions of care, 5.1% after discharge plans were cemented, 33.1% on a daily basis, and 3.4% rarely. When asked which member of the care team was most likely to encourage appropriate telemetry use, 20.8% identified another resident, 13.9% nursing, 37.5% the attending physician, 20.8% themselves, 4.2% the team as a whole, and 2.8% no one.

Figure 1 shows premodule results regarding the trainees' perceived percentage of patient encounters during which a participant's team discussed their patient's need for telemetry.

Figure 1
Premodule, trainee‐perceived percentage of patient encounters for which the team discussed a patient's need for telemetry; N/R, no response.

In assessing perception of current telemetry utilization, 1.8% of participants thought 0% to 10% of patients were currently on telemetry, 19.6% thought 11% to 20%, 42.9% thought 21% to 30%, 30.4% thought 31% to 40%, and 3.6% thought 41% to 50%.

Two areas were assessed at both baseline and after the intervention: knowledge of indications of telemetry use and cost related to telemetry use. We saw increased awareness of cost‐saving actions. To assess current knowledge of the indications of proper telemetry use according to American Heart Association guidelines, participants were presented with a list of 5 patients with different clinical indications for telemetry use and asked which patient required telemetry the most. Of the participants, 54.5% identified the correct answer in the pretest and 61.8% identified the correct answer in the post‐test. To assess knowledge of the costs of telemetry relative to other patient care, participants were presented with a patient case and asked to identify the most and least cost‐saving actions to safely care for the patient. When asked to identify the most cost‐saving action, 20.3% identified the correct answer in the pretest and 61.0% identified the correct answer in the post‐test. Of those who answered incorrectly in the pretest, 51.1% answered correctly in the post‐test (P=0.002). When asked to identify the least cost‐saving action, 23.7% identified the correct answer in the pretest and 50.9% identified the correct answer in the post‐test. Of those who answered incorrectly in the pretest, 60.0% answered correctly in the post‐test (P=0.003).
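The article does not name the test behind the paired pre/post P values above. One common choice for paired binary outcomes is an exact sign (McNemar-style) test on the responders who switched answers; the sketch below illustrates that approach under that assumption, with invented counts, and is not a reconstruction of the study's actual analysis.

```python
from math import comb

def exact_sign_test(k, n, p=0.5):
    """Two-sided exact binomial p-value: the probability, under chance
    rate p, of an outcome at least as unlikely as k successes in n trials."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # Sum all outcomes whose probability does not exceed that of k
    return min(1.0, sum(x for x in pmf if x <= pmf[k] + 1e-12))

# Illustrative only: 8 of 10 initially-incorrect responders switch to correct
p_value = exact_sign_test(8, 10)
```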

In the post‐test, when asked about the importance of appropriate telemetry usage in providing cost‐conscious care and assuring appropriate hospital resource management, 76.8% of participants rated it very important, 21.4% somewhat important, and 1.8% not applicable. The most commonly perceived barriers impeding discontinuation of telemetry, as reported by participants via post‐test, were nursing desires and time. Figure 2 shows all perceived barriers.

Figure 2
Postmodule, trainee‐perceived barriers to discontinuation of telemetry.

DISCUSSION

Our study is, to our knowledge, one of the first to demonstrate reductions in telemetry LOS through a hospitalist intervention for telemetry utilization. Others[10, 11] have studied the impact of an orientation handout by chief residents or a multispecialty telemetry policy enforced by an outside cardiologist and nurse team. Dressler et al. later sustained a 70% reduction in telemetry use, without adversely affecting patient safety (as assessed through numbers of rapid response activations, codes, and deaths), by integrating the American Heart Association guidelines into their electronic ordering system.[12] Our study, however, has the advantage that the primary team, which knows the patient and clinical scenario best, drives the change during attending rounds. In an era where cost consciousness intersects the practice of medicine, any intervention in patient care that demonstrates cost savings without an adverse impact on patient care and resource utilization must be emphasized. This is particularly important in academic institutions, where residents and medical students are learning to integrate the principles of patient safety and quality improvement into their clinical practice.[13] Notably, we showed sustained telemetry LOS reductions into the extension period after our intervention, which we believe reflects the integration of telemetry triage into our attending and resident rounding practices. Future work should include integration of telemetry triage into clinical decision support in the electronic medical record and into multidisciplinary rounds, to disseminate telemetry triage hospital‐wide in both academic and community settings.

Our study also revealed that nearly half of participants were unaware of the criteria for appropriate telemetry utilization before our intervention; in the preintervention period, there were many anecdotal and objective findings of inappropriate telemetry use, as well as continuation beyond clinical need, in both the hospitalist and nonhospitalist groups. For the hospitalist group (ie, the group receiving guideline‐based education on appropriate indications for telemetry utilization), there was an improvement in both appropriate usage and timely discontinuation of telemetry in the postintervention period, which we attribute in large part to adherence to the education provided to this group.

We were able to show increased knowledge of cost‐saving actions among trainees with our educational module. We believe it is imperative to educate our providers (physicians, nurses, case managers, and students within these disciplines) on the appropriate indications for telemetry use, not only to help with cost savings and resource availability (ie, allowing telemetry beds to be available for the patients who need them most), but also to instill consistent expectations among our patients.

Additionally, we believe it is important to consider the impacts of inappropriate telemetry use from the patient's perspective: it is physically restrictive and inconvenient, its alarms are disruptive, it can be a barrier to other treatments such as physical therapy, it may delay imaging studies because a nurse may be required to accompany monitored patients, and it adds costs to the patient's medical bill.

We believe our success is due to several strategies. First, at the start of the fiscal year when quality improvement metrics are established, this particular metric (improving the appropriate utilization and timely discontinuation of telemetry) was deemed important by all hospitalists, engendering group buy‐in prior to the intervention. Our hospitalists received a detailed and interactive tutorial session in person at the beginning of the study. This tutorial provided the hospitalists with a comprehensive understanding of the appropriate (and inappropriate) indications for telemetry monitoring, hence facilitating guideline‐directed utilization. Email reminders and the tutorial tool were provided each time a hospitalist attended on the wards, and hospitalists received a small financial incentive to comply with appropriate telemetry utilization.

Our study has several strengths. First, the time frame of our study was long enough (8 months) to allow consistent trends to emerge and to optimize exposure of housestaff and medical students to this quality‐improvement initiative. Second, our cost savings came from 2 factors, direct reduction of inappropriate telemetry use and reduction in length of stay, highlighting the dual impact of appropriate telemetry utilization on cost. The overall reductions in telemetry utilization for the intervention group were a result of both reductions in initial placement on telemetry for patients who did not meet criteria for such monitoring as well as timely discontinuation of telemetry during the patient's hospitalization. Third, our study demonstrates that physicians can be effective in driving appropriate telemetry usage by participating in the clinical decision making regarding necessity and educating providers, trainees/students, and patients on appropriate indications. Finally, we show sustainment of our intervention in the extension period, suggesting telemetry triage integration into rounding practice.

Our study has limitations as well. First, our sample was relatively small and drawn from a single academic center. Second, due to complexities in our faculty scheduling, we were unable to randomize patients to hospitalist versus nonhospitalist teams. However, despite the inability to randomize, our study does show the benefit of a hospitalist attending in reducing telemetry LOS, given that there was no change in nonhospitalist telemetry LOS despite all of the other hospital‐wide interventions (multidisciplinary rounds, similar housestaff). Third, the CMI was used as a proxy for patient complexity, and the mortality index as the overall marker of safety. Further studies should monitor the frequency and outcomes of arrhythmic events among patients transferred from telemetry monitoring to medical‐surgical beds. Finally, as the intervention was multipronged, we are unable to determine which component led to the reductions in telemetry utilization; each component, however, remains easily transferable to outside institutions. We demonstrated both a reduction in initiation of telemetry and timely discontinuation; however, due to the complexity of capturing this accurately, we were unable to quantify these individual outcomes numerically.

Additionally, there were approximately 10 nonhospitalist attendings who also staffed the wards during the intervention time period of our study; these attendings did not undergo the telemetry tutorial/orientation. This difference, along with the Hawthorne effect for the hospitalist attendings, also likely contributed to the difference in outcomes between the 2 attending cohorts in the intervention period.

CONCLUSIONS

Our results demonstrate that a multipronged hospitalist‐driven intervention to improve appropriate use of telemetry reduces telemetry LOS and cost. Hence, we believe that targeted, education‐driven interventions with monitoring of progress can have demonstrable impacts on changing practice. Physicians will need to make trade‐offs in clinical practice to balance efficient resource utilization with the patient's evolving condition in the inpatient setting, the complexities of clinical workflow, and the patient's expectations.[14] Appropriate telemetry utilization is a prime example of what needs to be done well in the future for high‐value care.

Acknowledgements

The authors acknowledge the hospitalists who participated in the intervention: Jeffrey Chi, William Daines, Sumbul Desai, Poonam Hosamani, John Kugler, Charles Liao, Errol Ozdalga, and Sang Hoon Woo. The authors also acknowledge Joan Hendershott in the Finance Department and Joseph Hopkins in the Quality Department.

Disclosures: All coauthors have seen and agree with the contents of the article; submission (aside from abstracts) was not under review by any other publication. The authors report no disclosures of financial support from, or equity positions in, manufacturers of drugs or products mentioned in the article.

References
  1. Kashihara D, Carper K. National health care expenses in the U.S. civilian noninstitutionalized population, 2009. Statistical brief 355. 2012. Agency for Healthcare Research and Quality, Rockville, MD.
  2. Pfuntner A, Wier L, Steiner C. Costs for hospital stays in the United States, 2010. Statistical brief 146. 2013. Agency for Healthcare Research and Quality, Rockville, MD.
  3. Sivaram CA, Summers JH, Ahmed N. Telemetry outside critical care units: patterns of utilization and influence on management decisions. Clin Cardiol. 1998;21(7):503505.
  4. Ivonye C, Ohuabunwo C, Henriques‐Forsythe M, et al. Evaluation of telemetry utilization, policy, and outcomes in an inner‐city academic medical center. J Natl Med Assoc. 2010;102(7):598604.
  5. Jaffe AS, Atkins JM, Field JM. Recommended guidelines for in‐hospital cardiac monitoring of adults for detection of arrhythmia. Emergency Cardiac Care Committee members. J Am Coll Cardiol. 1991;18(6):14311433.
  6. Drew BJ, Califf RM, Funk M, et al. Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young: endorsed by the International Society of Computerized Electrocardiology and the American Association of Critical‐Care Nurses. Circulation. 2004;110(17):27212746.
  7. Henriques‐Forsythe MN, Ivonye CC, Jamched U, Kamuguisha LK, Olejeme KA, Onwuanyi AE. Is telemetry overused? Is it as helpful as thought? Cleve Clin J Med. 2009;76(6):368372.
  8. Society of Hospital Medicine. Adult Hospital Medicine. Five things physicians and patients should question. Available at: http://www.choosingwisely.org/societies/society‐of‐hospital‐medicine‐adult. Published February 21, 2013. Accessed October 5, 2014.
  9. Joint Commission on Accreditation of Healthcare Organizations. The Joint Commission announces 2014 national patient safety goal. Jt Comm Perspect. 2013;33(7):14.
  10. Lee JC, Lamb P, Rand E, Ryan C, Rubel B. Optimizing telemetry utilization in an academic medical center. J Clin Outcomes Manage. 2008;15(9):435440.
  11. Silverstein N, Silverman A. Improving utilization of telemetry in a university hospital. J Clin Outcomes Manage. 2005;12(10):519522.
  12. Dressler R, Dryer MM, Coletti C, Mahoney D, Doorey AJ. Altering overuse of cardiac telemetry in non‐intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174:18521854.
  13. Pines JM, Farmer SA, Akman JS. "Innovation" institutes in academic health centers: enhancing value through leadership, education, engagement, and scholarship. Acad Med. 2014;89(9):12041206.
  14. Sabbatini AK, Tilburt JC, Campbell EG, Sheeler RD, Egginton JS, Goold SD. Controlling health costs: physician responses to patient expectations for medical care. J Gen Intern Med. 2014;29(9):12341241.
Issue
Journal of Hospital Medicine - 10(9)
Page Number
627-632
Display Headline
Hospitalist intervention for appropriate use of telemetry reduces length of stay and cost
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Kambria H. Evans, Program Officer of Quality and Organizational Improvement, Department of Medicine, Stanford University, 700 Welch Road, Suite 310B, Palo Alto, CA 94304; Telephone: 650-725-8803; Fax: 650-725-1675; E-mail: [email protected]