Simulation-Based Training in Medical Education: Immediate Growth or Cautious Optimism?

For years, professional athletes have used simulation-based training (SBT), a combination of virtual and experiential learning that aims to optimize technical skills, teamwork, and communication.1 In SBT, critical plays and skills are first watched on video or reviewed on a chalkboard, and then run in the presence of a coach who offers immediate feedback to the player. The hope is that the individual will then be able to perfectly execute that play or scenario when it is game time. While SBT is a developing tool in medical education—allowing learners to practice important clinical skills before entering the higher-stakes clinical environment—an important question remains: what training can go virtual, and what needs to stay in person?

In this issue, Carter et al2 present a single-site telesimulation curriculum that addresses consult request and handoff communication using SBT. Due to the COVID-19 pandemic, the authors converted an in-person intern bootcamp into a virtual, Zoom®-based workshop and compared assessments and evaluations to those from the previous year’s (2019) in-person bootcamp. Compared to the in-person class, the telesimulation-based cohort performed as well as or better on the consult request portion of the workshop. However, participants were significantly less likely to perform the assessed handoff skills optimally: only a quarter (26%) appropriately prioritized patients, and less than half (49%) provided an appropriate amount of information in the patient summary. Additionally, postworkshop surveys found that SBT participants were more satisfied with their performance in both the consult request and handoff scenarios and felt more prepared (99% vs 91%) to perform handoffs in clinical practice than the previous year’s in-person cohort.

We focus on this work because it explores the role that SBT or virtual training could have in hospital communication and patient safety training. While previous work has highlighted that technical and procedural skills often lend themselves to in-person adaptation (eg, point-of-care ultrasound), this work suggests that nontechnical skills training could be adapted to the virtual environment. Hospitalists and internal medicine trainees perform myriad nontechnical activities, such as holding end-of-life discussions, obtaining informed consent, providing peer-to-peer feedback, and leading multidisciplinary teams. Activities like these, which require no hands-on interaction, may be well-suited for simulation-based or virtual training.3

However, we make this suggestion with some caution. In Carter et al’s study,2 although we would have assumed that telesimulation would work for the handoff portion of the workshop, the telesimulation-based cohort performed worse than the interns who participated in the previous year’s in-person training while paradoxically reporting that they felt more prepared. The authors offer several possible explanations, including alterations in the assessment checklist and a shift in facilitators from peer observers to faculty hospitalists. We suspect that differences in the participants’ experiences prior to the bootcamp may also be at play. Because the pandemic began during their final year of undergraduate medical training, many in this intern cohort were likely removed from their fourth-year clinical clerkships,4 losing pivotal opportunities to hone and refine this skill set before starting graduate medical education.

As telesimulation and other virtual care educational opportunities continue to evolve, we must ensure that such training does not sacrifice quality for ease and satisfaction. As the authors’ findings show, simply replicating an in-person curriculum in a virtual environment does not ensure equivalence for all skill sets. We remain cautiously optimistic that as we adjust to a postpandemic world, more SBT and virtual-based educational interventions will allow medical trainees to be ready to perform come game time.

References

1. McCaskill S. Sports tech comes of age with VR training, coaching apps and smart gear. Forbes. March 31, 2020. https://www.forbes.com/sites/stevemccaskill/2020/03/31/sports-tech-comes-of-age-with-vr-training-coaching-apps-and-smart-gear/?sh=309a8fa219c9
2. Carter K, Podczerwinski J, Love L, et al. Utilizing telesimulation for advanced skills training in consultation and handoff communication: a post-COVID-19 GME bootcamp experience. J Hosp Med. 2021;16(12):730-734. https://doi.org/10.12788/jhm.3733
3. Paige JT, Sonesh SC, Garbee DD, Bonanno LS. Comprehensive Healthcare Simulation: Interprofessional Team Training and Simulation. 1st ed. Springer International Publishing; 2020. https://doi.org/10.1007/978-3-030-28845-7
4. Goldenberg MN, Hersh DC, Wilkins KM, Schwartz ML. Suspending medical student clerkships due to COVID-19. Med Sci Educ. 2020;30(3):1-4. https://doi.org/10.1007/s40670-020-00994-1

Author and Disclosure Information

1Division of Hospital Medicine, Department of Internal Medicine, Virginia Commonwealth University Health, Richmond, Virginia; 2Section of Hospital Medicine, San Francisco VA Medical Center, San Francisco, California; 3Department of Medicine, University of California, San Francisco, San Francisco, California.

Disclosures
The authors reported no conflicts of interest.

Issue
Journal of Hospital Medicine 16(12)
Page Number
767

Correspondence Location
Michelle Brooks, MD; Email: [email protected]; Twitter: @Michellebr00ks.

How Organizations Can Build a Successful and Sustainable Social Media Presence


Horwitz and Detsky1 provide readers with a personal, experientially based primer on how healthcare professionals can more effectively engage on Twitter. As experienced physicians, researchers, and active social media users, the authors outline pragmatic and specific recommendations on how to counter misinformation and add value to social media discourse. We applaud the authors for offering best-practice approaches that are valuable to newcomers as well as seasoned social media users. In highlighting that social media is merely a modern tool for engagement and discussion, the authors underscore the time-honored idea that only when a tool is used effectively will it yield the desired outcome. As a medical journal that regularly uses social media as a tool for outreach and dissemination, we could not agree more with the authors’ assertion.

Since 2015, the Journal of Hospital Medicine (JHM) has used social media to engage its readership and extend the impact of the work published in its pages. Like Horwitz and Detsky, JHM has developed insights and experience in how medical journals, organizations, institutions, and other academic programs can use social media effectively. Because of our experience in this area, we are often asked how to build a successful and sustainable social media presence. Here, we share five primary lessons on how to use social media as a tool to disseminate, connect, and engage.

ESTABLISH YOUR GOALS

As the flagship journal for the field of hospital medicine, we seek to disseminate the ideas and research that will inform health policy, optimize healthcare delivery, and improve patient outcomes while also building and sustaining an online community for professional engagement and growth. Our social media goals provide direction on how to interact, allow us to focus attention on what is important, and motivate our growth in this area. Simply put, we believe that using social media without defined goals would be like sailing a ship without a rudder.

KNOW YOUR AUDIENCE

As your organization establishes its goals, it is important to consider with whom you want to connect. Knowing your audience will allow you to better tailor the content you deliver through social media. For instance, we understand that as a journal focused on hospital medicine, our audience consists of busy clinicians, researchers, and medical educators who are trying to efficiently gather the most up-to-date information in our field. Recognizing this, we produce (and make available for download) Visual Abstracts and publish them on Twitter to help our followers assimilate information from new studies quickly and easily.2 Moreover, we recognize that our followers are interested in how to use social media in their professional lives and have published several articles in this topic area.3-5

BUILD YOUR TEAM

We have found that having multiple individuals on our social media team has led to greater creativity and thoughtfulness on how we engage our readership. Our teams span generations, clinical experience, institutions, and cultural backgrounds. This intentional approach has allowed for diversity in thoughts and opinions and has helped shape the JHM social media message. Additionally, we have not only formalized editorial roles through the creation of Digital Media Editor positions, but we have also created the JHM Digital Media Fellowship, a training program and development pipeline for those interested in cultivating organization-based social media experiences and skill sets.6

ENGAGE CONSISTENTLY

Many organizations believe that successful social media outreach means creating an account and posting content when convenient. Experience has taught us that daily postings and consistent engagement will build your brand as a reliable source of information for your followers. Additionally, while many academic journals and organizations only occasionally post material and rarely interact with their followers, we have found that engaging and facilitating conversations through our monthly Twitter discussion (#JHMChat) has established a community, created opportunities for professional networking, and further disseminated the work published in JHM.7 As an academic journal or organization entering this field, recognize the product for which people follow you and deliver that product on a consistent basis.

OWN YOUR MISTAKES

It will only be a matter of time before your organization makes a misstep on social media. Instead of hiding, we recommend stepping into that tension and owning the mistake. For example, we recently published an article that contained a culturally offensive term. As a journal, we reflected on our error and took concrete steps to correct it. Further, we shared our thoughts with our followers to ensure transparency.8 Moving forward, we have inserted specific safeguards into our editorial review process to avoid such missteps in the future.

Although every organization will have different goals and reasons for engaging on social media, we believe these central tenets will help optimize the use of this platform. While we have established specific objectives for our engagement on social media, we believe Horwitz and Detsky1 put it best when they note that, at the end of the day, our ultimate goal lies in “…promoting knowledge and science in a way that helps us all live healthier and happier lives.”

References

1. Horwitz LI, Detsky AS. Tweeting into the void: effective use of social media for healthcare professionals. J Hosp Med. 2021;16(10):581-582. https://doi.org/10.12788/jhm.3684
2. 2021 Visual Abstracts. Accessed September 8, 2021. https://www.journalofhospitalmedicine.com/jhospmed/page/2021-visual-abstracts
3. Kumar A, Chen N, Singh A. #ConsentObtained - patient privacy in the age of social media. J Hosp Med. 2020;15(11):702-704. https://doi.org/10.12788/jhm.3416
4. Minter DJ, Patel A, Ganeshan S, Nematollahi S. Medical communities go virtual. J Hosp Med. 2021;16(6):378-380. https://doi.org/10.12788/jhm.3532
5. Marcelin JR, Cawcutt KA, Shapiro M, Varghese T, O’Glasser A. Moment vs movement: mission-based tweeting for physician advocacy. J Hosp Med. 2021;16(8):507-509. https://doi.org/10.12788/jhm.3636
6. Editorial Fellowships (Digital Media and Editorial). Accessed September 8, 2021. https://www.journalofhospitalmedicine.com/content/editorial-fellowships-digital-media-and-editorial
7. Wray CM, Auerbach AD, Arora VM. The adoption of an online journal club to improve research dissemination and social media engagement among hospitalists. J Hosp Med. 2018;13(11):764-769. https://doi.org/10.12788/jhm.2987
8. Shah SS, Manning KD, Wray CM, Castellanos A, Jerardi KE. Microaggressions, accountability, and our commitment to doing better [editorial]. J Hosp Med. 2021;16(6):325. https://doi.org/10.12788/jhm.3646

Author and Disclosure Information

1Department of Medicine, University of California, San Francisco, California; 2Section of Hospital Medicine, San Francisco Veterans Affairs Medical Center, San Francisco, California; 3Division of Hospital Medicine, Northwestern University, Feinberg School of Medicine, Chicago, Illinois; 4Divisions of Hospital Medicine and Infectious Diseases, Cincinnati Children’s Hospital Medical Center and the University of Cincinnati College of Medicine, Cincinnati, Ohio.

Disclosures
Dr Wray is a Deputy Digital Media Editor, Dr Kulkarni is an Associate Editor, and Dr Shah is the Editor-in-Chief for the Journal of Hospital Medicine.

Funding
Dr Wray is supported by a VA Health Services Research and Development Career Development Award (IK2HX003139-01A2).

Issue
Journal of Hospital Medicine 16(10)
Page Number
581-582. Published Online First September 15, 2021

Horwitz and Detsky1 provide readers with a personal, experientially based primer on how healthcare professionals can more effectively engage on Twitter. As experienced physicians, researchers, and active social media users, the authors outline pragmatic and specific recommendations on how to engage misinformation and add value to social media discourse. We applaud the authors for offering best-practice approaches that are valuable to newcomers as well as seasoned social media users. In highlighting that social media is merely a modern tool for engagement and discussion, the authors underscore the time-held idea that only when a tool is used effectively will it yield the desired outcome. As a medical journal that regularly uses social media as a tool for outreach and dissemination, we could not agree more with the authors’ assertion.

Since 2015, the Journal of Hospital Medicine (JHM) has used social media to engage its readership and extend the impact of the work published in its pages. Like Horwitz and Detsky, JHM has developed insights and experience in how medical journals, organizations, institutions, and other academic programs can use social media effectively. Because of our experience in this area, we are often asked how to build a successful and sustainable social media presence. Here, we share five primary lessons on how to use social media as a tool to disseminate, connect, and engage.

ESTABLISH YOUR GOALS

As the flagship journal for the field of hospital medicine, we seek to disseminate the ideas and research that will inform health policy, optimize healthcare delivery, and improve patient outcomes while also building and sustaining an online community for professional engagement and growth. Our social media goals provide direction on how to interact, allow us to focus attention on what is important, and motivate our growth in this area. Simply put, we believe that using social media without defined goals would be like sailing a ship without a rudder.

KNOW YOUR AUDIENCE

As your organization establishes its goals, it is important to consider with whom you want to connect. Knowing your audience will allow you to better tailor the content you deliver through social media. For instance, we understand that as a journal focused on hospital medicine, our audience consists of busy clinicians, researchers, and medical educators who are trying to efficiently gather the most up-to-date information in our field. Recognizing this, we produce (and make available for download) Visual Abstracts and publish them on Twitter to help our followers assimilate information from new studies quickly and easily.2 Moreover, we recognize that our followers are interested in how to use social media in their professional lives and have published several articles in this topic area.3-5

BUILD YOUR TEAM

We have found that having multiple individuals on our social media team has led to greater creativity and thoughtfulness on how we engage our readership. Our teams span generations, clinical experience, institutions, and cultural backgrounds. This intentional approach has allowed for diversity in thoughts and opinions and has helped shape the JHM social media message. Additionally, we have not only formalized editorial roles through the creation of Digital Media Editor positions, but we have also created the JHM Digital Media Fellowship, a training program and development pipeline for those interested in cultivating organization-based social media experiences and skill sets.6

ENGAGE CONSISTENTLY

Many organizations believe that successful social media outreach means creating an account and posting content when convenient. Experience has taught us that daily postings and regular engagement will build your brand as a regular and reliable source of information for your followers. Additionally, while many academic journals and organizations only occasionally post material and rarely interact with their followers, we have found that engaging and facilitating conversations through our monthly Twitter discussion (#JHMChat) has established a community, created opportunities for professional networking, and further disseminated the work published in JHM.7 As an academic journal or organization entering this field, recognize the product for which people follow you and deliver that product on a consistent basis.

OWN YOUR MISTAKES

It will only be a matter of time before your organization makes a misstep on social media. Instead of hiding, we recommend stepping into that tension and owning the mistake. For example, we recently published an article that contained a culturally offensive term. As a journal, we reflected on our error and took concrete steps to correct it. Further, we shared our thoughts with our followers to ensure transparency.8 Moving forward, we have inserted specific stopgaps in our editorial review process to avoid such missteps in the future.

Although every organization will have different goals and reasons for engaging on social media, we believe these central tenets will help optimize the use of this platform. Although we have established specific objectives for our engagement on social media, we believe Horwitz and Detsky1 put it best when they note that, at the end of the day, our ultimate goal is in “…promoting knowledge and science in a way that helps us all live healthier and happier lives."

Horwitz and Detsky1 provide readers with a personal, experientially based primer on how healthcare professionals can more effectively engage on Twitter. As experienced physicians, researchers, and active social media users, the authors outline pragmatic and specific recommendations on how to engage misinformation and add value to social media discourse. We applaud the authors for offering best-practice approaches that are valuable to newcomers as well as seasoned social media users. In highlighting that social media is merely a modern tool for engagement and discussion, the authors underscore the time-honored idea that only when a tool is used effectively will it yield the desired outcome. As a medical journal that regularly uses social media as a tool for outreach and dissemination, we could not agree more with the authors’ assertion.

Since 2015, the Journal of Hospital Medicine (JHM) has used social media to engage its readership and extend the impact of the work published in its pages. Like Horwitz and Detsky, JHM has developed insights and experience in how medical journals, organizations, institutions, and other academic programs can use social media effectively. Because of our experience in this area, we are often asked how to build a successful and sustainable social media presence. Here, we share five primary lessons on how to use social media as a tool to disseminate, connect, and engage.

ESTABLISH YOUR GOALS

As the flagship journal for the field of hospital medicine, we seek to disseminate the ideas and research that will inform health policy, optimize healthcare delivery, and improve patient outcomes while also building and sustaining an online community for professional engagement and growth. Our social media goals provide direction on how to interact, allow us to focus attention on what is important, and motivate our growth in this area. Simply put, we believe that using social media without defined goals would be like sailing a ship without a rudder.

KNOW YOUR AUDIENCE

As your organization establishes its goals, it is important to consider with whom you want to connect. Knowing your audience will allow you to better tailor the content you deliver through social media. For instance, we understand that as a journal focused on hospital medicine, our audience consists of busy clinicians, researchers, and medical educators who are trying to efficiently gather the most up-to-date information in our field. Recognizing this, we produce (and make available for download) Visual Abstracts and publish them on Twitter to help our followers assimilate information from new studies quickly and easily.2 Moreover, we recognize that our followers are interested in how to use social media in their professional lives and have published several articles in this topic area.3-5

BUILD YOUR TEAM

We have found that having multiple individuals on our social media team has led to greater creativity and thoughtfulness on how we engage our readership. Our teams span generations, clinical experience, institutions, and cultural backgrounds. This intentional approach has allowed for diversity in thoughts and opinions and has helped shape the JHM social media message. Additionally, we have not only formalized editorial roles through the creation of Digital Media Editor positions, but we have also created the JHM Digital Media Fellowship, a training program and development pipeline for those interested in cultivating organization-based social media experiences and skill sets.6

ENGAGE CONSISTENTLY

Many organizations believe that successful social media outreach means creating an account and posting content when convenient. Experience has taught us that daily postings and regular engagement will build your brand as a consistent and reliable source of information for your followers. Additionally, while many academic journals and organizations only occasionally post material and rarely interact with their followers, we have found that engaging and facilitating conversations through our monthly Twitter discussion (#JHMChat) has established a community, created opportunities for professional networking, and further disseminated the work published in JHM.7 As an academic journal or organization entering this field, recognize the product for which people follow you and deliver that product consistently.

OWN YOUR MISTAKES

It will only be a matter of time before your organization makes a misstep on social media. Instead of hiding, we recommend stepping into that tension and owning the mistake. For example, we recently published an article that contained a culturally offensive term. As a journal, we reflected on our error and took concrete steps to correct it. Further, we shared our thoughts with our followers to ensure transparency.8 Moving forward, we have built specific safeguards into our editorial review process to avoid such missteps in the future.

Although every organization will have different goals and reasons for engaging on social media, we believe these central tenets will help optimize the use of this platform. While we have established specific objectives for our engagement on social media, we believe Horwitz and Detsky1 put it best when they note that, at the end of the day, our ultimate goal is in “…promoting knowledge and science in a way that helps us all live healthier and happier lives.”

References

1. Horwitz LI, Detsky AS. Tweeting into the void: effective use of social media for healthcare professionals. J Hosp Med. 2021;16(10):581-582. https://doi.org/10.12788/jhm.3684
2. 2021 Visual Abstracts. Accessed September 8, 2021. https://www.journalofhospitalmedicine.com/jhospmed/page/2021-visual-abstracts
3. Kumar A, Chen N, Singh A. #ConsentObtained - patient privacy in the age of social media. J Hosp Med. 2020;15(11):702-704. https://doi.org/10.12788/jhm.3416
4. Minter DJ, Patel A, Ganeshan S, Nematollahi S. Medical communities go virtual. J Hosp Med. 2021;16(6):378-380. https://doi.org/10.12788/jhm.3532
5. Marcelin JR, Cawcutt KA, Shapiro M, Varghese T, O’Glasser A. Moment vs movement: mission-based tweeting for physician advocacy. J Hosp Med. 2021;16(8):507-509. https://doi.org/10.12788/jhm.3636
6. Editorial Fellowships (Digital Media and Editorial). Accessed September 8, 2021. https://www.journalofhospitalmedicine.com/content/editorial-fellowships-digital-media-and-editorial
7. Wray CM, Auerbach AD, Arora VM. The adoption of an online journal club to improve research dissemination and social media engagement among hospitalists. J Hosp Med. 2018;13(11):764-769. https://doi.org/10.12788/jhm.2987
8. Shah SS, Manning KD, Wray CM, Castellanos A, Jerardi KE. Microaggressions, accountability, and our commitment to doing better [editorial]. J Hosp Med. 2021;16(6):325. https://doi.org/10.12788/jhm.3646


Issue
Journal of Hospital Medicine 16(10)
Page Number
581-582. Published Online First September 15, 2021
Display Headline
How Organizations Can Build a Successful and Sustainable Social Media Presence
Article Source

© 2021 Society of Hospital Medicine

Correspondence Location
Charlie M Wray, DO, MS; Email: [email protected]; Telephone: 415-595-9662; Twitter: @WrayCharles.

Leveraging the Care Team to Optimize Disposition Planning

Article Type
Changed
Tue, 08/31/2021 - 13:44
Display Headline
Leveraging the Care Team to Optimize Disposition Planning

Is this patient a good candidate? In medicine, we subconsciously answer this question for every clinical decision we make. Occasionally, though, a clinical question is so complex that it cannot or should not be answered by a single individual. One example is the decision on whether a patient should receive an organ transplant. In this situation, a multidisciplinary committee weighs the complex ethical, clinical, and financial implications of the decision before coming to a verdict. Together, team members discuss the risks and benefits of each patient’s candidacy and, in a united fashion, decide the best course of care. For hospitalists, a far more common question occurs every day and is similarly fraught with multifaceted implications: Is my patient a good candidate for a skilled nursing facility (SNF)? We often rely on a single individual to make the final call, but should we instead be leveraging the expertise of other care team members to assist with this decision?

In this issue, Boyle et al1 describe the implementation of a multidisciplinary team consisting of physicians, case managers, social workers, physical and occupational therapists, and home-health representatives that reviewed all patients with an expected discharge to a SNF. Case managers or social workers began the process by referring eligible patients to the committee for review. If deemed appropriate, the committee discussed each case and reached a consensus recommendation as to whether a SNF was an appropriate discharge destination. The investigators used a matched, preintervention sample as a comparison group, with a primary outcome of total discharges to SNFs, and secondary outcomes consisting of readmissions, time to readmission, and median length of stay. The authors observed a 49.7% relative reduction in total SNF discharges (25.5% of preintervention patients discharged to a SNF vs 12.8% postintervention), as well as a 66.9% relative reduction in new SNF discharges. Despite the significant reduction in SNF utilization, no differences were noted in readmissions, time to readmission, or readmission length of stay.

While this study was performed during the COVID-19 pandemic, several characteristics make its findings applicable beyond this period. First, the structure and workflow of the team are extensively detailed and make the intervention easily generalizable to most hospitals. Second, while not specifically examined, the outcome of SNF reduction likely corresponds to an increase in the patient’s time at home—an important patient-centered target for most posthospitalization plans.2 Finally, the intervention used existing infrastructure and individuals, and did not require new resources to improve patient care, which increases the feasibility of implementation at other institutions.

These findings also reveal potential overutilization of SNFs in the discharge process. On average, a typical SNF stay costs the health system more than $11,000.3 A simple intervention could lead to substantial savings for individuals and the healthcare system. With a nearly 50% reduction in SNF use, understanding why patients who were eligible to go home were ultimately discharged to a SNF will be a crucial question to answer. Are there barriers to patient or family education? Is there a perceived safety difference between a SNF and home for nonskilled nursing needs? Additionally, care should be taken to ensure that decreases in SNF utilization do not disproportionately affect certain populations. Further work should assess the performance of similar models in a non-COVID era and among multiple institutions to verify potential scalability and generalizability.

Like organ transplant committees, the multidisciplinary approach Boyle et al1 used to reduce SNF discharges relied on thoughtful and intentional decision-making. Perhaps it is time we use this same model to transplant patients back into their homes as safely and efficiently as possible.

References

1. Boyle CA, Ravichandran U, Hankamp V, et al. Safe transitions and congregate living in the age of COVID-19: a retrospective cohort study. J Hosp Med. 2021;16(9):524-530. https://doi.org/10.12788/jhm.3657
2. Barnett ML, Grabowski DC, Mehrotra A. Home-to-home time—measuring what matters to patients and payers. N Engl J Med. 2017;377(1):4-6. https://doi.org/10.1056/NEJMp1703423
3. Werner RM, Coe NB, Qi M, Konetzka RT. Patient outcomes after hospital discharge to home with home health care vs to a skilled nursing facility. JAMA Intern Med. 2019;179(5):617-623. https://doi.org/10.1001/jamainternmed.2018.7998

Author and Disclosure Information

1Department of Internal Medicine, UT Southwestern Medical Center, Dallas, Texas; 2Division of Hospital Medicine, Parkland Memorial Hospital, Dallas, Texas; 3Department of Medicine, University of California, San Francisco, California; 4Section of Hospital Medicine, San Francisco Veterans Affairs Medical Center, San Francisco, California.

Disclosures
The authors reported no conflicts of interest.

Funding
Dr Wray is supported by a VA Health Services Research and Development Career Development Award (IK2HX003139-01A2).

Issue
Journal of Hospital Medicine 16(9)
Page Number
574
Article Source

© 2021 Society of Hospital Medicine

Correspondence Location
Andrew Sumarsono, MD; Email: [email protected].

Microaggressions, Accountability, and Our Commitment to Doing Better

Article Type
Changed
Tue, 06/01/2021 - 09:13
Display Headline
Microaggressions, Accountability, and Our Commitment to Doing Better

We recently published an article in our Leadership & Professional Development series titled “Tribalism: The Good, the Bad, and the Future.” Despite pre- and post-acceptance manuscript review and discussion by a diverse and thoughtful team of editors, we did not appreciate how particular language in this article would be hurtful to some communities. We also promoted the article using the hashtag “tribalism” in a journal tweet. Shortly after we posted the tweet, several readers on social media reached out with constructive feedback on the prejudicial nature of this terminology. Within hours of receiving this feedback, our editorial team met to better understand our error, and we made the decision to immediately retract the manuscript. We also deleted the tweet and issued an apology referencing a screenshot of the original tweet.1,2 We have republished the original article with appropriate language.3 Tweets promoting the new article will incorporate this new language.

From this experience, we learned that the words “tribe” and “tribalism” have no consistent meaning, are associated with negative historical and cultural assumptions, and can promote misleading stereotypes.4 The term “tribe” became popular as a colonial construct to describe forms of social organization considered “uncivilized” or “primitive.”5 In using the term “tribe” to describe members of medical communities, we ignored the complex and dynamic identities of Native American, African, and other Indigenous Peoples and the history of their oppression.

The intent of the original article was to highlight how being part of a distinct medical discipline, such as hospital medicine or emergency medicine, conferred benefits, such as shared identity and social support structure, and caution how this group identity could also lead to nonconstructive partisan behaviors that might not best serve our patients. We recognize that other words more accurately convey our intent and do not cause harm. We used “tribe” when we meant “group,” “discipline,” or “specialty.” We used “tribalism” when we meant “siloed” or “factional.”

This misstep underscores how, even with the best intentions and diverse teams, microaggressions can happen. We accept responsibility for this mistake, and we will continue to do the work of respecting and advocating for all members of our community. To minimize the likelihood of future errors, we are developing a systematic process to identify language within manuscripts accepted for publication that may be racist, sexist, ableist, homophobic, or otherwise harmful. As we embrace a growth mindset, we vow to remain transparent, responsive, and welcoming of feedback. We are grateful to our readers for helping us learn.

References

1. Shah SS [@SamirShahMD]. We are still learning. Despite review by a diverse group of team members, we did not appreciate how language in…. April 30, 2021. Accessed May 5, 2021. https://twitter.com/SamirShahMD/status/1388228974573244431
2. Journal of Hospital Medicine [@JHospMedicine]. We want to apologize. We used insensitive language that may be hurtful to Indigenous Americans & others. We are learning…. April 30, 2021. Accessed May 5, 2021. https://twitter.com/JHospMedicine/status/1388227448962052097
3. Kanjee Z, Bilello L. Specialty silos in medicine: the good, the bad, and the future. J Hosp Med. Published online May 21, 2021. https://doi.org/10.12788/jhm.3647
4. Lowe C. The trouble with tribe: How a common word masks complex African realities. Learning for Justice. Spring 2001. Accessed May 5, 2021. https://www.learningforjustice.org/magazine/spring-2001/the-trouble-with-tribe
5. Mungai C. Pundits who decry ‘tribalism’ know nothing about real tribes. Washington Post. January 30, 2019. Accessed May 6, 2021. https://www.washingtonpost.com/outlook/pundits-who-decry-tribalism-know-nothing-about-real-tribes/2019/01/29/8d14eb44-232f-11e9-90cd-dedb0c92dc17_story.html

Article PDF
Author and Disclosure Information

1Division of Hospital Medicine, Cincinnati Children’s Hospital Medical Center and the Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH; 2Department of Medicine, Emory University, Atlanta, GA; 3University of California, San Francisco, and San Francisco Veterans Affairs Medical Center, San Francisco, CA; 4Department of Pediatrics, Tufts Children’s Hospital, Tufts University School of Medicine, Boston, MA.

Disclosures
The authors have no conflicts to disclose.

Issue
Journal of Hospital Medicine 16(6)
Publications
Topics
Page Number
325. Published Online First May 21, 2021
Sections
Author and Disclosure Information

1Division of Hospital Medicine, Cincinnati Children’s Hospital Medical Center and the Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH; 2Department of Medicine, Emory University, Atlanta, GA; 3University of California, San Francisco, and San Francisco Veterans Affairs Medical Center, San Francisco, CA; 4Department of Pediatrics, Tufts Children’s Hospital, Tufts University School of Medicine, Boston, MA.

Disclosures
The authors have no conflicts to disclose.

Author and Disclosure Information

1Division of Hospital Medicine, Cincinnati Children’s Hospital Medical Center and the Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH; 2Department of Medicine, Emory University, Atlanta, GA; 3University of California, San Francisco, and San Francisco Veterans Affairs Medical Center, San Francisco, CA; 4Department of Pediatrics, Tufts Children’s Hospital, Tufts University School of Medicine, Boston, MA.

Disclosures
The authors have no conflicts to disclose.

Article PDF
Article PDF
Related Articles

We recently published an article in our Leadership & Professional Development series titled “Tribalism: The Good, the Bad, and the Future.” Despite pre- and post-acceptance manuscript review and discussion by a diverse and thoughtful team of editors, we did not appreciate how particular language in this article would be hurtful to some communities. We also promoted the article using the hashtag “tribalism” in a journal tweet. Shortly after we posted the tweet, several readers on social media reached out with constructive feedback on the prejudicial nature of this terminology. Within hours of receiving this feedback, our editorial team met to better understand our error, and we made the decision to immediately retract the manuscript. We also deleted the tweet and issued an apology referencing a screenshot of the original tweet.1,2 We have republished the original article with appropriate language.3 Tweets promoting the new article will incorporate this new language.

From this experience, we learned that the words “tribe” and “tribalism” have no consistent meaning, are associated with negative historical and cultural assumptions, and can promote misleading stereotypes.4 The term “tribe” became popular as a colonial construct to describe forms of social organization considered “uncivilized” or “primitive.”5 In using the term “tribe” to describe members of medical communities, we ignored the complex and dynamic identities of Native American, African, and other Indigenous Peoples and the history of their oppression.

The intent of the original article was to highlight how being part of a distinct medical discipline, such as hospital medicine or emergency medicine, conferred benefits, such as shared identity and social support structure, and caution how this group identity could also lead to nonconstructive partisan behaviors that might not best serve our patients. We recognize that other words more accurately convey our intent and do not cause harm. We used “tribe” when we meant “group,” “discipline,” or “specialty.” We used “tribalism” when we meant “siloed” or “factional.”

This misstep underscores how, even with the best intentions and diverse teams, microaggressions can happen. We accept responsibility for this mistake, and we will continue to do the work of respecting and advocating for all members of our community. To minimize the likelihood of future errors, we are developing a systematic process to identify language within manuscripts accepted for publication that may be racist, sexist, ableist, homophobic, or otherwise harmful. As we embrace a growth mindset, we vow to remain transparent, responsive, and welcoming of feedback. We are grateful to our readers for helping us learn.

References

1. Shah SS [@SamirShahMD]. We are still learning. Despite review by a diverse group of team members, we did not appreciate how language in…. April 30, 2021. Accessed May 5, 2021. https://twitter.com/SamirShahMD/status/1388228974573244431
2. Journal of Hospital Medicine [@JHospMedicine]. We want to apologize. We used insensitive language that may be hurtful to Indigenous Americans & others. We are learning…. April 30, 2021. Accessed May 5, 2021. https://twitter.com/JHospMedicine/status/1388227448962052097
3. Kanjee Z, Bilello L. Specialty silos in medicine: the good, the bad, and the future. J Hosp Med. Published online May 21, 2021. https://doi.org/10.12788/jhm.3647
4. Lowe C. The trouble with tribe: How a common word masks complex African realities. Learning for Justice. Spring 2001. Accessed May 5, 2021. https://www.learningforjustice.org/magazine/spring-2001/the-trouble-with-tribe
5. Mungai C. Pundits who decry ‘tribalism’ know nothing about real tribes. Washington Post. January 30, 2019. Accessed May 6, 2021. https://www.washingtonpost.com/outlook/pundits-who-decry-tribalism-know-nothing-about-real-tribes/2019/01/29/8d14eb44-232f-11e9-90cd-dedb0c92dc17_story.html

Issue
Journal of Hospital Medicine 16(6)
Page Number
325. Published Online First May 21, 2021
Display Headline
Microaggressions, Accountability, and Our Commitment to Doing Better
Article Source
© 2021 Society of Hospital Medicine
Correspondence Location
Samir S Shah, MD, MSCE; E-mail: [email protected]; Twitter: @SamirShahMD.

Leveling the Playing Field: Accounting for Academic Productivity During the COVID-19 Pandemic

Article Type
Changed
Thu, 03/18/2021 - 12:56

Professional upheavals caused by the coronavirus disease 2019 (COVID-19) pandemic have affected the academic productivity of many physicians. This is due in part to rapid changes in clinical care and medical education: physician-researchers have been redeployed to frontline clinical care; clinician-educators have been forced to rapidly transition in-person curricula to virtual platforms; and primary care physicians and subspecialists have been forced to transition to telehealth-based practices. In addition to these changes in clinical and educational responsibilities, the COVID-19 pandemic has substantially altered the personal lives of physicians. During the height of the pandemic, clinicians simultaneously wrestled with a lack of available childcare, unexpected home-schooling responsibilities, decreased income, and many other COVID-19-related stresses.1 Additionally, the ever-present “second pandemic” of structural racism, persistent health disparities, and racial inequity has further increased the personal and professional demands facing academic faculty.2

In particular, the pandemic has placed personal and professional pressure on female and minority faculty members. In spite of these pressures, the academic promotions process still requires rigid accounting of scholarly productivity. As the focus of academic practices has shifted to support clinical care during the pandemic, scholarly productivity has suffered for clinicians on the frontline. As a result, academic clinical faculty have expressed significant stress and concern about failing to meet benchmarks for promotion (eg, publications, curriculum development, national presentations). To counter these shifts (and the inherent inequity they create for female clinicians and for men and women who are Black, Indigenous, and/or of color), academic institutions should not only recognize the effects the COVID-19 pandemic has had on faculty, but also adopt immediate solutions to more equitably account for such disruptions to academic portfolios. In this paper, we explore the populations whose career trajectories are most at risk and propose a framework to capture novel and nontraditional contributions while acknowledging the rapid changes the COVID-19 pandemic has brought to academic medicine.

POPULATIONS AT RISK FOR CAREER DISRUPTION

Even before the COVID-19 pandemic, physician mothers, underrepresented racial/ethnic minority groups, and junior faculty were the populations most at risk for career disruptions. The pandemic-driven closure of daycare facilities and schools, the shift to online learning, and the everyday challenges of parenting have taken a significant toll on working parents. Because women tend to carry a disproportionate share of childcare and household responsibilities, these changes have effectively imposed a “mommy tax” on working women.3,4

Faculty who are underrepresented in medicine (particularly Black, Hispanic, Latino, and Native American clinicians) comprise only 8% of the academic medical workforce and currently face a variety of personal and professional challenges.5 This is especially true for Black and Latinx physicians, who have been experiencing an increased COVID-19 burden in their communities while concurrently fighting entrenched structural racism and police violence. In academia, these challenges have worsened because of the “minority tax”—the toll of often-uncompensated extra responsibilities (time or money) placed on minority faculty in the name of achieving diversity. These responsibilities leave underrepresented minority faculty with fewer mentors,6 caring for more underserved populations,7 and performing more clinical care8 than their non-underrepresented peers. Because minority faculty are less likely to hold leadership positions, it is reasonable to conclude that they have been shouldering heavier clinical obligations and facing greater disruption of scholarly work due to the COVID-19 pandemic.

Junior faculty (eg, instructors and assistant professors) also remain professionally vulnerable during the COVID-19 pandemic. Because junior faculty are often more clinically focused and less likely to hold leadership positions than senior faculty, they are more likely to have assumed frontline clinical positions, which come at the expense of academic work. Junior faculty are also at a critical building phase of their academic careers, a time when they benefit from opportunities to share their scholarly work and network at conferences. Unfortunately, many conferences have been canceled or moved to virtual platforms. Given that some institutions may be freezing funding for conferences due to budgetary shortfalls from the pandemic, junior faculty may be particularly at risk if they are unable to present their work. In addition, junior faculty often face disproportionate struggles at home as they balance the demands of work with caring for young children. Given the unique needs of each of these groups, it is especially important to consider intersectionality, or the compounded challenges facing individuals who belong to multiple disproportionately affected groups (eg, a Black female junior faculty member who is also a mother).

THE COVID-19 CURRICULUM VITAE MATRIX

The typical format of a professional curriculum vitae (CV) at most academic institutions does not allow one to document potential disruptions or novel contributions, including those that occurred during the COVID-19 pandemic. As a group of academic clinicians, educators, and researchers whose careers have been affected by the pandemic, we created a COVID-19 CV matrix, a potential framework to serve as a CV supplement. The matrix lets faculty members document their contributions, the disruptions that affected their work, and their caregiving responsibilities during this period, while also giving promotions and tenure committees a rubric to equitably evaluate the pandemic period on an academic CV. Our COVID-19 CV matrix consists of six domains: (1) clinical care, (2) research, (3) education, (4) service, (5) advocacy/media, and (6) social media. These domains encompass traditional and nontraditional contributions made by healthcare professionals during the pandemic (Table). The matrix broadens the ability of both faculty and institutions to determine the actual impact of individuals during the pandemic.

COVID-19 Curriculum Vitae Matrix Supplement

ACCOUNT FOR YOUR (NEW) IMPACT

Throughout the COVID-19 pandemic, academic faculty have been innovative, contributing in novel ways not routinely captured by promotions committees—eg, the digital health researcher who now directs the telemedicine response for their institution, or the health disparities researcher who now leads daily webinar sessions on structural racism for medical students. Other novel contributions include advancing COVID-19 innovations and engaging in media and community advocacy (eg, organizing large-scale donations of equipment and funds to support organizations in need). While such nontraditional contributions may not have been readily captured or thought “CV worthy” in the past, faculty should now account for them. More importantly, promotions committees need to recognize that these pivots or alterations in career paths are not signals of professional failure, but rather evidence of a shifting landscape and of the individual’s response to it. Furthermore, because these pivots often help fulfill an institutional mission, they are impactful.

ACKNOWLEDGE THE DISRUPTION

It is important for promotions and tenure committees to recognize the impact and disruption COVID-19 has had on traditional academic work, acknowledging the time and energy required for a faculty member to make needed adjustments. Doing so enables a leader to better assess how a faculty member’s academic portfolio has been affected. For example, researchers have had to halt studies, medical educators have had to redevelop and transition curricula to virtual platforms, and physicians have had to discontinue clinical quality improvement initiatives due to competing hospital priorities. Faculty members who document such unintentional alterations in their academic career path can show their institution how they have continued to positively influence their field and community during the pandemic. This approach is analogous to the current model of accounting for clinical time when judging faculty members’ scholarly achievement.

The COVID-19 CV matrix can also be annotated to explain the burden of one’s personal situation, which is often “invisible” in the professional environment. For example, many physicians have had to assume additional childcare responsibilities, tend to sick family members and friends, or recover from illness themselves. A faculty member may also have a partner who is an essential worker, who had to self-isolate due to COVID-19 exposure or illness, or who has been working overtime due to high patient volumes.

INSTITUTIONAL RESPONSE

How can institutions respond to the altered academic landscape caused by the COVID-19 pandemic? Promotions committees typically have two main tools at their disposal: adjusting the tenure clock or adjusting the benchmarks. Extending the period of time available to qualify for tenure is commonplace in the “publish-or-perish” academic tracks of university research professors. Clock adjustments are typically granted to faculty following the birth of a child or for other specific family- or health-related hardships, in accordance with the Family and Medical Leave Act. Unfortunately, tenure-clock extensions for female faculty members can exacerbate gender inequity: data on tenure-clock extensions show a higher rate of tenure granted to male faculty than to female faculty.9 For this reason, it is also important to explore modifications to benchmark criteria. This could be accomplished by broadening the criteria for promotion and recognizing that impact occurs in many forms, thereby enabling faculty to meet benchmarks through diverse contributions. It could also occur by examining the trajectory of an individual within a promotion pathway before it was disrupted. To avoid exacerbating social and gender inequities within academia, institutions should use these professional levers and create new ones to provide parity and equality across the promotional playing field. While the CV matrix openly acknowledges the disruptions and tangents the COVID-19 pandemic has imposed on academic careers, it remains important for academic institutions to recognize these disruptions and innovate the manner in which they acknowledge scholarly contributions.

Conclusion

While academic rigidity and known social taxes (the minority and mommy taxes) are particularly problematic in the current climate, these issues have always been at play in evaluating academic success. Better documentation of novel contributions, disruptions, caregiving, and other challenges can enable more holistic and timely professional advancement for all faculty, regardless of sex, race, ethnicity, or social background. Ultimately, we hope this framework initiates further conversations among academic institutions about how to define productivity in an age when journal impact factor and number of publications are not the fullest measures of one’s impact on a field.

References

1. Jones Y, Durand V, Morton K, et al; ADVANCE PHM Steering Committee. Collateral damage: how covid-19 is adversely impacting women physicians. J Hosp Med. 2020;15(8):507-509. https://doi.org/10.12788/jhm.3470
2. Manning KD. When grief and crises intersect: perspectives of a black physician in the time of two pandemics. J Hosp Med. 2020;15(9):566-567. https://doi.org/10.12788/jhm.3481
3. Cohen P, Hsu T. Pandemic could scar a generation of working mothers. New York Times. Published June 3, 2020. Updated June 30, 2020. Accessed November 11, 2020. https://www.nytimes.com/2020/06/03/business/economy/coronavirus-working-women.html
4. Cain Miller C. Nearly half of men say they do most of the home schooling. 3 percent of women agree. New York Times. Published May 6, 2020. Updated May 8, 2020. Accessed November 11, 2020. https://www.nytimes.com/2020/05/06/upshot/pandemic-chores-homeschooling-gender.html
5. Rodríguez JE, Campbell KM, Pololi LH. Addressing disparities in academic medicine: what of the minority tax? BMC Med Educ. 2015;15:6. https://doi.org/10.1186/s12909-015-0290-9
6. Lewellen-Williams C, Johnson VA, Deloney LA, Thomas BR, Goyol A, Henry-Tillman R. The POD: a new model for mentoring underrepresented minority faculty. Acad Med. 2006;81(3):275-279. https://doi.org/10.1097/00001888-200603000-00020
7. Pololi LH, Evans AT, Gibbs BK, Krupat E, Brennan RT, Civian JT. The experience of minority faculty who are underrepresented in medicine, at 26 representative U.S. medical schools. Acad Med. 2013;88(9):1308-1314. https://doi.org/10.1097/acm.0b013e31829eefff
8. Richert A, Campbell K, Rodríguez J, Borowsky IW, Parikh R, Colwell A. ACU workforce column: expanding and supporting the health care workforce. J Health Care Poor Underserved. 2013;24(4):1423-1431. https://doi.org/10.1353/hpu.2013.0162
9. Woitowich NC, Jain S, Arora VM, Joffe H. COVID-19 threatens progress toward gender equity within academic medicine. Acad Med. 2020. https://doi.org/10.1097/acm.0000000000003782

Author and Disclosure Information

1Department of Medicine, University of Chicago, Chicago, Illinois; 2Department of Medicine, University of California, San Francisco, California; 3San Francisco VA Medical Center, San Francisco, California; 4Division of Hospital Medicine, Department of Medicine, Oregon Health & Science University, Portland, Oregon; 5St. Joseph Health Medical Group, Santa Rosa, California; 6Division of Hematology and Oncology, Department of Medicine, University of Illinois, Chicago, Illinois; 7ADvancing Vitae And Novel Contributions for Everyone (ADVANCE), Santa Rosa, California.

Disclosures

The authors reported they have nothing to disclose.

Funding

Dr Wray is a US federal government employee and prepared the paper as part of his official duties.

Issue
Journal of Hospital Medicine 16(2)
Page Number
120-123. Published Online First January 20, 2021


Conclusion

While academic rigidity and known social taxes (minority and mommy taxes) are particularly problematic in the current climate, these issues have always been at play in evaluating academic success. Improved documentation of novel contributions, disruptions, caregiving, and other challenges can enable more holistic and timely professional advancement for all faculty, regardless of their sex, race, ethnicity, or social background. Ultimately, we hope this framework initiates further conversations among academic institutions on how to define productivity in an age where journal impact factor or number of publications is not the fullest measure of one’s impact in their field.

Professional upheavals caused by the coronavirus disease 2019 (COVID-19) pandemic have affected the academic productivity of many physicians. This is due in part to rapid changes in clinical care and medical education: physician-researchers have been redeployed to frontline clinical care; clinician-educators have been forced to rapidly transition in-person curricula to virtual platforms; and primary care physicians and subspecialists have had to shift to telehealth-based practices. In addition to these changes in clinical and educational responsibilities, the COVID-19 pandemic has substantially altered the personal lives of physicians. During the height of the pandemic, clinicians simultaneously wrestled with a lack of available childcare, unexpected home-schooling responsibilities, decreased income, and many other COVID-19-related stresses.1 Additionally, the ever-present “second pandemic” of structural racism, persistent health disparities, and racial inequity has further increased the personal and professional demands facing academic faculty.2

In particular, the pandemic has placed personal and professional pressure on female and minority faculty members. In spite of these pressures, the academic promotions process still requires rigid accounting of scholarly productivity. As the focus of academic practices has shifted to support clinical care during the pandemic, scholarly productivity has suffered for clinicians on the frontline. As a result, academic clinical faculty have expressed significant stress and concern about failing to meet benchmarks for promotion (eg, publications, curriculum development, national presentations). To counter these shifts (and the inherent inequity they create for female clinicians and for men and women who are Black, Indigenous, and/or of color), academic institutions should not only recognize the effects the COVID-19 pandemic has had on faculty, but also adopt immediate solutions to more equitably account for such disruptions to academic portfolios. In this paper, we explore the populations whose career trajectories are most at risk and propose a framework to capture novel and nontraditional contributions while also acknowledging the rapid changes the COVID-19 pandemic has brought to academic medicine.

POPULATIONS AT RISK FOR CAREER DISRUPTION

Even before the COVID-19 pandemic, physician mothers, members of underrepresented racial/ethnic minority groups, and junior faculty were the most at risk for career disruptions. The pandemic-driven closure of daycare facilities and schools and the shift to online learning, layered on the everyday challenges of parenting, have taken a significant toll on the lives of working parents. Because women tend to carry a disproportionate share of childcare and household responsibilities, these changes have effectively imposed a “mommy tax” on working women.3,4

Faculty who are underrepresented in medicine (particularly Black, Hispanic, Latino, and Native American clinicians) comprise only 8% of the academic medical workforce and currently face a variety of personal and professional challenges.5 This is especially true for Black and Latinx physicians, who have been experiencing an increased COVID-19 burden in their communities while concurrently fighting entrenched structural racism and police violence. In academia, these challenges have worsened because of the “minority tax”—the toll of often uncompensated extra responsibilities (time or money) placed on minority faculty in the name of achieving diversity. These responsibilities have unintended consequences: minority faculty have fewer mentors,6 care for more underserved populations,7 and perform more clinical care8 than non-underrepresented minority faculty. Because minority faculty are less likely to hold leadership positions, it is reasonable to conclude that they have shouldered heavier clinical obligations and faced greater disruption of their scholarly work due to the COVID-19 pandemic.

Junior faculty (eg, instructors and assistant professors) also remain professionally vulnerable during the COVID-19 pandemic. Because junior faculty are often more clinically focused and less likely to hold leadership positions than senior faculty, they are more likely to have assumed frontline clinical positions, which come at the expense of academic work. Junior faculty are also at a critical building phase in their academic careers—a time when they benefit from opportunities to share their scholarly work and network at conferences. Unfortunately, many conferences have been canceled or moved to virtual platforms. Given that some institutions may be freezing academic funding for conferences due to budgetary shortfalls from the pandemic, junior faculty may be particularly at risk if they are unable to present their work. In addition, junior faculty often face disproportionate struggles at home as they balance the demands of work and caring for young children. Considering the unique needs of each of these groups, it is especially important to consider intersectionality, or the compounded issues facing individuals who belong to multiple disproportionately affected groups (eg, a Black female junior faculty member who is also a mother).

THE COVID-19 CURRICULUM VITAE MATRIX

The typical format of a professional curriculum vitae (CV) at most academic institutions does not allow one to document potential disruptions or novel contributions, including those that occurred during the COVID-19 pandemic. As a group of academic clinicians, educators, and researchers whose careers have been affected by the pandemic, we created a COVID-19 CV matrix, a potential framework to serve as a supplement for faculty. In this matrix, faculty members may document their contributions, disruptions that affected their work, and caregiving responsibilities during this time period, while also providing a rubric for promotions and tenure committees to equitably evaluate the pandemic period on an academic CV. Our COVID-19 CV matrix consists of six domains: (1) clinical care, (2) research, (3) education, (4) service, (5) advocacy/media, and (6) social media. These domains encompass traditional and nontraditional contributions made by healthcare professionals during the pandemic (Table). This matrix broadens the ability of both faculty and institutions to determine the actual impact of individuals during the pandemic.

COVID-19 Curriculum Vitae Matrix Supplement

ACCOUNT FOR YOUR (NEW) IMPACT

Throughout the COVID-19 pandemic, academic faculty have been innovative, contributing in novel ways not routinely captured by promotions committees—eg, the digital health researcher who now directs the telemedicine response for their institution and the health disparities researcher who now leads daily webinar sessions on structural racism for medical students. Other novel contributions include advancing COVID-19 innovations and engaging in media and community advocacy (eg, organizing large-scale donations of equipment and funds to support organizations in need). While such nontraditional contributions may not have been readily captured or thought “CV worthy” in the past, faculty should now account for them. More importantly, promotions committees need to recognize that these pivots or alterations in career paths are not signals of professional failure, but rather evidence of a shifting landscape and of the individual’s response to it. Furthermore, because these pivots often help fulfill an institutional mission, they are impactful.

ACKNOWLEDGE THE DISRUPTION

It is important for promotions and tenure committees to recognize the impact and disruption COVID-19 has had on traditional academic work, acknowledging the time and energy required for a faculty member to make needed work adjustments. This enables a leader to better assess how a faculty member’s academic portfolio has been affected. For example, researchers have had to halt studies, medical educators have had to redevelop and transition curricula to virtual platforms, and physicians have had to discontinue clinical quality improvement initiatives due to competing hospital priorities. Faculty members who document such unintentional alterations in their academic career path can show their institution how they have continued to positively influence their field and community during the pandemic. This approach is analogous to the current model of accounting for clinical time when judging faculty members’ scholarly achievement.

The COVID-19 CV matrix can also be annotated to explain the burden of one’s personal situation, which is often “invisible” in the professional environment. For example, many physicians have had to assume additional childcare responsibilities or tend to sick family members, friends, or even themselves. A faculty member may also have a partner who is an essential worker, who had to self-isolate due to COVID-19 exposure or illness, or who has been working overtime due to high patient volumes.

INSTITUTIONAL RESPONSE

How can institutions respond to the altered academic landscape caused by the COVID-19 pandemic? Promotions committees typically have two main tools at their disposal: adjusting the tenure clock or adjusting the benchmarks. Extending the period of time available to qualify for tenure is commonplace in the “publish-or-perish” academic tracks of university research professors. Clock adjustments are typically granted to faculty following the birth of a child or for other specific family- or health-related hardships, in accordance with the Family and Medical Leave Act. Unfortunately, tenure-clock extensions can exacerbate gender inequity: data on such extensions show a higher rate of tenure granted to male faculty than to female faculty.9 For this reason, it is also important to explore adjustments or modifications to benchmark criteria. This could be accomplished by broadening the criteria for promotion and recognizing that impact occurs in many forms, thereby enabling more faculty to meet benchmarks. It could also occur by examining the trajectory of an individual within a promotion pathway before it was disrupted to determine impact. To avoid exacerbating social and gender inequities within academia, institutions should use these professional levers, and create new ones, to provide parity and equality across the promotional playing field. While the CV matrix openly acknowledges the disruptions and detours the COVID-19 pandemic has imposed on academic careers, it remains important for academic institutions to recognize these disruptions and innovate the manner in which they acknowledge scholarly contributions.

Conclusion

While academic rigidity and known social taxes (minority and mommy taxes) are particularly problematic in the current climate, these issues have always been at play in evaluating academic success. Improved documentation of novel contributions, disruptions, caregiving, and other challenges can enable more holistic and timely professional advancement for all faculty, regardless of their sex, race, ethnicity, or social background. Ultimately, we hope this framework initiates further conversations among academic institutions on how to define productivity in an age when journal impact factor and number of publications are not the fullest measure of one’s impact on the field.

References

1. Jones Y, Durand V, Morton K, et al; ADVANCE PHM Steering Committee. Collateral damage: how covid-19 is adversely impacting women physicians. J Hosp Med. 2020;15(8):507-509. https://doi.org/10.12788/jhm.3470
2. Manning KD. When grief and crises intersect: perspectives of a black physician in the time of two pandemics. J Hosp Med. 2020;15(9):566-567. https://doi.org/10.12788/jhm.3481
3. Cohen P, Hsu T. Pandemic could scar a generation of working mothers. New York Times. Published June 3, 2020. Updated June 30, 2020. Accessed November 11, 2020. https://www.nytimes.com/2020/06/03/business/economy/coronavirus-working-women.html
4. Cain Miller C. Nearly half of men say they do most of the home schooling. 3 percent of women agree. Published May 6, 2020. Updated May 8, 2020. Accessed November 11, 2020. New York Times. https://www.nytimes.com/2020/05/06/upshot/pandemic-chores-homeschooling-gender.html
5. Rodríguez JE, Campbell KM, Pololi LH. Addressing disparities in academic medicine: what of the minority tax? BMC Med Educ. 2015;15:6. https://doi.org/10.1186/s12909-015-0290-9
6. Lewellen-Williams C, Johnson VA, Deloney LA, Thomas BR, Goyol A, Henry-Tillman R. The POD: a new model for mentoring underrepresented minority faculty. Acad Med. 2006;81(3):275-279. https://doi.org/10.1097/00001888-200603000-00020
7. Pololi LH, Evans AT, Gibbs BK, Krupat E, Brennan RT, Civian JT. The experience of minority faculty who are underrepresented in medicine, at 26 representative U.S. medical schools. Acad Med. 2013;88(9):1308-1314. https://doi.org/10.1097/acm.0b013e31829eefff
8. Richert A, Campbell K, Rodríguez J, Borowsky IW, Parikh R, Colwell A. ACU workforce column: expanding and supporting the health care workforce. J Health Care Poor Underserved. 2013;24(4):1423-1431. https://doi.org/10.1353/hpu.2013.0162
9. Woitowich NC, Jain S, Arora VM, Joffe H. COVID-19 threatens progress toward gender equity within academic medicine. Acad Med. 2020. https://doi.org/10.1097/acm.0000000000003782

Issue
Journal of Hospital Medicine 16(2)
Page Number
120-123. Published Online First January 20, 2021

© 2021 Society of Hospital Medicine

Correspondence Location
Vineet M. Arora MD, MAPP; Email: [email protected]; Telephone: 773-702-8157; Twitter: @futuredocs.

Leadership & Professional Development: From Seed to Fruit—How to Get Your Academic Project Across the Finish Line


“Our goals can only be reached through the vehicle of a plan. There is no other route to success.”

—Pablo Picasso

Whether it be a research manuscript, quality improvement (QI) initiative, or educational curriculum, busy clinicians often struggle to move projects past the idea stage. Barriers to completion, such as a busy clinical schedule or lack of experience and mentorship, are well known. Importantly, these projects serve as “academic currency” used for promotion and advancement, and they create generalizable knowledge that can help others improve clinical practice or operational processes. Those who successfully complete their academic projects frequently follow a well-structured path. Consider the following principles to get your idea across the finish line:

Find a blueprint. Most academic projects, whether a research paper, QI project, or new curriculum, follow a common underlying formula. Before starting, do your background research. Is there a paper or method that resembles your desired approach? Is there a question or concept that caught your eye? Using a blueprint from existing evidence allows you to identify important structures, phrases, and terms to inform your manuscript. Once you have identified the blueprint, define your project and approach.

Find a mentor. While career mentorship is important for professional growth, you first need a project mentor. Being a project mentor is a smaller ask for a more senior colleague than being a career mentor, and it’s a great way to test-drive a potential long-term working relationship. Moreover, the successful completion of one project can potentially lead to further opportunities, and perhaps even a long-term career mentor.

Take initiative. In business, there is a common adage: “Never bring a problem to your boss without a proposed solution in hand.”1 In academics, consider: “Never show up with an idea without bringing a proposal.” By bringing a defined proposal to the conversation, your inquiry is more likely to get a response because (a) it is not a blind-ask and (b) it creates a foundation to build on. This is analogous to an early learner presenting their assessment and plan in the clinical setting; you don’t stop at the diagnosis (your idea) without having a plan for how you want to manage it.

Get an accountability partner. Publicly committing to a goal increases the probability of accomplishing your task to 65%, and having an accountability partner increases it to 95%.2 An accountability partner serves as a coach who helps you accomplish a task. This individual can be a colleague, spouse, or friend and is typically not part of the project. By leveraging peer pressure, you increase the odds of successfully completing your project.

Carve out dedicated time. The entrepreneur and author Jim Rohn once said, “Discipline is the bridge between goals and accomplishments.”3 To complete a project, you have to make the time to do the work. While many believe that successful writers sit and write for hours on end, many famous writers only wrote for a few hours at a time—but they did so consistently.4 Create your routine by setting aside consistent, defined time to work on your project. To extract the most value, select a time of the day in which you work best (eg, early morning). Then, set a timer for 30 minutes and write—or work.


Because you are a busy clinician with constant demands on your time, having the skillset to reliably turn an idea into “academic currency” is a necessity. Having a plan and following these principles will help you earn that academic coin.

References

1. Gallo A. The right way to bring a problem to your boss. Harvard Business Review. December 5, 2014. Accessed April 11, 2020. https://hbr.org/2014/12/the-right-way-to-bring-a-problem-to-your-boss

2. Hardy B. Accountability partners are great. But “success” partners will change your life. May 14, 2019. Accessed April 11, 2020. Medium. https://medium.com/@benjaminhardy/accountability-partners-are-great-but-...

3. Rohn J. 10 unforgettable quotes by Jim Rohn. Accessed June 20, 2020. https://www.success.com/10-unforgettable-quotes-by-jim-rohn/

4. Clear J. Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones. Avery; 2018. https://jamesclear.com/atomic-habits

Author and Disclosure Information

1Department of Medicine, University of California, San Francisco, San Francisco, California; 2Division of Hospital Medicine, San Francisco Veterans Affairs Medical Center, San Francisco, California.

Disclosures

The authors report having no conflicts of interest.

Issue
Journal of Hospital Medicine 16(1)
Page Number
J. Hosp. Med. 2021 January;16(1):34. | doi: 10.12788/jhm.3486

“Our goals can only be reached through the vehicle of a plan. There is no other route to success.”

—Pablo Picasso

Whether it be a research manuscript, quality improvement (QI) initiative, or educational curriculum, busy clinicians often struggle getting projects past the idea stage. Barriers to completion, such as a busy clinical schedule or lack of experience and mentorship, are well known. Importantly, these projects serve as “academic currency” used for promotion and advancement and also create generalizable knowledge, which can help others improve clinical practice or operational processes. Those who are successful in completing their academic project frequently follow a well-structured path. Consider the following principles to get your idea across the finish line:

Find a blueprint. Among most academic projects, whether a research paper, QI project or new curriculum, an underlying formula is commonly applied. Before starting, do your background research. Is there a paper or method that resembles your desired approach? Is there a question or concept that caught your eye? Using a blueprint from existing evidence allows you to identify important structures, phrases, and terms to inform your manuscript. Once you have identified the blueprint, define your project and approach.

Find a mentor. While career mentorship is important for professional growth, you first need a project mentor. Being a project mentor is a smaller ask for a more senior colleague than being a career mentor, and it’s a great way to test-drive a potential long-term working relationship. Moreover, the successful completion of one project can potentially lead to further opportunities, and perhaps even a long-term career mentor.

Take initiative. In business, there is a common adage: “Never bring a problem to your boss without a proposed solution in hand.”1 In academics, consider: “Never show up with an idea without bringing a proposal.” By bringing a defined proposal to the conversation, your inquiry is more likely to get a response because (a) it is not a blind-ask and (b) it creates a foundation to build on. This is analogous to an early learner presenting their assessment and plan in the clinical setting; you don’t stop at the diagnosis (your idea) without having a plan for how you want to manage it.

Get an accountability partner. Publicly committing to a goal increases the probability of accomplishing your task by 65%, while having an accountability partner increases that by 95%.2 An accountability partner serves as a coach to help you accomplish a task. This individual can be a colleague, spouse, or friend and is typically not a part of the project. By leveraging peer pressure, you increase the odds of successfully completing your project.

Carve out dedicated time. The entrepreneur and author Jim Rohn once said, “Discipline is the bridge between goals and accomplishments.”3 To complete a project, you have to make the time to do the work. While many believe that successful writers sit and write for hours on end, many famous writers only wrote for a few hours at a time—but they did so consistently.4 Create your routine by setting aside consistent, defined time to work on your project. To extract the most value, select a time of the day in which you work best (eg, early morning). Then, set a timer for 30 minutes and write—or work.

 

 

Because you are a busy clinician with constant demands on your time, having the skillset to reliably turn an idea into “academic currency” is a necessity. Having a plan and following these principles will help you earn that academic coin.

“Our goals can only be reached through the vehicle of a plan. There is no other route to success.”

—Pablo Picasso

Whether it be a research manuscript, quality improvement (QI) initiative, or educational curriculum, busy clinicians often struggle getting projects past the idea stage. Barriers to completion, such as a busy clinical schedule or lack of experience and mentorship, are well known. Importantly, these projects serve as “academic currency” used for promotion and advancement and also create generalizable knowledge, which can help others improve clinical practice or operational processes. Those who are successful in completing their academic project frequently follow a well-structured path. Consider the following principles to get your idea across the finish line:

Find a blueprint. Among most academic projects, whether a research paper, QI project or new curriculum, an underlying formula is commonly applied. Before starting, do your background research. Is there a paper or method that resembles your desired approach? Is there a question or concept that caught your eye? Using a blueprint from existing evidence allows you to identify important structures, phrases, and terms to inform your manuscript. Once you have identified the blueprint, define your project and approach.

Find a mentor. While career mentorship is important for professional growth, you first need a project mentor. Being a project mentor is a smaller ask for a more senior colleague than being a career mentor, and it’s a great way to test-drive a potential long-term working relationship. Moreover, the successful completion of one project can potentially lead to further opportunities, and perhaps even a long-term career mentor.

Take initiative. In business, there is a common adage: “Never bring a problem to your boss without a proposed solution in hand.”1 In academics, consider: “Never show up with an idea without bringing a proposal.” By bringing a defined proposal to the conversation, your inquiry is more likely to get a response because (a) it is not a blind-ask and (b) it creates a foundation to build on. This is analogous to an early learner presenting their assessment and plan in the clinical setting; you don’t stop at the diagnosis (your idea) without having a plan for how you want to manage it.

Get an accountability partner. Publicly committing to a goal increases the probability of accomplishing your task by 65%, while having an accountability partner increases that by 95%.2 An accountability partner serves as a coach to help you accomplish a task. This individual can be a colleague, spouse, or friend and is typically not a part of the project. By leveraging peer pressure, you increase the odds of successfully completing your project.

Carve out dedicated time. The entrepreneur and author Jim Rohn once said, “Discipline is the bridge between goals and accomplishments.”3 To complete a project, you have to make the time to do the work. While many believe that successful writers sit and write for hours on end, many famous writers only wrote for a few hours at a time—but they did so consistently.4 Create your routine by setting aside consistent, defined time to work on your project. To extract the most value, select a time of the day in which you work best (eg, early morning). Then, set a timer for 30 minutes and write—or work.

 

 

Because you are a busy clinician with constant demands on your time, having the skillset to reliably turn an idea into “academic currency” is a necessity. Having a plan and following these principles will help you earn that academic coin.

References

1. Gallo A. The right way to bring a problem to your boss. Harvard Business Review. December 5, 2014. Accessed April 11, 2020. https://hbr.org/2014/12/the-right-way-to-bring-a-problem-to-your-boss

2. Hardy B. Accountability partners are great. But “success” partners will change your life. Medium. May 14, 2019. Accessed April 11, 2020. https://medium.com/@benjaminhardy/accountability-partners-are-great-but-...

3. Rohn J. 10 unforgettable quotes by Jim Rohn. Accessed June 20, 2020. https://www.success.com/10-unforgettable-quotes-by-jim-rohn/

4. Clear J. Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones. Avery; 2018. https://jamesclear.com/atomic-habits

Issue
Journal of Hospital Medicine 16(1)
Page Number
J. Hosp. Med. 2021 January;16(1):34. | doi: 10.12788/jhm.3486

© 2021 Society of Hospital Medicine

Correspondence Location
Sharmin Shekarchian, MD
Email: [email protected]; Telephone: 415-221-4810 x22084; Twitter: @sharminzi.

Hospital Medicine Management in the Time of COVID-19: Preparing for a Sprint and a Marathon


The pandemic of coronavirus disease 2019 (COVID-19) is confronting the modern world like nothing else before. With over 20 million individuals expected to require hospitalization in the US, this health crisis may become a generation-defining moment for healthcare systems and the field of hospital medicine.1 The specific challenges facing hospital medicine are comparable to running a sprint and a marathon—at the same time. For the sprint underway, hospitalists must learn to respond to a rapidly changing environment in which critical decisions are made within hours and days. At the same time, hospitalists need to plan for the marathon of increased clinical needs over the coming months, the possibility of burnout, and concerns about staff well-being. Although runners typically focus on either the sprint or the marathon, healthcare systems and hospital medicine providers will need to simultaneously prepare for both types of races.

GET READY FOR THE SPRINT

Over the past several weeks, hospital medicine leaders have been rapidly responding to an evolving crisis. Leaders and clinicians are quickly learning how to restructure clinical operations, negotiate the short supply of personal protective equipment (PPE), and manage delays in COVID-19 testing. In these areas, our hospitalist group has experienced a steep learning curve. In addition to the strategies outlined in the Table, we will share here our experiences and insights on managing and preparing for the COVID-19 pandemic.

Communication Is Central

During the sprint, focused, regular communication is imperative to ameliorate anxiety and fear. A study of crisis communication after 9/11 found that, for employees, good communication from leadership was one of the most valued factors.2 Communications experts also note that, in times of crisis, leaders have a special role in communication, specifically around demystifying the situation, providing hope, and maintaining transparency.3

Mental bandwidth may be limited in a stressful environment, so efforts should be taken to maximize the value of each communication. Information on hospital metrics should be provided regularly, including the number of COVID-19 cases, the status of clinical services and staffing, hospital capacity, and resource availability.4 Although the ubiquity and ease of email is convenient, recognize that providers are likely receiving email updates from multiple layers within your healthcare organization. To guard against losing important information, we use the same templated format for daily email updates with changes highlighted, which allows busy clinicians to digest pertinent information easily.5 Finally, consider having a single individual be responsible for collating COVID-19–related emails sent to your group. Although clinicians may want to share the most recent studies or their clinical experiences with a group email, instead have them send this information to a single individual who can organize these materials and share them on a regular basis.

To keep two-way communication channels open in a busy, asynchronous environment, consider having a centralized shared document in which providers can give real-time feedback to capture on-the-ground experiences or share questions they would like answered. Within our group, we found that centralizing our conversation in a shared document eliminated redundancy, focused our meetings, and kept everyone up to date. Additionally, regularly scheduled meetings may need to be adapted to a remote format (eg, Zoom, WebEx) as clinicians are asked to work from home when not on clinical service. Finally, recognize that virtual meetings require a different skill set than that required by in-person meetings, including reestablishment of social norms and technology preparation.6


Optimize Your Staffing

Hospital volumes could increase to as high as 270% of current hospital bed capacities during this pandemic.1 This surge is further complicated by the effort involved in caring for these patients, given their increased medical complexity, the use of new protocols, and the extra time needed to update staff and family. As the workload intensifies, staffing models and operations will also need to adapt.

First, optimize your inpatient resources based on the changes your hospital system is making. For instance, as elective surgeries were cancelled, we dissolved our surgical comanagement and consult services to better accommodate our hospitals’ needs. Further, consider using advanced practice providers (eg, physician assistants and nurse practitioners) released from their clinical duties to help with inpatient care in the event of a surge. If your hospital has trainees (eg, residents or fellows), consider reassigning those whose rotations have been postponed to newly created inpatient teams; trainees often have strong institutional knowledge and understanding of hospital protocols and resources.

Second, use hospitalists for their most relevant skills. Hospitalists are pluripotent clinicians who are comfortable with high-acuity patients and can fit into a myriad of clinical positions. The initial instinct at our institution was to mobilize hospitalists across all areas of increasing needs in the hospital (eg, screening clinics,7 advice phone lines for patients, or in the Emergency Department), but we quickly recognized that the hospitalist group is a finite resource. We focused our hospitalists’ clinical work on the expanding inpatient needs and allowed other outpatient or procedure-based specialties that have less inpatient experience to fill the broader institutional gaps.

Finally, consider long-term implications of staffing decisions. Leaders are making challenging coverage decisions that can affect the morale and autonomy of staff. Does backup staffing happen on a volunteer basis? Who fills the need—those with less clinical time or those with fewer personal obligations? When a staffing model is challenged and your group is making such decisions, engaged communication again becomes paramount.

PREPARE FOR THE MARATHON

Experts believe that we are only at the beginning of this crisis, one for which we don’t know what the end looks like or when it will come. With this in mind, hospital medicine leadership must plan for the long-term implications of the lengthy race ahead. Recognizing that morale, motivation, and burnout will be issues to deal with on the horizon, a focus on sustainability and wellness will become increasingly important as the marathon continues. To date, we’ve found the following principles to be helpful.

Delegate Responsibilities

Hospitals will not be able to survive COVID-19 through the efforts of single individuals. Instead, consider creating “operational champion” roles for frontline clinicians. These individuals can lead in specific areas (eg, PPE, updates on COVID-19 testing, discharge protocols) and act as conduits for information, updates, and resources for your group. At our institution, such operational meetings and activities take hours out of each day. By creating a breadth of leadership roles, our group has spread the operational workload while still allowing clinicians to care for patients, avoid burnout, and build autonomy and opportunities for both personal and professional growth. Although at most institutions these positions are temporary and not compensated with salary or time, the contribution to the group should be recognized both now and in the future.


Focus on Wellness

Providers are battling a laundry list of both clinical and personal stressors. The Centers for Disease Control and Prevention has already recognized that stress and mental health are going to be large hurdles for both patients and providers during this crisis.8 From the beginning, hospitalist leadership should be attuned to physician wellness and be aware that burnout, mental and physical exhaustion, and the possibility of contracting COVID-19 will be issues in the coming weeks and months. Volunteerism is built into the physician’s work ethic, but we must be mindful about its cost for long-term staffing demands. In addition, scarce medical resources add an additional moral strain for clinicians as they face tough allocation decisions, as we’ve seen with our Italian colleagues.9

As regular meetings around COVID-19 have become commonplace, we’ve made sure to set aside defined time for staff to discuss and reflect on their experiences. Doing so has allowed our clinicians to feel heard and to acknowledge the difficulties they are facing in their clinical duties. Leaders should also consider frequent check-ins with individual providers. At our institution, the first positive COVID-19 patient did not radically change any protocol that was in place, but a check-in with the hospitalist on service that day proved helpful for a debrief and processing opportunity. Individual conversations can help those on the front lines feel supported and remind them they are not operating alone in an anonymous vacuum.

Continue by celebrating small victories because this marathon is not going to end with an obvious finish line or a singular moment in which everyone can rejoice. A negative test, a patient with a good outcome, and a donation of PPE are all opportunities to celebrate. It may be what keeps us going when there is no end in sight. We have relied on these celebrations and moments of levity as an integral part of our regular group meetings.

CONCLUSION

At the end of this pandemic, just as we hope that our social distancing feels like an overreaction, we similarly hope that our sprint to build capacity ends up being unnecessary as well. As we wrote this Perspectives piece, uncertainty about the extent, length, and impact of this pandemic still existed. By the time it is published it may be that the sprint is over, and the marathon is beginning. Or, if our wildest hopes come true, there will be no marathon to run at all.

References

1. Tsai TC, Jacobson BH, Jha AK. American Hospital Capacity and Projected Need for COVID-19. Health Affairs. March 17, 2020. https://www.healthaffairs.org/do/10.1377/hblog20200317.457910/full/. Accessed April 1, 2020.
2. Argenti PA. Crisis communication: lessons from 9/11. Harvard Business Review. December 2002. https://hbr.org/2002/12/crisis-communication-lessons-from-911. Accessed April 2, 2020.
3. Argenti PA. Communicating through the coronavirus crisis. Harvard Business Review. March 2020. https://hbr.org/2020/03/communicating-through-the-coronavirus-crisis. Accessed April 2, 2020.
4. Chopra V, Toner E, Waldhorn R, Washer L. How should US hospitals prepare for COVID-19? Ann Intern Med. 2020. https://doi.org/10.7326/M20-0907.
5. National Institutes of Health. Formatting and Visual Clarity. Published July 1, 2015. Updated March 27, 2017. https://www.nih.gov/institutes-nih/nih-office-director/office-communications-public-liaison/clear-communication/plain-language/formatting-visual-clarity. Accessed April 2, 2020.
6. Frisch B, Greene C. What it takes to run a great virtual meeting. Harvard Business Review. March 2020. https://hbr.org/2020/03/what-it-takes-to-run-a-great-virtual-meeting. Accessed April 2, 2020.
7. Yan W. Coronavirus testing goes mobile in Seattle. New York Times. March 13, 2020. https://www.nytimes.com/2020/03/13/us/coronavirus-testing-drive-through-seattle.html. Accessed April 2, 2020.
8. Centers for Disease Control and Prevention. Coronavirus Disease 2019 (COVID-19). Stress and Coping. February 11, 2020. https://www.cdc.gov/coronavirus/2019-ncov/prepare/managing-stress-anxiety.html. Accessed April 2, 2020.
9. Rosenbaum L. Facing Covid-19 in Italy—ethics, logistics, and therapeutics on the epidemic’s front line. N Engl J Med. 2020. https://doi.org/10.1056/NEJMp2005492.

Author and Disclosure Information

1Department of Medicine, University of California, San Francisco, California; 2Division of Hospital Medicine, San Francisco Veterans Affairs Medical Center, San Francisco, California.

Disclosures

The authors have no conflicts to report.

Issue
Journal of Hospital Medicine 15(5)
Page Number
305-307. Published online first April 8, 2020

The pandemic of coronavirus disease 2019 (COVID-19) is confronting the modern world like nothing else before. With over 20 million individuals expected to require hospitalization in the US, this health crisis may become a generation-defining moment for healthcare systems and the field of hospital medicine.1 The specific challenges facing hospital medicine are comparable to running a sprint and a marathon—at the same time. For the sprint underway, hospitalists must learn to respond to a rapidly changing environment in which critical decisions are made within hours and days. At the same time, hospitalists need to plan for the marathon of increased clinical needs over the coming months, the possibility of burnout, and concerns about staff well-­being. Although runners typically focus on either the sprint or the marathon, healthcare systems and hospital medicine providers will need to simultaneously prepare for both types of races.

GET READY FOR THE SPRINT

Over the past several weeks, hospital medicine leaders have been rapidly responding to an evolving crisis. Leaders and clinicians are quickly learning how to restructure clinical operations, negotiate the short supply of personal protective equipment (PPE), and manage delays in COVID-19 testing. In these areas, our hospitalist group has experienced a steep learning curve. In addition to the strategies outlined in the Table, we will share here our experiences and insights on managing and preparing for the COVID-19 pandemic.

Communication Is Central

During the sprint, focused, regular communication is imperative to ameliorate anxiety and fear. A study of crisis communication after 9/11 found that, for employees, good communication from leadership was one of the most valued factors.2 Communications experts also note that, in times of crisis, leaders have a special role in communication, specifically around demystifying the situation, providing hope, and maintaining transparency.3

Mental bandwidth may be limited in a stressful environment, so efforts should be taken to maximize the value of each communication. Information on hospital metrics should be provided regularly, including the number of COVID-19 cases, the status of clinical services and staffing, hospital capacity, and resource availability.4 Although the ubiquity and ease of email is convenient, recognize that providers are likely receiving email updates from multiple layers within your healthcare organization. To guard against losing important information, we use the same templated format for daily email updates with changes highlighted, which allows busy clinicians to digest pertinent information easily.5 Finally, consider having a single individual be responsible for collating COVID-19–related emails sent to your group. Although clinicians may want to share the most recent studies or their clinical experiences with a group email, instead have them send this information to a single individual who can organize these materials and share them on a regular basis.

To keep two-way communication channels open in a busy, asynchronous environment, consider having a centralized shared document in which providers can give real-time feedback to capture on-the-ground experiences or share questions they would like answered. Within our group, we found that centralizing our conversation in a shared document eliminated redundancy, focused our meetings, and kept everyone up to date. Additionally, regularly scheduled meetings may need to be adapted to a remote format (eg, Zoom, WebEx) as clinicians are asked to work from home when not on clinical service. Finally, recognize that virtual meetings require a different skill set than that required by in-person meetings, including reestablishment of social norms and technology preparation.6

 

 

Optimize Your Staffing

Hospital volumes could increase to as high as 270% of current hospital bed capacities during this pandemic.1 This surge is further complicated by the effort involved in caring for these patients, given their increased medical complexity, the use of new protocols, and the extra time needed to update staff and family. As the workload intensifies, staffing models and operations will also need to adapt.

First, optimize your inpatient resources based on the changes your hospital system is making. For instance, as elective surgeries were cancelled, we dissolved our surgical comanagement and consult services to better accommodate our hospitals’ needs. Further, consider using advanced practice providers (eg, physician assistants and nurse practitioners) released from their clinical duties to help with inpatient care in the event of a surge. If your hospital has trainees (eg, residents or fellows), consider reassigning those whose rotations have been postponed to newly created inpatient teams; trainees often have strong institutional knowledge and understanding of hospital protocols and resources.

Second, use hospitalists for their most relevant skills. Hospitalists are pluripotent clinicians who are comfortable with high-­acuity patients and can fit into a myriad of clinical positions. The initial instinct at our institution was to mobilize hospitalists across all areas of increasing needs in the hospital (eg, screening clinics,7 advice phone lines for patients, or in the Emergency Department), but we quickly recognized that the hospitalist group is a finite resource. We focused our hospitalists’ clinical work on the expanding inpatient needs and allowed other outpatient or procedure-based specialties that have less inpatient experience to fill the broader institutional gaps.

Finally, consider long-term implications of staffing decisions. Leaders are making challenging coverage decisions that can affect the morale and autonomy of staff. Does backup staffing happen on a volunteer basis? Who fills the need—those with less clinical time or those with fewer personal obligations? When a staffing model is challenged and your group is making such decisions, engaged communication again becomes paramount.

PREPARE FOR THE MARATHON

Experts believe that we are only at the beginning of this crisis, one for which we don’t know what the end looks like or when it will come. With this in mind, hospital medicine leadership must plan for the long-term implications of the lengthy race ahead. Recognizing that morale, motivation, and burnout will be issues to deal with on the horizon, a focus on sustainability and wellness will become increasingly important as the marathon continues. To date, we’ve found the following principles to be helpful.

Delegate Responsibilities

Hospitals will not be able to survive COVID-19 through the efforts of single individuals. Instead, consider creating “operational champion” roles for frontline clinicians. These individuals can lead in specific areas (eg, PPE, updates on COVID-19 testing, discharge protocols) and act as conduits for information, updates, and resources for your group. At our institution, such operational meetings and activities take hours out of each day. By creating a breadth of leadership roles, our group has spread the operational workload while still allowing clinicians to care for patients, avoid burnout, and build autonomy and opportunities for both personal and professional growth. While for most institutions, these positions are temporary and not compensated with salary or time, the contribution to the group should be recognized both now and in the future.

 

 

Focus on Wellness

Providers are battling a laundry list of both clinical and personal stressors. The Centers for Disease Control and Prevention has already recognized that stress and mental health are going to be large hurdles for both patients and providers during this crisis.8 From the beginning, hospitalist leadership should be attuned to physician wellness and be aware that burnout, mental and physical exhaustion, and the possibility of contracting COVID-19 will be issues in the coming weeks and months. Volunteerism is built into the physician’s work ethic, but we must be mindful about its cost for long-term staffing demands. In addition, scarce medical resources add an additional moral strain for clinicians as they face tough allocation decisions, as we’ve seen with our Italian colleagues.9

As regular meetings around COVID-19 have become commonplace, we’ve made sure to set aside defined time for staff to discuss and reflect on their experiences. Doing so has allowed our clinicians to feel heard and to acknowledge the difficulties they are facing in their clinical duties. Leaders should also consider frequent check-ins with individual providers. At our institution, the first positive COVID-19 patient did not radically change any protocol that was in place, but a check-in with the hospitalist on service that day proved helpful for a debrief and processing opportunity. Individual conversations can help those on the front lines feel supported and remind them they are not operating alone in an anonymous vacuum.

Continue by celebrating small victories because this marathon is not going to end with an obvious finish line or a singular moment in which everyone can rejoice. A negative test, a patient with a good outcome, and a donation of PPE are all opportunities to celebrate. It may be what keeps us going when there is no end in sight. We have relied on these celebrations and moments of levity as an integral part of our regular group meetings.

CONCLUSION

At the end of this pandemic, just as we hope that our social distancing feels like an overreaction, we similarly hope that our sprint to build capacity ends up being unnecessary as well. As we wrote this Perspectives piece, uncertainty about the extent, length, and impact of this pandemic still existed. By the time it is published it may be that the sprint is over, and the marathon is beginning. Or, if our wildest hopes come true, there will be no marathon to run at all.

The pandemic of coronavirus disease 2019 (COVID-19) is confronting the modern world like nothing else before. With over 20 million individuals expected to require hospitalization in the US, this health crisis may become a generation-defining moment for healthcare systems and the field of hospital medicine.1 The specific challenges facing hospital medicine are comparable to running a sprint and a marathon—at the same time. For the sprint underway, hospitalists must learn to respond to a rapidly changing environment in which critical decisions are made within hours and days. At the same time, hospitalists need to plan for the marathon of increased clinical needs over the coming months, the possibility of burnout, and concerns about staff well-­being. Although runners typically focus on either the sprint or the marathon, healthcare systems and hospital medicine providers will need to simultaneously prepare for both types of races.

GET READY FOR THE SPRINT

Over the past several weeks, hospital medicine leaders have been rapidly responding to an evolving crisis. Leaders and clinicians are quickly learning how to restructure clinical operations, negotiate the short supply of personal protective equipment (PPE), and manage delays in COVID-19 testing. In these areas, our hospitalist group has experienced a steep learning curve. In addition to the strategies outlined in the Table, we will share here our experiences and insights on managing and preparing for the COVID-19 pandemic.

Communication Is Central

During the sprint, focused, regular communication is imperative to ameliorate anxiety and fear. A study of crisis communication after 9/11 found that, for employees, good communication from leadership was one of the most valued factors.2 Communications experts also note that, in times of crisis, leaders have a special role in communication, specifically around demystifying the situation, providing hope, and maintaining transparency.3

Mental bandwidth may be limited in a stressful environment, so efforts should be taken to maximize the value of each communication. Information on hospital metrics should be provided regularly, including the number of COVID-19 cases, the status of clinical services and staffing, hospital capacity, and resource availability.4 Although the ubiquity and ease of email is convenient, recognize that providers are likely receiving email updates from multiple layers within your healthcare organization. To guard against losing important information, we use the same templated format for daily email updates with changes highlighted, which allows busy clinicians to digest pertinent information easily.5 Finally, consider having a single individual be responsible for collating COVID-19–related emails sent to your group. Although clinicians may want to share the most recent studies or their clinical experiences with a group email, instead have them send this information to a single individual who can organize these materials and share them on a regular basis.

To keep two-way communication channels open in a busy, asynchronous environment, consider having a centralized shared document in which providers can give real-time feedback to capture on-the-ground experiences or share questions they would like answered. Within our group, we found that centralizing our conversation in a shared document eliminated redundancy, focused our meetings, and kept everyone up to date. Additionally, regularly scheduled meetings may need to be adapted to a remote format (eg, Zoom, WebEx) as clinicians are asked to work from home when not on clinical service. Finally, recognize that virtual meetings require a different skill set than that required by in-person meetings, including reestablishment of social norms and technology preparation.6

 

 

Optimize Your Staffing

Hospital volumes could increase to as high as 270% of current hospital bed capacities during this pandemic.1 This surge is further complicated by the effort involved in caring for these patients, given their increased medical complexity, the use of new protocols, and the extra time needed to update staff and family. As the workload intensifies, staffing models and operations will also need to adapt.

First, optimize your inpatient resources based on the changes your hospital system is making. For instance, as elective surgeries were cancelled, we dissolved our surgical comanagement and consult services to better accommodate our hospitals’ needs. Further, consider using advanced practice providers (eg, physician assistants and nurse practitioners) released from their clinical duties to help with inpatient care in the event of a surge. If your hospital has trainees (eg, residents or fellows), consider reassigning those whose rotations have been postponed to newly created inpatient teams; trainees often have strong institutional knowledge and understanding of hospital protocols and resources.

Second, use hospitalists for their most relevant skills. Hospitalists are pluripotent clinicians who are comfortable with high-­acuity patients and can fit into a myriad of clinical positions. The initial instinct at our institution was to mobilize hospitalists across all areas of increasing needs in the hospital (eg, screening clinics,7 advice phone lines for patients, or in the Emergency Department), but we quickly recognized that the hospitalist group is a finite resource. We focused our hospitalists’ clinical work on the expanding inpatient needs and allowed other outpatient or procedure-based specialties that have less inpatient experience to fill the broader institutional gaps.

Finally, consider long-term implications of staffing decisions. Leaders are making challenging coverage decisions that can affect the morale and autonomy of staff. Does backup staffing happen on a volunteer basis? Who fills the need—those with less clinical time or those with fewer personal obligations? When a staffing model is challenged and your group is making such decisions, engaged communication again becomes paramount.

PREPARE FOR THE MARATHON

Experts believe that we are only at the beginning of this crisis, one for which we don’t know what the end looks like or when it will come. With this in mind, hospital medicine leadership must plan for the long-term implications of the lengthy race ahead. Recognizing that morale, motivation, and burnout will be issues to deal with on the horizon, a focus on sustainability and wellness will become increasingly important as the marathon continues. To date, we’ve found the following principles to be helpful.

Delegate Responsibilities

Hospitals will not be able to survive COVID-19 through the efforts of single individuals. Instead, consider creating “operational champion” roles for frontline clinicians. These individuals can lead in specific areas (eg, PPE, updates on COVID-19 testing, discharge protocols) and act as conduits for information, updates, and resources for your group. At our institution, such operational meetings and activities take hours out of each day. By creating a breadth of leadership roles, our group has spread the operational workload while still allowing clinicians to care for patients, avoid burnout, and build autonomy and opportunities for both personal and professional growth. While for most institutions, these positions are temporary and not compensated with salary or time, the contribution to the group should be recognized both now and in the future.


Focus on Wellness

Providers are battling a long list of both clinical and personal stressors. The Centers for Disease Control and Prevention has already recognized that stress and mental health will be major hurdles for both patients and providers during this crisis.8 From the beginning, hospitalist leadership should be attuned to physician wellness and aware that burnout, mental and physical exhaustion, and the possibility of contracting COVID-19 will be issues in the coming weeks and months. Volunteerism is built into the physician's work ethic, but we must be mindful of its cost when planning for long-term staffing demands. In addition, scarce medical resources place further moral strain on clinicians facing difficult allocation decisions, as our Italian colleagues have described.9

As regular meetings around COVID-19 have become commonplace, we’ve made sure to set aside defined time for staff to discuss and reflect on their experiences. Doing so has allowed our clinicians to feel heard and to acknowledge the difficulties they are facing in their clinical duties. Leaders should also consider frequent check-ins with individual providers. At our institution, the first positive COVID-19 patient did not radically change any protocol that was in place, but a check-in with the hospitalist on service that day proved helpful for a debrief and processing opportunity. Individual conversations can help those on the front lines feel supported and remind them they are not operating alone in an anonymous vacuum.

Continue by celebrating small victories, because this marathon will not end with an obvious finish line or a singular moment in which everyone can rejoice. A negative test, a patient with a good outcome, and a donation of PPE are all opportunities to celebrate, and they may be what keep us going when there is no end in sight. We have relied on these celebrations and moments of levity as an integral part of our regular group meetings.

CONCLUSION

At the end of this pandemic, just as we hope that our social distancing feels like an overreaction, we similarly hope that our sprint to build capacity ends up being unnecessary as well. As we wrote this Perspectives piece, uncertainty about the extent, length, and impact of this pandemic still existed. By the time it is published it may be that the sprint is over, and the marathon is beginning. Or, if our wildest hopes come true, there will be no marathon to run at all.

References

1. Tsai TC, Jacobson BH, Jha AK. American Hospital Capacity and Projected Need for COVID-19. Health Affairs. March 17, 2020. https://www.healthaffairs.org/do/10.1377/hblog20200317.457910/full/. Accessed April 1, 2020.
2. Argenti PA. Crisis communication: lessons from 9/11. Harvard Business Review. December 2002. https://hbr.org/2002/12/crisis-communication-lessons-from-911. Accessed April 2, 2020.
3. Argenti PA. Communicating through the coronavirus crisis. Harvard Business Review. March 2020. https://hbr.org/2020/03/communicating-through-the-coronavirus-crisis. Accessed April 2, 2020.
4. Chopra V, Toner E, Waldhorn R, Washer L. How should US hospitals prepare for COVID-19? Ann Intern Med. 2020. https://doi.org/10.7326/M20-0907.
5. National Institutes of Health. Formatting and Visual Clarity. Published July 1, 2015. Updated March 27, 2017. https://www.nih.gov/institutes-nih/nih-office-director/office-communications-public-liaison/clear-communication/plain-language/formatting-visual-clarity. Accessed April 2, 2020.
6. Frisch B, Greene C. What it takes to run a great virtual meeting. Harvard Business Review. March 2020. https://hbr.org/2020/03/what-it-takes-to-run-a-great-virtual-meeting. Accessed April 2, 2020.
7. Yan W. Coronavirus testing goes mobile in Seattle. New York Times. March 13, 2020. https://www.nytimes.com/2020/03/13/us/coronavirus-testing-drive-through-seattle.html. Accessed April 2, 2020.
8. Centers for Disease Control and Prevention. Coronavirus Disease 2019 (COVID-19). Stress and Coping. February 11, 2020. https://www.cdc.gov/coronavirus/2019-ncov/prepare/managing-stress-anxiety.html. Accessed April 2, 2020.
9. Rosenbaum L. Facing Covid-19 in Italy—ethics, logistics, and therapeutics on the epidemic’s front line. N Engl J Med. 2020. https://doi.org/10.1056/NEJMp2005492.


Issue
Journal of Hospital Medicine 15(5)
Page Number
305-307. Published online first April 8, 2020

© 2020 Society of Hospital Medicine

Correspondence Location
Megha Garg, MD, MPH; Email: [email protected]; Twitter: @MeghaGargMD

Methodological Progress Note: Classification and Regression Tree Analysis


Machine learning is a type of artificial intelligence in which systems automatically learn and improve from experience without being explicitly programmed. Classification and Regression Tree (CART) analysis is a machine-learning algorithm developed to visually classify or segment populations into subgroups with similar characteristics and outcomes. CART analysis is a decision tree methodology initially developed in the 1960s for use in product marketing.1 Since then, a number of health disciplines have used it to isolate patient subgroups from larger populations and guide clinical decision-making by better identifying those most likely to benefit.2 The clinical utility of CART mirrors how most clinicians think: not in terms of coefficients (ie, regression output) but rather in terms of categories or classifications (eg, low vs high risk).

In this issue of the Journal of Hospital Medicine, Young and colleagues use classification trees to predict discharge placement (postacute care facility vs home) based on a patient’s hospital admission characteristics and mobility score. The resulting decision tree indicates that patients with the lowest mobility scores, as well as those 65 years and older, were most likely to be discharged to postacute care facilities.3 In this review, we orient the reader to the basics of CART analysis, discuss important intricacies, and weigh its pros, cons, and application as a statistical tool.

WHAT IS CART ANALYSIS?

CART is a nonparametric (ie, makes no assumptions about data distribution) statistical tool that identifies subgroups within a population whose members share common characteristics as defined by the independent variables included in the model. CART analysis is unique in that it yields a visual output of the data in the form of a multisegmented structure that resembles the branches of a tree (Figure). CART analysis consists of four basic steps: (1) tree-building (including splitting criteria and estimation of classification error), (2) stopping the tree-building process, (3) tree “pruning,” and (4) tree selection.

In general, CART analysis begins with a single "node" or group, which contains the entire sample population. This is referred to as the "parent node." The CART procedure simultaneously examines all available independent variables and selects one that results in two groups that are the most distinct with respect to the outcome variable of interest. In Young et al's example, posthospital discharge placement is the outcome.3 This parent node then branches into two "child nodes" according to the independent variable that was selected. Within each of these "child nodes," the tree-growing methodology recursively assesses each of the remaining independent variables to determine which will result in the best split according to the chosen splitting criterion.2 Each subsequent "child node" then becomes a "parent node" to the two groups into which it splits. This process is repeated on the data in each subsequent "child node" and is stopped once a predefined stopping point is reached. Notably, while division into two groups is the most common application of CART modeling, there are models that can split data into more than two child nodes.
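As a rough sketch of one parent-to-child split, the procedure exhaustively tests every (variable, threshold) pair and keeps the split whose child nodes are purest. The data, variable names, and use of the "minimum error" criterion (one of the splitting criteria named under Splitting Criteria below) are illustrative assumptions, not the authors' actual model:

```python
from collections import Counter

def node_error(labels):
    """Misclassification ("minimum error") impurity: the fraction of a
    node's members that are not in its majority outcome class."""
    return 1.0 - max(Counter(labels).values()) / len(labels)

def best_split(rows, labels):
    """Test every (variable, threshold) pair; return the split whose two
    child nodes have the lowest weighted impurity."""
    n = len(rows)
    best = None  # (weighted_impurity, variable, threshold)
    for var in rows[0]:
        for threshold in sorted({r[var] for r in rows}):
            left = [lab for r, lab in zip(rows, labels) if r[var] <= threshold]
            right = [lab for r, lab in zip(rows, labels) if r[var] > threshold]
            if not left or not right:  # a split must yield two nonempty children
                continue
            score = (len(left) * node_error(left) + len(right) * node_error(right)) / n
            if best is None or score < best[0]:
                best = (score, var, threshold)
    return best

# Hypothetical parent node: four patients, two candidate predictors
rows = [{"age": 50, "mobility": 8}, {"age": 60, "mobility": 6},
        {"age": 70, "mobility": 5}, {"age": 80, "mobility": 3}]
labels = ["home", "home", "facility", "facility"]
score, var, threshold = best_split(rows, labels)  # age <= 60 separates the outcomes perfectly
```

In a full CART run, this search is simply applied again within each resulting child node until a stopping rule fires.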

Since CART outcomes can be heavily dependent on the data being used (eg, electronic health records or administrative data), it is important to attempt to confirm results in a similar, but different, study cohort. Because obtaining separate data sources with similar cohorts can be difficult, many investigators using CART will utilize a "split sample approach" in which study data are split into separate training and validation sets.4 In the training set, which frequently comprises two-thirds of the available data, the algorithm is tested in exploratory analysis. Once the algorithm is defined and agreed upon, it is retested within a validation set constructed from the remaining one-third of data. This approach, which Young et al utilize,3 improves confidence in the findings, reduces the risk of bias, and provides some degree of external validation. Further, the split sample approach supports more reliable measures of predictive accuracy: in Young et al's case, the proportion of correctly classified patients discharged to a postacute care facility (sensitivity: 58%; 95% CI, 49%-68%) and the proportion of correctly classified patients discharged home (specificity: 84%; 95% CI, 78%-90%). Despite these advantages, the split sample approach is not universally used.
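The mechanics of the split and the accuracy measures can be sketched as follows. The toy data are invented for illustration; the 58%/84% figures above come from Young et al's actual cohort, not from this example:

```python
import random

def split_sample(rows, train_frac=2 / 3, seed=0):
    """Randomly partition rows into a training set (~two-thirds) and a
    validation set (the remaining third)."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def sensitivity_specificity(actual, predicted, positive):
    """Sensitivity: proportion of true positives correctly classified.
    Specificity: proportion of true negatives correctly classified."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    pos = sum(1 for a in actual if a == positive)
    neg = len(actual) - pos
    return tp / pos, tn / neg

train, validation = split_sample(list(range(9)))

# Toy validation-set results, with "facility" as the positive class
actual    = ["facility", "facility", "home", "home", "home"]
predicted = ["facility", "home",     "home", "home", "facility"]
sens, spec = sensitivity_specificity(actual, predicted, "facility")
```

The key property is that the validation rows never inform tree construction, so the sensitivity and specificity reported on them are less optimistically biased than training-set figures.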


Classification Versus Regression Trees

While commonly grouped together, CARTs can be distinguished from one another based on the dependent, or outcome, variable. Categorical outcome variables require the use of a classification tree, while continuous outcomes utilize regression trees. Of note, the independent, or predictor, variables can be any combination of categorical or continuous variables. However, splitting at each node creates categorical output when using CART algorithms.

Splitting Criteria

The splitting of each node is based on reducing the degree of "impurity" (heterogeneity with respect to the outcome variable) within each node. For example, a node with no impurity contains only one outcome class and therefore has a zero error rate when labeling its members. While CART works well with categorical variables, continuous variables (eg, age) can also be assessed, though only with certain algorithms. Several different splitting criteria exist, each of which attempts to maximize the homogeneity within, and thus the difference between, the resulting child nodes. While beyond the scope of this review, examples of popular splitting criteria are Gini, entropy, and minimum error.5
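As a brief illustration of the three criteria named above, each scores a pure node as zero and a maximally mixed binary node at its highest value (the labels are invented examples):

```python
import math
from collections import Counter

def proportions(labels):
    """Class proportions within a node."""
    n = len(labels)
    return [c / n for c in Counter(labels).values()]

def gini(labels):
    """Gini impurity: 1 - sum(p^2)."""
    return 1.0 - sum(p * p for p in proportions(labels))

def entropy(labels):
    """Entropy: -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in proportions(labels))

def min_error(labels):
    """Minimum error: fraction of the node outside the majority class."""
    return 1.0 - max(proportions(labels))

pure  = ["facility"] * 10                      # no impurity: all criteria return 0
mixed = ["facility"] * 5 + ["home"] * 5        # 50/50 node: each criterion is maximal
```

Which criterion is chosen rarely changes the first few splits dramatically, but it can alter deeper branches of the tree.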

Stopping Rules

To manage the size of a tree, CART analysis allows for predefined stopping rules to minimize the extent of growth while also establishing a minimal degree of statistical difference between nodes that is considered meaningful. To accomplish this task, two stopping rules are often used. The first defines the minimum number of observations in child, or “terminal,” nodes. The second defines the maximum number of levels a tree may grow, thus allowing the investigator to decide the total number of predictor variables that can define a terminal node. While several other stopping rules exist, these are the most commonly utilized.
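The two rules described above might gate further splitting as in this sketch; the threshold values are arbitrary illustrations chosen by us, not recommendations:

```python
def should_stop(node_labels, depth, min_node_size=20, max_depth=4):
    """Return True if either common stopping rule fires: the node is too
    small to yield two children of at least min_node_size observations,
    or the tree has already grown to its maximum number of levels."""
    if len(node_labels) < 2 * min_node_size:
        return True
    if depth >= max_depth:
        return True
    return False
```

The tree-growing loop would call a check like this before attempting each candidate split, so that no terminal node falls below the minimum size and no branch exceeds the allowed depth.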

Pruning

To avoid missing important associations due to premature stoppage, investigators may use another mechanism to limit tree growth called “pruning.” For pruning, the first step is to grow a considerably large tree that includes many levels or nodes, possibly to the point where there are just a few observations per terminal node. Then, similar to the residual sum of squares in a regression, the investigator can calculate a misclassification cost (ie, goodness of fit) and select the tree with the smallest cost.2 Of note, stopping rules and pruning can be used simultaneously.

Classification Error

Similar to other forms of statistical inference, it remains important to understand the uncertainty within the inference. In regression modeling, for example, classification errors can be calculated using standard errors of the parameter estimates. In CART analysis, because random samples from a population may produce different trees, measures of variability can be more complicated. One strategy is to generate a tree from a test sample and then use the remaining data to calculate a misclassification cost (a measure of how much additional accuracy a split must add to the entire tree to warrant the additional complexity). Alternatively, a "k-fold cross-validation" can be performed, in which the data are broken into k subsets and a tree is created using all data except one of the subsets. The computed tree is then applied to the remaining subset to determine a misclassification cost. These classification costs are important, as they also inform the stopping and pruning processes. Ultimately, a final tree that best limits classification errors is selected.
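A minimal sketch of the k-fold procedure follows; `build_tree` and `misclassification_cost` are hypothetical callables standing in for the tree-growing and scoring steps described above:

```python
def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k roughly equal, disjoint folds."""
    return [list(range(i, n, k)) for i in range(k)]

def cross_validated_cost(build_tree, misclassification_cost, data, k=10):
    """Grow a tree on all data except one held-out fold, score it on that
    fold, and average the misclassification costs over the k folds."""
    costs = []
    for held_out in k_fold_indices(len(data), k):
        held = set(held_out)
        train = [row for i, row in enumerate(data) if i not in held]
        test = [data[i] for i in held_out]
        tree = build_tree(train)
        costs.append(misclassification_cost(tree, test))
    return sum(costs) / k
```

Because every observation is held out exactly once, the averaged cost uses all of the data while still scoring each tree on rows it never saw, which is why the result can guide stopping, pruning, and final tree selection.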


WHEN WOULD YOU USE CART ANALYSIS?

This method can be useful in multiple settings in which an investigator wants to characterize a subpopulation from a larger cohort. Applications include, but are not limited to, risk stratification,6 diagnostics,7 and identification of patients for medical interventions.8 Moreover, CART analysis has the added benefit of creating visually interpretable predictive models that can be utilized for front-line clinical decision making.9,10

STRENGTHS OF CART ANALYSIS

CART analysis has been shown to have several advantages over other commonly used modeling methods. First, it is a nonparametric model that can handle highly skewed data and does not require that the predictor, or predictors, take on a predetermined form (allowing the model to be constructed from the data). This is helpful because many clinical variables can have wide degrees of variance.

Unlike other modeling techniques, CART can identify higher-order interactions among multiple variables, meaning it can capture situations in which one variable affects the nature of the interaction between two other variables. Further, CART can handle multiple correlated independent variables, something logistic regression models classically cannot do.

From a clinical standpoint, the “logic” of the visual-based CART output can be easier to interpret than the probabilistic output (eg, odds ratio) associated with logistic regression modeling, making it more practical, applicable, and easier for clinicians to adopt.10,12 Finally, CART software is easy to use for those who do not have strong statistical backgrounds, and it is less resource intensive than other statistical methods.2

LIMITATIONS OF CART ANALYSIS

Despite these features, CART does have several disadvantages. First, due to the ease with which CART analysis can be performed, "data dredging" can be a significant concern; its ideal use involves a priori consideration of the independent variables.2 Second, while CART is most beneficial for describing links and cutoffs between variables, it may not be useful for hypothesis testing.2 Third, large data sets are needed to perform CART, especially if the investigator is using the split sample approach mentioned above.11 Finally, while CART is the most utilized decision tree methodology, several other decision tree methods exist, including C4.5; CRUISE; Quick, Unbiased, Efficient Statistical Trees (QUEST); Chi-squared Automatic Interaction Detection (CHAID); and others. Many of these allow for splitting into more than two groups and have other features that may be more advantageous to one's analysis.13

WHY DID THE AUTHORS USE CART?

Decision trees offer simple, interpretable results of multiple factors that can be easily applied to clinical scenarios. In this case, the authors specifically used classification tree analysis to take advantage of CART’s machine-learning ability to consider higher-order interactions to build their model—as they lacked a priori evidence to help guide them in traditional (ie, logistic regression) model construction. Furthermore, CART analysis created an output that logically and visually illustrates which combination of characteristics is most associated with discharge placement and can potentially be utilized to help facilitate discharge planning in future hospitalized patients. To sum up, this machine-learning methodology allowed the investigators to determine which variables taken together were the most suitable in predicting their outcome of interest and present these findings in a manner that busy clinicians can interpret and apply.

References

1. Magee JF. Decision Trees for Decision Making. Harvard Business Review. 1964. https://hbr.org/1964/07/decision-trees-for-decision-making. Accessed August 26, 2019.
2. Lemon SC, Roy J, Clark MA, Friedmann PD, Rakowski W. Classification and regression tree analysis in public health: methodological review and comparison with logistic regression. Ann Behav Med. 2003;26(3):172-181. https://doi.org/10.1207/S15324796ABM2603_02
3. Young D, Colantuoni E, Seltzer D, et al. Prediction of disposition within 48-hours of hospital admission using patient mobility scores. J Hosp Med. 2020;15(9):540-543. https://doi.org/10.12788/jhm.3332
4. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380(14):1347-1358. https://doi.org/10.1056/NEJMra1814259
5. Zhang H, Singer B. Recursive Partitioning in the Health Sciences. New York: Springer-Verlag; 1999. https://www.springer.com/gp/book/9781475730272. Accessed August 24, 2019.
6. Fonarow GC, Adams KF, Abraham WT, Yancy CW, Boscardin WJ, for the ADHERE Scientific Advisory Committee SG. Risk stratification for in-hospital mortality in acutely decompensated heart failure: classification and regression tree analysis. JAMA. 2005;293(5):572-580. https://doi.org/10.1001/jama.293.5.572
7. Hess KR, Abbruzzese MC, Lenzi R, Raber MN, Abbruzzese JL. Classification and regression tree analysis of 1000 consecutive patients with unknown primary carcinoma. Clin Cancer Res. 1999;5(11):3403-3410.
8. Garzotto M, Beer TM, Hudson RG, et al. Improved detection of prostate cancer using classification and regression tree analysis. J Clin Oncol. 2005;23(19):4322-4329. https://doi.org/10.1200/JCO.2005.11.136
9. Hong W, Dong L, Huang Q, Wu W, Wu J, Wang Y. Prediction of severe acute pancreatitis using classification and regression tree analysis. Dig Dis Sci. 2011;56(12):3664-3671. https://doi.org/10.1007/s10620-011-1849-x
10. Lewis RJ. An Introduction to Classification and Regression Tree (CART) Analysis. Proceedings of Annual Meeting of the Society for Academic Emergency Medicine, San Francisco, CA, USA, May 22-25, 2000; pp. 1–14.
11. Perlich C, Provost F, Simonoff JS. Tree induction vs logistic regression: a learning-curve analysis. J Mach Learn Res. 2003;4(Jun):211-255. https://doi.org/10.1162/153244304322972694
12. Woolever D. The art and science of clinical decision making. Fam Pract Manag. 2008;15(5):31-36.
13. Loh WY. Classification and regression trees. Wires Data Min Know Disc. 2011;1(1):14-23. https://doi.org/10.1002/widm.8

Author and Disclosure Information

1Department of Medicine, University of California, San Francisco, California; 2Division of Hospital Medicine, San Francisco Veterans Affairs Medical Center, San Francisco, California; 3Division of Mental Health Services, San Francisco Veterans Affairs Medical Center, San Francisco, California; 4Department of Psychiatry, University of California, San Francisco, California.

Disclosures

The authors report no conflicts of interest related to the submission of this manuscript.

Issue
Journal of Hospital Medicine 15(9)
Page Number
549-551. Published Online First March 18, 2020


Machine learning is a type of artificial intelligence in which systems automatically learn and improve from experience without being explicitly programmed. Classification and Regression Tree (CART) analysis is a machine-learning algorithm that was developed to visually classify or segment populations into subgroups with similar characteristics and outcomes. CART analysis is a decision tree methodology that was initially developed in the 1960s for use in product marketing.1 Since then, a number of health disciplines have used it to isolate patient subgroups from larger populations to guide clinical decision-making by better identifying those most likely to benefit.2 The clinical utility of CART mirrors how most clinicians think: not in terms of coefficients (ie, regression output) but in terms of categories or classifications (eg, low vs high risk).

In this issue of the Journal of Hospital Medicine, Young and colleagues use classification trees to predict discharge placement (postacute care facility vs home) based on a patient’s hospital admission characteristics and mobility score. The resulting decision tree indicates that patients with the lowest mobility scores, as well as those 65 years and older, were most likely to be discharged to postacute care facilities.3 In this review, we orient the reader to the basics of CART analysis, discuss important intricacies, and weigh its pros, cons, and application as a statistical tool.

WHAT IS CART ANALYSIS?

CART is a nonparametric (ie, makes no assumptions about data distribution) statistical tool that identifies subgroups within a population whose members share common characteristics as defined by the independent variables included in the model. CART analysis is unique in that it yields a visual output of the data in the form of a multisegmented structure that resembles the branches of a tree (Figure). CART analysis consists of four basic steps: (1) tree-building (including splitting criteria and estimation of classification error), (2) stopping the tree-building process, (3) tree “pruning,” and (4) tree selection.

In general, CART analysis begins with a single “node” or group, which contains the entire sample population. This is referred to as the “parent node.” The CART procedure simultaneously examines all available independent variables and selects one that results in two groups that are the most distinct with respect to the outcome variable of interest. In Young et al’s example, posthospital discharge placement is the outcome.3 This parent node then branches into two “child nodes” according to the independent variable that was selected. Within each of these “child nodes,” the tree-growing methodology recursively assesses each of the remaining independent variables to determine which will result in the best split according to the chosen splitting criterion.2 Each subsequent “child node” will become a “parent node” to the two groups in which it splits. This process is repeated on the data in each subsequent “child node” and is stopped once a predefined stopping point is reached. Notably, while division into two groups is the most common application of CART modeling, there are models that can split data into more than two child nodes.
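The single splitting step described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation, and the mobility-score data below are hypothetical, not Young et al's:

```python
# One splitting step of a classification tree: search every candidate
# threshold of a single continuous predictor and keep the split that
# most reduces Gini impurity relative to the parent node.

def gini(labels):
    """Gini impurity of a node: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Return (threshold, impurity_reduction) for the best binary split."""
    parent = gini(labels)
    n = len(labels)
    best = (None, 0.0)
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        if not left or not right:
            continue  # skip degenerate splits with an empty child node
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / n
        if parent - weighted > best[1]:
            best = (t, parent - weighted)
    return best

# Hypothetical data: mobility score vs discharge to postacute care (1) or home (0)
scores = [2, 3, 4, 6, 7, 8]
placed = [1, 1, 1, 0, 0, 0]
print(best_split(scores, placed))  # → (4, 0.5): a perfect split at score <= 4
```

In a full CART procedure, this search would run over every available independent variable at every node, recursively, until a stopping rule is reached.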

Since CART outcomes can be heavily dependent on the data being used (eg, electronic health records or administrative data), it is important to attempt to confirm results in a similar, but different, study cohort. Because obtaining separate data sources with similar cohorts can be difficult, many investigators using CART will utilize a “split sample approach” in which study data are split into separate training and validation sets.4 In the training set, which frequently comprises two-thirds of the available data, the algorithm is tested in exploratory analysis. Once the algorithm is defined and agreed upon, it is retested within a validation set, constructed from the remaining one-third of data. This approach, which Young et al utilize,3 allows for improved confidence and reduced risk of bias in the findings and allows for some degree of external validation. Further, the split sample approach supports more reliable measures of predictive accuracy: in Young et al’s case, the proportion of correctly classified patients discharged to a postacute care facility (sensitivity: 58%, 95% CI 49-68%) and the proportion of correctly classified patients discharged home (specificity: 84%, 95% CI 78-90%). Despite these advantages, the split sample approach is not universally used.
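A minimal sketch of the split-sample approach and of the accuracy measures quoted above. All names and numbers here are illustrative, not the study's data:

```python
# Split-sample approach: shuffle the study data, train on ~2/3, validate
# on the remaining ~1/3, then score the validation set with sensitivity
# (correctly classified facility discharges) and specificity (correctly
# classified home discharges).
import random

def split_sample(records, seed=0):
    """Shuffle and split records into a ~2/3 training set and ~1/3 validation set."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = len(shuffled) * 2 // 3
    return shuffled[:cut], shuffled[cut:]

def sensitivity_specificity(truth, predicted):
    """Compute (sensitivity, specificity) for binary outcomes (1 = facility)."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

train, valid = split_sample(list(range(9)))
print(len(train), len(valid))  # 6 3

truth     = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # hypothetical validation labels
predicted = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]  # hypothetical tree predictions
sens, spec = sensitivity_specificity(truth, predicted)
print(round(sens, 2), round(spec, 2))  # 0.75 0.83
```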

Classification Versus Regression Trees

While commonly grouped together, CARTs can be distinguished from one another based on the dependent, or outcome, variable. Categorical outcome variables require the use of a classification tree, while continuous outcomes utilize regression trees. Of note, the independent, or predictor, variables can be any combination of categorical or continuous variables. However, splitting at each node creates categorical output when using CART algorithms.

Splitting Criteria

The splitting of each node is based on reducing the degree of “impurity” (heterogeneity with respect to the outcome variable) within each node. For example, a node with no impurity contains only one outcome class and therefore has a zero misclassification rate. While CART works well with categorical variables, continuous variables (eg, age) can also be assessed, though only with certain algorithms. Several different splitting criteria exist, each of which attempts to maximize the distinction between the resulting child nodes. While beyond the scope of this review, examples of popular splitting criteria are Gini, entropy, and minimum error.5
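The three criteria named above can each be computed directly for any single node. The sketch below, with hypothetical binary labels, shows all three side by side:

```python
# Three common node impurity measures: Gini, entropy, and minimum error
# (the misclassification rate of a simple majority vote).
import math

def impurities(labels):
    """Return (gini, entropy, min_error) for the labels in one node."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    gini = 1.0 - sum(p * p for p in probs)
    entropy = sum(-p * math.log2(p) for p in probs if p > 0)
    min_error = 1.0 - max(probs)  # error rate when predicting the majority class
    return gini, entropy, min_error

print(impurities([1, 1, 0, 0]))  # maximally impure node: (0.5, 1.0, 0.5)
print(impurities([1, 1, 1, 1]))  # pure node: (0.0, 0.0, 0.0)
```

All three measures reach zero for a pure node and their maximum for an evenly mixed one; they differ in how they weight intermediate mixtures.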

Stopping Rules

To manage the size of a tree, CART analysis allows for predefined stopping rules to minimize the extent of growth while also establishing a minimal degree of statistical difference between nodes that is considered meaningful. To accomplish this task, two stopping rules are often used. The first defines the minimum number of observations in child, or “terminal,” nodes. The second defines the maximum number of levels a tree may grow, thus allowing the investigator to decide the total number of predictor variables that can define a terminal node. While several other stopping rules exist, these are the most commonly utilized.
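The two stopping rules might be expressed as a simple predicate checked before each split. The threshold values below are purely illustrative:

```python
# Stopping rules as a predicate evaluated before each candidate split:
# rule 1 sets a minimum number of observations per node; rule 2 caps
# the number of levels (and hence predictors defining a terminal node).

def should_stop(node_size, depth, min_node_size=25, max_depth=4):
    """Return True if splitting this node should stop."""
    return node_size < min_node_size or depth >= max_depth

print(should_stop(node_size=12, depth=2))   # True: too few observations
print(should_stop(node_size=400, depth=2))  # False: keep splitting
print(should_stop(node_size=400, depth=4))  # True: depth limit reached
```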

Pruning

To avoid missing important associations due to premature stoppage, investigators may use another mechanism to limit tree growth called “pruning.” For pruning, the first step is to grow a considerably large tree that includes many levels or nodes, possibly to the point where there are just a few observations per terminal node. Then, similar to the residual sum of squares in a regression, the investigator can calculate a misclassification cost (ie, goodness of fit) and select the tree with the smallest cost.2 Of note, stopping rules and pruning can be used simultaneously.
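Cost-complexity selection can be illustrated with hypothetical candidate subtrees of a fully grown tree, each summarized by its misclassification count and its number of terminal nodes:

```python
# Pruning by misclassification cost: among candidate subtrees, pick the
# one minimizing cost = errors + alpha * number_of_terminal_nodes, where
# alpha penalizes complexity. The (errors, leaves) pairs are hypothetical.

def total_cost(errors, n_leaves, alpha):
    """Penalized misclassification cost of one candidate subtree."""
    return errors + alpha * n_leaves

candidates = [(40, 2), (25, 5), (22, 12), (21, 30)]  # (misclassified, leaves)
alpha = 1.0
best = min(candidates, key=lambda c: total_cost(*c, alpha))
print(best)  # (25, 5): beyond 5 leaves, extra splits no longer pay for themselves
```

Larger values of alpha favor smaller trees; alpha = 0 simply returns the largest, most overfit tree.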

Classification Error

As with other forms of statistical inference, it is important to understand the uncertainty of the resulting estimates. In regression modeling, for example, classification errors can be calculated using standard errors of the parameter estimates. In CART analysis, because random samples from a population may produce different trees, measures of variability are more complicated. One strategy is to generate a tree from a test sample and then use the remaining data to calculate a measure of the misclassification cost (a measure of how much additional accuracy a split must add to the entire tree to warrant the additional complexity). Alternatively, a “k-fold cross-validation” can be performed, in which the data are divided into k subsets and a tree is created using all data except one of the subsets. The computed tree is then applied to the held-out subset to determine a misclassification cost. These misclassification costs are important as they also inform the stopping and pruning processes. Ultimately, the final tree that best limits classification errors is selected.
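The k-fold procedure can be sketched as follows; here a trivial majority-class “tree” stands in for a full CART fit on each training fold, and the labels are hypothetical:

```python
# k-fold cross-validation of a misclassification cost: for each fold,
# fit on the other k-1 folds and count errors on the held-out fold.

def k_folds(n, k):
    """Yield (train_indices, test_indices) for k contiguous folds of n items."""
    size = n // k
    for i in range(k):
        test = list(range(i * size, (i + 1) * size if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def majority(labels):
    """Stand-in 'model': always predict the most common training label."""
    return max(set(labels), key=labels.count)

labels = [1, 1, 0, 1, 0, 1, 1, 1, 1]  # hypothetical binary outcomes
errors = []
for train, test in k_folds(len(labels), 3):
    fit = majority([labels[j] for j in train])           # fit on k-1 folds
    errors.append(sum(labels[j] != fit for j in test))   # cost on held-out fold
print(sum(errors) / len(labels))  # overall cross-validated error rate (2/9 here)
```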

WHEN WOULD YOU USE CART ANALYSIS?

This method can be useful in multiple settings in which an investigator wants to characterize a subpopulation from a larger cohort. Adaptation of this could include, but is not limited to, risk stratification,6 diagnostics,7 and patient identification for medical interventions.8 Moreover, CART analysis has the added benefit of creating visually interpretable predictive models that can be utilized for front-line clinical decision making.9,10

STRENGTHS OF CART ANALYSIS

CART analysis has been shown to have several advantages over other commonly used modeling methods. First, it is a nonparametric model that can handle highly skewed data and does not require that the predictor, or predictors, takes on a predetermined form (allowing them to be constructed from the data). This is helpful as many clinical variables can have wide degrees of variance.

Unlike other modeling techniques, CART can identify higher-order interactions between multiple variables, meaning it can handle interactions that occur whenever one variable affects the nature of an interaction between two other variables. Further, CART can handle multiple correlated independent variables, something logistic regression models classically cannot do.

From a clinical standpoint, the “logic” of the visual-based CART output can be easier to interpret than the probabilistic output (eg, odds ratio) associated with logistic regression modeling, making it more practical, applicable, and easier for clinicians to adopt.10,12 Finally, CART software is easy to use for those who do not have strong statistical backgrounds, and it is less resource intensive than other statistical methods.2

LIMITATIONS OF CART ANALYSIS

Despite these features, CART does have several disadvantages. First, because of the ease with which CART analysis can be performed, “data dredging” can be a significant concern; its ideal use involves a priori consideration of independent variables.2 Second, while CART is most beneficial in describing links and cutoffs between variables, it may not be useful for hypothesis testing.2 Third, large data sets are needed to perform CART, especially if the investigator is using the split sample approach mentioned above.11 Finally, while CART is the most utilized decision tree methodology, several other decision tree methods exist, including C4.5; CRUISE; QUEST (Quick, Unbiased, Efficient Statistical Tree); and CHAID (Chi-square Automatic Interaction Detection). Many of these allow splitting into more than two groups and have other features that may be more advantageous to one’s analysis.13

WHY DID THE AUTHORS USE CART?

Decision trees offer simple, interpretable results of multiple factors that can be easily applied to clinical scenarios. In this case, the authors specifically used classification tree analysis to take advantage of CART’s machine-learning ability to consider higher-order interactions to build their model—as they lacked a priori evidence to help guide them in traditional (ie, logistic regression) model construction. Furthermore, CART analysis created an output that logically and visually illustrates which combination of characteristics is most associated with discharge placement and can potentially be utilized to help facilitate discharge planning in future hospitalized patients. To sum up, this machine-learning methodology allowed the investigators to determine which variables taken together were the most suitable in predicting their outcome of interest and present these findings in a manner that busy clinicians can interpret and apply.

References

1. Magee JF. Decision Trees for Decision Making. Harvard Business Review. 1964. https://hbr.org/1964/07/decision-trees-for-decision-making. Accessed August 26, 2019.
2. Lemon SC, Roy J, Clark MA, Friedmann PD, Rakowski W. Classification and regression tree analysis in public health: methodological review and comparison with logistic regression. Ann Behav Med. 2003;26(3):172-181. https://doi.org/10.1207/S15324796ABM2603_02
3. Young D, Colantuoni E, Seltzer D, et al. Prediction of disposition within 48-hours of hospital admission using patient mobility scores. J Hosp Med. 2020;15(9):540-543. https://doi.org/10.12788/jhm.3332
4. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380(14):1347-1358. https://doi.org/10.1056/NEJMra1814259
5. Zhang H, Singer B. Recursive Partitioning in the Health Sciences. New York: Springer-Verlag; 1999. https://www.springer.com/gp/book/9781475730272. Accessed August 24, 2019.
6. Fonarow GC, Adams KF, Abraham WT, Yancy CW, Boscardin WJ, for the ADHERE Scientific Advisory Committee SG. Risk stratification for in-hospital mortality in acutely decompensated heart failure: classification and regression tree analysis. JAMA. 2005;293(5):572-580. https://doi.org/10.1001/jama.293.5.572
7. Hess KR, Abbruzzese MC, Lenzi R, Raber MN, Abbruzzese JL. Classification and regression tree analysis of 1000 consecutive patients with unknown primary carcinoma. Clin Cancer Res. 1999;5(11):3403-3410.
8. Garzotto M, Beer TM, Hudson RG, et al. Improved detection of prostate cancer using classification and regression tree analysis. J Clin Oncol. 2005;23(19):4322-4329. https://doi.org/10.1200/JCO.2005.11.136
9. Hong W, Dong L, Huang Q, Wu W, Wu J, Wang Y. Prediction of severe acute pancreatitis using classification and regression tree analysis. Dig Dis Sci. 2011;56(12):3664-3671. https://doi.org/10.1007/s10620-011-1849-x
10. Lewis RJ. An Introduction to Classification and Regression Tree (CART) Analysis. Proceedings of Annual Meeting of the Society for Academic Emergency Medicine, San Francisco, CA, USA, May 22-25, 2000; pp. 1–14.
11. Perlich C, Provost F, Simonoff JS. Tree induction vs logistic regression: a learning-curve analysis. J Mach Learn Res. 2003;4(Jun):211-255. https://doi.org/10.1162/153244304322972694
12. Woolever D. The art and science of clinical decision making. Fam Pract Manag. 2008;15(5):31-36.
13. Loh WY. Classification and regression trees. Wires Data Min Know Disc. 2011;1(1):14-23. https://doi.org/10.1002/widm.8


Journal of Hospital Medicine 15(9):549-551. Published Online First March 18, 2020. © 2020 Society of Hospital Medicine

Correspondence: Charlie M. Wray, DO, MS; E-mail: [email protected]; Telephone: 415-595-9662

Examining the Utility of 30-day Readmission Rates and Hospital Profiling in the Veterans Health Administration


Using methodology created by the Centers for Medicare & Medicaid Services (CMS), the Department of Veterans Affairs (VA) calculates and reports hospital performance measures for several key conditions, including acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are designed to benchmark individual hospitals against how an average hospital performs when caring for patients with a similar case mix. Because readmissions to the hospital within 30 days of discharge are common and costly, this metric has garnered extensive attention in recent years.

To summarize the 30-day readmission metric, the VA utilizes the Strategic Analytics for Improvement and Learning (SAIL) system to present its findings internally to VA practitioners and leadership.2 The VA provides these data as a means to drive quality improvement and allow for comparison of individual hospitals’ performance across measures throughout the VA healthcare system. In 2010, the VA began using and publicly reporting the CMS-derived 30-day Risk-Stratified Readmission Rate (RSRR) on the Hospital Compare website.3 Similar to CMS, the VA uses three years of combined data so that patients, providers, and other stakeholders can compare individual hospitals’ performance across these measures.1 In response to such reporting, hospitals and healthcare organizations have implemented large-scale programmatic interventions in an attempt to reduce readmissions.4-6 A recent assessment of how hospitals within the Medicare fee-for-service program have responded to such reporting found large degrees of variability, with more than half of the participating institutions facing penalties due to greater-than-expected readmission rates.5 Although the VA utilizes the same CMS-derived model in its assessments and reporting, the variability and distribution around this metric are not publicly reported, making it difficult to ascertain how individual VA hospitals compare with one another. Without such information, individual facilities cannot benchmark the quality of their care against others, nor can the VA recognize which interventions addressing readmissions are working and which are not. Although previous assessments of interinstitutional variance have been performed in Medicare populations,7 a focused analysis of such variance within the VA has yet to be performed.

In this study, we performed a multiyear assessment of the CMS-derived 30-day RSRR metric for AMI, HF, and pneumonia to determine whether it can detect interfacility variability, and thus whether it is a useful measure to drive VA quality improvement or distinguish VA facility performance.

METHODS

Data Source

We used VA administrative and Medicare claims data from 2010 to 2012. After identifying index hospitalizations to VA hospitals, we obtained patients’ respective inpatient Medicare claims data from the Medicare Provider Analysis and Review (MedPAR) and Outpatient files. All Medicare records were linked to VA records via scrambled Social Security numbers and were provided by the VA Information Resource Center. This study was approved by the San Francisco VA Medical Center Institutional Review Board.

Study Sample

Our cohort consisted of hospitalized VA beneficiary and Medicare fee-for-service patients who were aged ≥65 years and admitted to and discharged from a VA acute care center with a primary discharge diagnosis of AMI, HF, or pneumonia. These conditions were chosen as they are publicly reported and frequently used for interfacility comparisons. Because studies have found that inclusion of secondary payer data (ie, CMS data) may affect hospital-profiling outcomes, we included Medicare data on all available patients.8 We excluded hospitalizations that resulted in a transfer to another acute care facility and those admitted to observation status at the index admission. To ensure a full year of data for risk adjustment, beneficiaries were included only if they had been enrolled in Medicare for the 12 months prior to and including the date of the index admission.

Index hospitalizations were first identified using VA-only inpatient data, similar to methods outlined by CMS and endorsed by the National Quality Forum for hospital profiling.9 An index hospitalization was defined as an acute inpatient discharge between 2010 and 2012 in which the principal diagnosis was AMI, HF, or pneumonia. We excluded in-hospital deaths, discharges against medical advice, and, for the AMI cohort only, discharges on the same day as admission. Patients could have multiple admissions per year, but only admissions occurring more than 30 days after discharge from an index admission were eligible to be counted as an additional index admission.

Outcomes

A readmission was defined as any unplanned rehospitalization to either non-VA or VA acute care facilities for any cause within 30 days of discharge from the index hospitalization. Readmissions to observation status or nonacute or rehabilitation units, such as skilled nursing facilities, were not included. Planned readmissions for elective procedures, such as elective chemotherapy and revascularization following an AMI index admission, were not considered as an outcome event.

Risk Standardization for 30-day Readmission

Using approaches developed by CMS,10-12 we calculated hospital-specific 30-day RSRRs for each VA. Briefly, the RSRR is a ratio of the number of predicted readmissions within 30 days of discharge to the expected number of readmissions within 30 days of hospital discharge, multiplied by the national unadjusted 30-day readmission rate. This measure calculates hospital-specific RSRRs using hierarchical logistic regression models, which account for clustering of patients within hospitals and risk-adjusting for differences in case-mix, during the assessed time periods.13 This approach simultaneously models two levels (patient and hospital) to account for the variance in patient outcomes within and between hospitals.14 At the patient level, the model uses the log odds of readmissions as the dependent variable and age and selected comorbidities as the independent variables. The second level models the hospital-specific intercepts. According to CMS guidelines, the analysis was limited to facilities with at least 25 patient admissions annually for each condition. All readmissions were attributed to the hospital that initially discharged the patient to a nonacute setting.
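Setting aside the hierarchical model that produces the predicted and expected counts, the RSRR itself reduces to simple arithmetic. The counts below are hypothetical, not study data:

```python
# RSRR: ratio of a hospital's model-predicted readmissions to its
# case-mix-expected readmissions, scaled by the national unadjusted rate.

def rsrr(predicted, expected, national_rate):
    """Risk-standardized readmission rate for one hospital."""
    return (predicted / expected) * national_rate

# A hospital predicted to have 30 readmissions where the national model,
# given its case mix, expects 25, against a 20% national unadjusted rate:
print(round(rsrr(predicted=30, expected=25, national_rate=0.20), 3))  # 0.24
```

A ratio above 1 (RSRR above the national rate) indicates more readmissions than expected for that hospital's case mix; a ratio below 1 indicates fewer.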

Analysis

We examined and reported the distribution of patient and clinical characteristics at the hospital level. For each condition, we determined the number of hospitals that had a sufficient number of admissions (n ≥ 25) to be included in the analyses. We calculated the mean, median, and interquartile range for the observed unadjusted readmission rates across all included hospitals.

Similar to methods used by CMS, we used one year of data in the VA to assess hospital quality and variation in facility performance. First, we calculated the 30-day RSRRs using one year (2012) of data. To assess how variability changed with higher facility volume (ie, more years included in the analysis), we also calculated the 30-day RSRRs using two and three years of data. For this, we identified and quantified the number of hospitals whose RSRRs were calculated as being above or below the national VA average (mean ± 95% CI). Specifically, we calculated the number and percentage of hospitals that were classified as either above (+95% CI) or below the national average (−95% CI) using data from all three time periods. All analyses were conducted using SAS Enterprise Guide, Version 7.1. The SAS statistical packages made available by the CMS Measure Team were used to calculate RSRRs.
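The benchmarking step, classifying each hospital against the national average (mean ± 95% CI), can be sketched as below. This is one plausible reading of the comparison described above, with hypothetical inputs:

```python
# Flag a hospital as worse (or better) than the national average only when
# its RSRR falls above (or below) the national mean ± 95% CI band.

def classify(hospital_rsrr, national_mean, ci_half_width):
    """Return the hospital's performance category relative to the national band."""
    if hospital_rsrr > national_mean + ci_half_width:
        return "worse than average"
    if hospital_rsrr < national_mean - ci_half_width:
        return "better than average"
    return "no different from average"

print(classify(0.25, national_mean=0.20, ci_half_width=0.02))  # worse than average
print(classify(0.16, national_mean=0.20, ci_half_width=0.02))  # better than average
print(classify(0.21, national_mean=0.20, ci_half_width=0.02))  # no different from average
```

Under this convention, most low-volume hospitals fall inside the band, which is the discriminability problem the study documents.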

RESULTS

Patient Characteristics

Patients were predominantly older men (98.3% male). Among those hospitalized for AMI, most had a history of previous coronary artery bypass graft (CABG) surgery (69.1%), acute coronary syndrome (ACS; 66.2%), or documented coronary atherosclerosis (89.8%). Similarly, patients admitted for HF had high rates of prior CABG (71.3%) and HF (94.6%), in addition to cardiac arrhythmias (69.3%) and diabetes (60.8%). Patients admitted with a diagnosis of pneumonia had high rates of prior CABG (61.9%), chronic obstructive pulmonary disease (COPD; 58.1%), and previous diagnosis of pneumonia (78.8%; Table 1). Patient characteristics for two and three years of data are presented in Supplementary Table 1.

VA Hospitals with Sufficient Volume to Be Included in Profiling Assessments

There were 146 acute care hospitals in the VA. In 2012, 56 (38%) VA hospitals had at least 25 admissions for AMI, 102 (70%) had at least 25 admissions for HF, and 106 (73%) had at least 25 admissions for pneumonia (Table 1), thereby qualifying for analysis based on CMS criteria for 30-day RSRR calculation. The study sample included 3,571 patients with AMI, 10,609 patients with HF, and 10,191 patients with pneumonia.

30-Day Readmission Rates

The mean observed readmission rate in 2012 was 20% (95% CI 19%-21%) among patients admitted for AMI, 20% (95% CI 19%-20%) for patients admitted with HF, and 15% (95% CI 15%-16%) for patients admitted with pneumonia. No significant variation from these rates was noted following risk standardization across hospitals (Table 2). Observed and risk-standardized rates were also calculated for two and three years of data (Supplementary Table 2) and were not grossly different from those obtained using a single year of data.

In 2012, two hospitals (2%) exhibited HF RSRRs worse than the national average (+95% CI), whereas no hospital demonstrated worse-than-average rates (+95% CI) for AMI or pneumonia (Table 3, Figure 1). Similarly, in 2012, only three hospitals had RSRRs better than the national average (−95% CI) for HF and pneumonia.



We combined data from three years to increase the volume of admissions per hospital. Even after combining three years of data across all three conditions, only four hospitals (range: 3.5%-5.3%) had RSRRs worse than the national average (+95% CI). However, four (5.3%), eight (7.1%), and 11 (9.7%) VA hospitals had RSRRs better than the national average (−95% CI).

DISCUSSION

We found that the CMS-derived 30-day risk-stratified readmission metric for AMI, HF, and pneumonia showed little variation among VA hospitals. Low institutional 30-day readmission volume appears to be a fundamental limitation, requiring multiple years of data to make this metric clinically meaningful. As the largest integrated healthcare system in the United States, the VA relies upon, and makes large-scale programmatic decisions based on, such performance data. The inability to detect meaningful interhospital variation in a timely manner suggests that the CMS-derived 30-day RSRR may not be a sufficiently sensitive metric to distinguish facility performance or drive quality improvement initiatives within the VA.

First, we found it notable that among the 146 VA medical centers available for analysis,15 only 38% to 77% of hospitals qualified for evaluation when using CMS-based participation criteria, which exclude institutions with fewer than 25 episodes per year. Although this low degree of qualification was most dramatic when using one year of data (range: 38%-72%), it did not improve substantially when we combined three years of data (range: 52%-77%). These findings highlight the population and systems differences between CMS and VA populations16 and further support the idea that CMS-derived models may not be optimized for use in the VA healthcare system.

Our findings are particularly relevant given the quarterly rate at which these data are reported within the VA SAIL scorecard.2 The VA designed SAIL for internal benchmarking to spotlight successful strategies of top-performing institutions and promote high-quality, value-based care. Using one year of data, the minimum required by the CMS models, we found that quarterly feedback (ie, three months of data) may not be informative or useful, given that few hospitals are able to differentiate themselves from the mean (±95% CI). Although the capacity to distinguish between high and low performers improves when combining hospital admissions over three years, this is not a reasonable timeline for institutions to wait for quality comparisons. Furthermore, although the VA does present its data on CMS’s Hospital Compare website using three years of combined data, the variability and distribution of those results are not supplied.3

This lack of discriminability raises concerns about the ability to compare hospital performance between low- and high-volume institutions. Although these models function well in CMS settings with large patient volumes in which greater variability exists,5 they lose their capacity to discriminate when applied to low-volume settings such as the VA. Given that several hospitals in the US are small community hospitals with low patient volumes,17 this issue probably occurs in other non-VA settings. Although our study focuses on the VA, others have been able to compare VA and non-VA settings’ variation and distribution. For example, Nuti et al. explored the differences in 30-day RSRRs among hospitalized patients with AMI, HF, and pneumonia and similarly showed little variation, narrow distributions, and few outliers in the VA setting compared to those in the non-VA setting. For small patient volume institutions, including the VA, a focus on high-volume services, outcomes, and measures (ie, blood pressure control, medication reconciliation, etc.) may offer more discriminability between high- and low-performing facilities. For example, Patel et al. found that VA process measures in patients with HF (ie, beta-blocker and ACE-inhibitor use) can be used as valid quality measures as they exhibited consistent reliability over time and validity with adjusted mortality rates, whereas the 30-day RSRR did not.18

Our findings may have substantial financial, resource, and policy implications. Automatically developing and reporting measures created for the Medicare program in the VA may not be a good use of VA resources. In addition, facilities may react to these reported outcomes and expend local resources and finances to implement interventions to improve on a performance outcome whose measure is statistically no different than the vast majority of its comparators. Such events have been highlighted in the public media and have pointed to the fact that small changes in quality, or statistical errors themselves, can have large ramifications within the VA’s hospital rating system.19

These findings may also add to the discussion on whether public reporting of health and quality outcomes improves patient care. Since the CMS began public reporting on RSRRs in 2009, these rates have fallen for all three examined conditions (AMI, HF, and pneumonia),7,20,21 in addition to several other health outcomes.17 Although recent studies have suggested that these decreased rates have been driven by the CMS-sponsored Hospital Readmissions Reduction Program (HRRP),22 others have suggested that these findings are consistent with ongoing secular trends toward decreased readmissions and may not be completely explained by public reporting alone.23 Moreover, prior work has also found that readmissions may be strongly impacted by factors external to the hospital setting, such as patients’ social demographics (ie, household income, social isolation), that are not currently captured in risk-prediction models.24 Given the small variability we see in our data, public reporting within the VA is probably not beneficial, as only a small number of facilities are outliers based on RSRR.

Our study has several limitations. First, although we adapted the CMS model to the VA, we did not include gender in the model because >99% of all patient admissions were male. Second, we assessed only three medical conditions that were being tracked by both CMS and VA during this time period, and these outcomes may not be representative of other aspects of care and cannot be generalized to other medical conditions. Finally, more contemporary data could lead to differing results – though we note that no large-scale structural or policy changes addressing readmission rates have been implemented within the VA since our study period.

The results of this study suggest that the CMS-derived 30-day risk-stratified readmission metric for AMI, HF, and pneumonia may not have the capacity to properly detect interfacility variance and thus may not be an optimal quality indicator within the VA. As the VA and other healthcare systems continually strive to improve the quality of care they provide, they will require more accurate and timely metrics for which to index their performance.

 

 

Disclosures

The authors have nothing to disclose

 

References

1. Centers for Medicare & Medicaid Services. VA Data. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/VA-Data.html. Published October 19, 2016. Accessed July 15, 2018.
2. Strategic Analytics for Improvement and Learning (SAIL) - Quality of Care. https://www.va.gov/QUALITYOFCARE/measure-up/Strategic_Analytics_for_Improvement_and_Learning_SAIL.asp. Accessed July 15, 2018.
3. Centers for Medicare & Medicaid Services. VA Data. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/VA-Data.html. Accessed September 10, 2018.
4. Bradley EH, Curry L, Horwitz LI, et al. Hospital strategies associated with 30-day readmission rates for patients with heart failure. Circ Cardiovasc Qual Outcomes. 2013;6(4):444-450. doi: 10.1161/CIRCOUTCOMES.111.000101. PubMed
5. Desai NR, Ross JS, Kwon JY, et al. Association between hospital penalty status under the hospital readmission reduction program and readmission rates for target and nontarget conditions. JAMA. 2016;316(24):2647-2656. doi: 10.1001/jama.2016.18533. PubMed
6. McIlvennan CK, Eapen ZJ, Allen LA. Hospital readmissions reduction program. Circulation. 2015;131(20):1796-1803. doi: 10.1161/CIRCULATIONAHA.114.010270. PubMed
7. Suter LG, Li S-X, Grady JN, et al. National patterns of risk-standardized mortality and readmission after hospitalization for acute myocardial infarction, heart failure, and pneumonia: update on publicly reported outcomes measures based on the 2013 release. J Gen Intern Med. 2014;29(10):1333-1340. doi: 10.1007/s11606-014-2862-5. PubMed
8. O’Brien WJ, Chen Q, Mull HJ, et al. What is the value of adding Medicare data in estimating VA hospital readmission rates? Health Serv Res. 2015;50(1):40-57. doi: 10.1111/1475-6773.12207. PubMed
9. NQF: All-Cause Admissions and Readmissions 2015-2017 Technical Report. https://www.qualityforum.org/Publications/2017/04/All-Cause_Admissions_and_Readmissions_2015-2017_Technical_Report.aspx. Accessed August 2, 2018.
10. Keenan PS, Normand S-LT, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1(1):29-37. doi: 10.1161/CIRCOUTCOMES.108.802686. PubMed
11. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252. doi: 10.1161/CIRCOUTCOMES.110.957498. PubMed
12. Lindenauer PK, Normand S-LT, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150. doi: 10.1002/jhm.890. PubMed
13. Centers for Medicare & Medicaid Services. Outcome Measures. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/OutcomeMeasures.html. Published October 13, 2017. Accessed July 19, 2018.
14. Nuti SV, Qin L, Rumsfeld JS, et al. Association of admission to Veterans Affairs hospitals vs non-Veterans Affairs hospitals with mortality and readmission rates among older patients hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2016;315(6):582-592. doi: 10.1001/jama.2016.0278. PubMed
15. Veterans Health Administration. Locations. https://www.va.gov/directory/guide/division.asp?dnum=1. Accessed September 13, 2018.
16. Duan-Porter W, Martinson BC, Taylor B, et al. Evidence Review: Social Determinants of Health for Veterans. Washington (DC): Department of Veterans Affairs (US); 2017. http://www.ncbi.nlm.nih.gov/books/NBK488134/. Accessed June 13, 2018.
17. Fast Facts on U.S. Hospitals, 2018 | AHA. American Hospital Association. https://www.aha.org/statistics/fast-facts-us-hospitals. Accessed September 5, 2018.
18. Patel J, Sandhu A, Parizo J, Moayedi Y, Fonarow GC, Heidenreich PA. Validity of performance and outcome measures for heart failure. Circ Heart Fail. 2018;11(9):e005035. PubMed
19. Philipps D. Canceled Operations. Unsterile Tools. The V.A. Gave This Hospital 5 Stars. The New York Times. https://www.nytimes.com/2018/11/01/us/veterans-hospitals-rating-system-star.html. Published November 3, 2018. Accessed November 19, 2018.
20. DeVore AD, Hammill BG, Hardy NC, Eapen ZJ, Peterson ED, Hernandez AF. Has public reporting of hospital readmission rates affected patient outcomes?: Analysis of Medicare claims data. J Am Coll Cardiol. 2016;67(8):963-972. doi: 10.1016/j.jacc.2015.12.037. PubMed
21. Wasfy JH, Zigler CM, Choirat C, Wang Y, Dominici F, Yeh RW. Readmission rates after passage of the hospital readmissions reduction program: a pre-post analysis. Ann Intern Med. 2017;166(5):324-331. doi: 10.7326/M16-0185. PubMed
22. Centers for Medicare & Medicaid Services. Hospital Readmission Reduction Program. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/HRRP/Hospital-Readmission-Reduction-Program.html. Published March 26, 2018. Accessed July 19, 2018.
23. Radford MJ. Does public reporting improve care? J Am Coll Cardiol. 2016;67(8):973-975. doi: 10.1016/j.jacc.2015.12.038. PubMed
24. Barnett ML, Hsu J, McWilliams JM. Patient characteristics and differences in hospital readmission rates. JAMA Intern Med. 2015;175(11):1803-1812. doi: 10.1001/jamainternmed.2015.4660. PubMed

Journal of Hospital Medicine. 2019;14(5):266-271. Published online first February 20, 2019.

Using methodology created by the Centers for Medicare & Medicaid Services (CMS), the Department of Veterans Affairs (VA) calculates and reports hospital performance measures for several key conditions, including acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures benchmark individual hospitals against how an average hospital performs when caring for a similar case mix. Because readmissions to the hospital within 30 days of discharge are common and costly, this metric has garnered extensive attention in recent years.

To summarize the 30-day readmission metric, the VA utilizes the Strategic Analytics for Improvement and Learning (SAIL) system to present its findings internally to VA practitioners and leadership.2 The VA provides these data to drive quality improvement and allow comparison of individual hospitals’ performance across measures throughout the VA healthcare system. In 2010, the VA began using and publicly reporting the CMS-derived 30-day Risk-Stratified Readmission Rate (RSRR) on the Hospital Compare website.3 Like CMS, the VA uses three years of combined data so that patients, providers, and other stakeholders can compare individual hospitals’ performance across these measures.1 In response, hospitals and healthcare organizations have implemented quality improvement and large-scale programmatic interventions in an attempt to reduce readmissions.4-6 A recent assessment of how hospitals within the Medicare fee-for-service program have responded to such reporting found large degrees of variability, with more than half of the participating institutions facing penalties due to greater-than-expected readmission rates.5 Although the VA utilizes the same CMS-derived model in its assessments and reporting, the variability and distribution around this metric are not publicly reported, making it difficult to ascertain how individual VA hospitals compare with one another. Without such information, individual facilities cannot benchmark the quality of their care against others, nor can the VA recognize which interventions addressing readmissions are working and which are not. Although previous assessments of interinstitutional variance have been performed in Medicare populations,7 a focused analysis of such variance within the VA has yet to be performed.

In this study, we performed a multiyear assessment of whether the CMS-derived 30-day RSRR metric for AMI, HF, and pneumonia can detect interfacility variability well enough to distinguish VA facility performance and drive VA quality improvement.

 

 

METHODS

Data Source

We used VA administrative and Medicare claims data from 2010 to 2012. After identifying index hospitalizations to VA hospitals, we obtained patients’ respective inpatient Medicare claims data from the Medicare Provider Analysis and Review (MedPAR) and Outpatient files. All Medicare records were linked to VA records via scrambled Social Security numbers and were provided by the VA Information Resource Center. This study was approved by the San Francisco VA Medical Center Institutional Review Board.

Study Sample

Our cohort consisted of hospitalized VA beneficiary and Medicare fee-for-service patients who were aged ≥65 years and admitted to and discharged from a VA acute care center with a primary discharge diagnosis of AMI, HF, or pneumonia. These conditions were chosen because they are publicly reported and frequently used for interfacility comparisons. Because studies have found that inclusion of secondary payer data (ie, CMS data) may affect hospital-profiling outcomes, we included Medicare data on all available patients.8 We excluded hospitalizations that resulted in a transfer to another acute care facility and those admitted to observation status at the index admission. To ensure a full year of data for risk adjustment, beneficiaries were included only if they were enrolled in Medicare for the 12 months prior to and including the date of the index admission.

Index hospitalizations were first identified using VA-only inpatient data, similar to methods outlined by the CMS and endorsed by the National Quality Forum for hospital profiling.9 An index hospitalization was defined as an acute inpatient discharge between 2010 and 2012 in which the principal diagnosis was AMI, HF, or pneumonia. We excluded in-hospital deaths, discharges against medical advice, and, for the AMI cohort only, discharges on the same day as admission. Patients could have multiple admissions per year, but only admissions occurring more than 30 days after discharge from an index admission were eligible to count as an additional index admission.
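The index-admission selection rule can be sketched as follows (an illustrative Python sketch of the stated rule; the function name and record layout are ours, not the study's actual SAS code):

```python
from datetime import date

def select_index_admissions(stays):
    """Given one patient's (admit, discharge) stays in chronological order,
    keep a stay as a new index admission only if it begins more than 30 days
    after discharge from the most recent index admission; intervening stays
    are readmissions, not new index events."""
    index_stays, last_index_discharge = [], None
    for admit, discharge in stays:
        if last_index_discharge is None or (admit - last_index_discharge).days > 30:
            index_stays.append((admit, discharge))
            last_index_discharge = discharge
    return index_stays

stays = [(date(2012, 1, 1), date(2012, 1, 5)),    # first stay: index admission
         (date(2012, 1, 20), date(2012, 1, 22)),  # within 30 days: readmission only
         (date(2012, 3, 10), date(2012, 3, 12))]  # >30 days out: new index admission
print(len(select_index_admissions(stays)))  # 2
```

Note that the middle stay does not reset the 30-day clock: eligibility is measured from discharge of the last *index* admission, consistent with the rule described above.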

Outcomes

A readmission was defined as any unplanned rehospitalization to either non-VA or VA acute care facilities for any cause within 30 days of discharge from the index hospitalization. Readmissions to observation status or nonacute or rehabilitation units, such as skilled nursing facilities, were not included. Planned readmissions for elective procedures, such as elective chemotherapy and revascularization following an AMI index admission, were not considered as an outcome event.
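The outcome definition above reduces to a simple predicate (a minimal sketch; the field names are illustrative assumptions, not the study's schema):

```python
from datetime import date

def is_outcome_readmission(index_discharge, readmit_date, unplanned, acute_care):
    """Unplanned rehospitalization to any acute care facility (VA or non-VA)
    within 30 days of index discharge. Planned elective procedures and
    nonacute settings (eg, skilled nursing facilities) do not count."""
    days_out = (readmit_date - index_discharge).days
    return bool(unplanned and acute_care and 0 < days_out <= 30)

print(is_outcome_readmission(date(2012, 3, 1), date(2012, 3, 11), True, True))   # True
print(is_outcome_readmission(date(2012, 3, 1), date(2012, 3, 11), False, True))  # False: planned
print(is_outcome_readmission(date(2012, 3, 1), date(2012, 4, 15), True, True))   # False: day 45
```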

Risk Standardization for 30-day Readmission

Using approaches developed by CMS,10-12 we calculated hospital-specific 30-day RSRRs for each VA hospital. Briefly, the RSRR is the ratio of the number of predicted readmissions within 30 days of discharge to the number of expected readmissions within 30 days of discharge, multiplied by the national unadjusted 30-day readmission rate. The measure calculates hospital-specific RSRRs using hierarchical logistic regression models, which account for clustering of patients within hospitals and adjust for differences in case mix during the assessed time periods.13 This approach simultaneously models two levels (patient and hospital) to account for the variance in patient outcomes within and between hospitals.14 At the patient level, the model uses the log odds of readmission as the dependent variable and age and selected comorbidities as the independent variables. The second level models the hospital-specific intercepts. In accordance with CMS guidelines, the analysis was limited to facilities with at least 25 admissions annually for each condition. All readmissions were attributed to the hospital that initially discharged the patient to a nonacute setting.
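The RSRR ratio can be illustrated numerically (our own simplification of the hierarchical model, not the CMS implementation: patient risk enters through a fixed linear predictor, and "predicted" uses the hospital's own intercept while "expected" uses the national average intercept):

```python
import math

def sigmoid(z):
    """Inverse of the log-odds (logit) link used at the patient level."""
    return 1.0 / (1.0 + math.exp(-z))

def rsrr(linear_predictors, hospital_intercept, national_intercept, national_rate):
    """Predicted readmissions (hospital-specific intercept) over expected
    readmissions (national average intercept), scaled by the national
    unadjusted 30-day readmission rate."""
    predicted = sum(sigmoid(hospital_intercept + xb) for xb in linear_predictors)
    expected = sum(sigmoid(national_intercept + xb) for xb in linear_predictors)
    return predicted / expected * national_rate

# A hospital sitting exactly at the national intercept lands on the national
# rate; a higher intercept (more readmissions than expected) lands above it.
xbs = [-0.5, 0.0, 0.3, 0.8]  # hypothetical per-patient risk scores
print(rsrr(xbs, -1.4, -1.4, 0.20))          # 0.2
print(rsrr(xbs, -1.0, -1.4, 0.20) > 0.20)   # True
```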

 

 

Analysis

We examined and reported the distribution of patient and clinical characteristics at the hospital level. For each condition, we determined the number of hospitals that had a sufficient number of admissions (n ≥ 25) to be included in the analyses. We calculated the mean, median, and interquartile range for the observed unadjusted readmission rates across all included hospitals.

Similar to methods used by CMS, we used one year of data to assess hospital quality and variation in facility performance in the VA. First, we calculated the 30-day RSRRs using one year (2012) of data. To assess how variability changed with higher facility volume (ie, more years included in the analysis), we also calculated the 30-day RSRRs using two and three years of data. We then identified hospitals whose RSRRs fell above or below the national VA average (mean ± 95% CI), calculating the number and percentage of hospitals classified as above (+95% CI) or below (−95% CI) the national average in each of the three time periods. All analyses were conducted using SAS Enterprise Guide, Version 7.1; RSRRs were calculated with the SAS statistical packages made available by the CMS Measure Team.
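The outlier classification can be sketched with a simple screen (our own simplification: a 95% confidence band around the national mean of the facility RSRRs, rather than the measure-specific interval estimates produced by the CMS SAS packages):

```python
import statistics

def classify_hospitals(rsrrs):
    """Flag hospitals whose RSRR falls above (+95% CI) or below (-95% CI)
    a 95% confidence band around the national mean RSRR."""
    mean = statistics.mean(rsrrs)
    se = statistics.stdev(rsrrs) / len(rsrrs) ** 0.5  # standard error of the mean
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    return {"worse": [i for i, r in enumerate(rsrrs) if r > hi],
            "better": [i for i, r in enumerate(rsrrs) if r < lo]}

# With RSRRs clustered near 20%, only a clear outlier is flagged as worse
# than average, mirroring how few VA facilities separate from the mean.
print(classify_hospitals([0.19, 0.20, 0.21, 0.20, 0.30]))
```

Under this screen, tightly clustered rates produce few or no outliers, which is the phenomenon the study reports for VA facilities.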

RESULTS

Patient Characteristics

Patients were predominantly older males (98.3%). Among those hospitalized for AMI, most of them had a history of previous coronary artery bypass graft (CABG) (69.1%), acute coronary syndrome (ACS; 66.2%), or documented coronary atherosclerosis (89.8%). Similarly, patients admitted for HF had high rates of CABG (71.3%) and HF (94.6%), in addition to cardiac arrhythmias (69.3%) and diabetes (60.8%). Patients admitted with a diagnosis of pneumonia had high rates of CABG (61.9%), chronic obstructive pulmonary disease (COPD; 58.1%), and previous diagnosis of pneumonia (78.8%; Table 1). Patient characteristics for two and three years of data are presented in Supplementary Table 1.

VA Hospitals with Sufficient Volume to Be Included in Profiling Assessments

There were 146 acute care hospitals in the VA. In 2012, 56 (38%) VA hospitals had at least 25 admissions for AMI, 102 (70%) had at least 25 admissions for HF, and 106 (73%) had at least 25 admissions for pneumonia (Table 1), thereby qualifying for analysis under the CMS criteria for 30-day RSRR calculation. The study sample included 3,571 patients with AMI, 10,609 patients with HF, and 10,191 patients with pneumonia.

30-Day Readmission Rates

The mean observed readmission rates in 2012 were 20% (95% CI 19%-21%) among patients admitted for AMI, 20% (95% CI 19%-20%) for patients admitted with HF, and 15% (95% CI 15%-16%) for patients admitted with pneumonia. No significant variation from these rates was noted following risk standardization across hospitals (Table 2). Observed and risk-standardized rates calculated using two and three years of data (Supplementary Table 2) did not differ appreciably from those based on a single year.

In 2012, two hospitals (2%) exhibited HF RSRRs worse than the national average (+95% CI), whereas no hospital demonstrated worse-than-average rates (+95% CI) for AMI or pneumonia (Table 3, Figure 1). Similarly, in 2012, only three hospitals had RSRRs better than the national average (−95% CI) for HF and pneumonia.



We combined data from three years to increase the volume of admissions per hospital. Even after combining three years of data across all three conditions, only four hospitals (range: 3.5%-5.3%) had RSRRs worse than the national average (+95% CI). However, four (5.3%), eight (7.1%), and 11 (9.7%) VA hospitals had RSRRs better than the national average (−95% CI).

 

 

DISCUSSION

We found that the CMS-derived 30-day risk-stratified readmission metric for AMI, HF, and pneumonia showed little variation among VA hospitals. The lack of institutional 30-day readmission volume appears to be a fundamental limitation that subsequently requires multiple years of data to make this metric clinically meaningful. As the largest integrated healthcare system in the United States, the VA relies upon and makes large-scale programmatic decisions based on such performance data. The inability to detect meaningful interhospital variation in a timely manner suggests that the CMS-derived 30-day RSRR may not be a sensitive metric to distinguish facility performance or drive quality improvement initiatives within the VA.

First, we found it notable that among the 146 VA medical centers available for analysis,15 only between 38% and 77% of hospitals qualified for evaluation under CMS-based participation criteria, which exclude institutions with fewer than 25 episodes per year. Although this low degree of qualification for profiling was most dramatic when using one year of data (range: 38%-72%), it did not improve dramatically when we combined three years of data (range: 52%-77%). These findings highlight the population and systems differences between CMS and VA populations16 and further support the idea that CMS-derived models may not be optimized for use in the VA healthcare system.

Our findings are particularly relevant within the VA given the quarterly rate at which these data are reported in the VA SAIL scorecard.2 The VA designed SAIL for internal benchmarking to spotlight successful strategies of top-performing institutions and promote high-quality, value-based care. Our analysis using one year of data, the minimum required by the CMS models, showed that quarterly feedback (ie, three months of data) may not be informative or useful, given that few hospitals are able to differentiate themselves from the mean (±95% CI). Although the capacity to distinguish between high and low performers does improve when hospital admissions are combined over three years, this is not a reasonable timeline for institutions to wait for quality comparisons. Furthermore, although the VA does present its data on CMS’s Hospital Compare website using three years of combined data, the variability and distribution of such results are not supplied.3

This lack of discriminability raises concerns about the ability to compare hospital performance between low- and high-volume institutions. Although these models function well in CMS settings with large patient volumes, in which greater variability exists,5 they lose their capacity to discriminate when applied to low-volume settings such as the VA. Given that many hospitals in the US are small community hospitals with low patient volumes,17 this issue probably occurs in other non-VA settings as well. Although our study focuses on the VA, others have compared variation and distribution between VA and non-VA settings. For example, Nuti et al14 explored differences in 30-day RSRRs among patients hospitalized with AMI, HF, and pneumonia and similarly showed little variation, narrow distributions, and few outliers in the VA setting compared with the non-VA setting. For institutions with small patient volumes, including the VA, a focus on high-volume services, outcomes, and measures (eg, blood pressure control, medication reconciliation) may offer more discriminability between high- and low-performing facilities. For example, Patel et al18 found that VA process measures in patients with HF (eg, beta-blocker and ACE-inhibitor use) can serve as valid quality measures, as they exhibited consistent reliability over time and validity against adjusted mortality rates, whereas the 30-day RSRR did not.

Our findings may have substantial financial, resource, and policy implications. Automatically developing and reporting measures created for the Medicare program may not be a good use of VA resources. In addition, facilities may react to these reported outcomes by expending local resources and finances on interventions to improve a performance outcome that is statistically no different from that of the vast majority of its comparators. Such events have been highlighted in the public media, which has pointed out that small changes in quality, or statistical errors themselves, can have large ramifications within the VA’s hospital rating system.19

These findings may also add to the discussion on whether public reporting of health and quality outcomes improves patient care. Since CMS began publicly reporting RSRRs in 2009, these rates have fallen for all three examined conditions (AMI, HF, and pneumonia),7,20,21 in addition to several other health outcomes.17 Although recent studies have suggested that these decreases have been driven by the CMS-sponsored Hospital Readmissions Reduction Program (HRRP),22 others have suggested that the findings are consistent with ongoing secular trends toward decreased readmissions and may not be completely explained by public reporting alone.23 Moreover, prior work has found that readmissions may be strongly affected by factors external to the hospital setting, such as patients’ social demographics (eg, household income, social isolation), that are not currently captured in risk-prediction models.24 Given the small variability we observed in our data, public reporting within the VA is probably not beneficial, as only a small number of facilities are outliers based on RSRR.

Our study has several limitations. First, although we adapted the CMS model to the VA, we did not include gender in the model because >99% of all patient admissions were male. Second, we assessed only three medical conditions that were tracked by both CMS and the VA during this period; these outcomes may not be representative of other aspects of care and cannot be generalized to other medical conditions. Finally, more contemporary data could yield different results, though we note that no large-scale structural or policy changes addressing readmission rates have been implemented within the VA since our study period.

The results of this study suggest that the CMS-derived 30-day risk-stratified readmission metric for AMI, HF, and pneumonia may lack the capacity to detect interfacility variance and thus may not be an optimal quality indicator within the VA. As the VA and other healthcare systems continually strive to improve the quality of care they provide, they will require more accurate and timely metrics against which to index their performance.

 

 

Disclosures

The authors have nothing to disclose.

 

Using methodology created by the Centers for Medicare & Medicaid Services (CMS), the Department of Veterans Affairs (VA) calculates and reports hospital performance measures for several key conditions, including acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are designed to benchmark individual hospitals against how average hospitals perform when caring for a similar case-mix index. Because readmissions to the hospital within 30-days of discharge are common and costly, this metric has garnered extensive attention in recent years.

To summarize the 30-day readmission metric, the VA utilizes the Strategic Analytics for Improvement and Learning (SAIL) system to present internally its findings to VA practitioners and leadership.2 The VA provides these data as a means to drive quality improvement and allow for comparison of individual hospitals’ performance across measures throughout the VA healthcare system. Since 2010, the VA began using and publicly reporting the CMS-derived 30-day Risk-Stratified Readmission Rate (RSRR) on the Hospital Compare website.3 Similar to CMS, the VA uses three years of combined data so that patients, providers, and other stakeholders can compare individual hospitals’ performance across these measures.1 In response to this, hospitals and healthcare organizations have implemented quality improvement and large-scale programmatic interventions in an attempt to improve quality around readmissions.4-6 A recent assessment on how hospitals within the Medicare fee-for-service program have responded to such reporting found large degrees of variability, with more than half of the participating institutions facing penalties due to greater-than-expected readmission rates.5 Although the VA utilizes the same CMS-derived model in its assessments and reporting, the variability and distribution around this metric are not publicly reported—thus making it difficult to ascertain how individual VA hospitals compare with one another. Without such information, individual facilities may not know how to benchmark the quality of their care to others, nor would the VA recognize which interventions addressing readmissions are working, and which are not. Although previous assessments of interinstitutional variance have been performed in Medicare populations,7 a focused analysis of such variance within the VA has yet to be performed.

In this study, we performed a multiyear assessment of the CMS-derived 30-day RSRR metric for AMI, HF, and pneumonia as a useful measure to drive VA quality improvement or distinguish VA facility performance based on its ability to detect interfacility variability.

 

 

METHODS

Data Source

We used VA administrative and Medicare claims data from 2010 to 2012. After identifying index hospitalizations to VA hospitals, we obtained patients’ respective inpatient Medicare claims data from the Medicare Provider Analysis and Review (MedPAR) and Outpatient files. All Medicare records were linked to VA records via scrambled Social Security numbers and were provided by the VA Information Resource Center. This study was approved by the San Francisco VA Medical Center Institutional Review Board.

Study Sample

Our cohort consisted of hospitalized VA beneficiary and Medicare fee-for-service patients who were aged ≥65 years and admitted to and discharged from a VA acute care center with a primary discharge diagnosis of AMI, HF, or pneumonia. These comorbidities were chosen as they are publicly reported and frequently used for interfacility comparisons. Because studies have found that inclusion of secondary payer data (ie, CMS data) may affect hospital-profiling outcomes, we included Medicare data on all available patients.8 We excluded hospitalizations that resulted in a transfer to another acute care facility and those admitted to observation status at their index admission. To ensure a full year of data for risk adjustment, beneficiaries were included only if they were enrolled in Medicare for 12 months prior to and including the date of the index admission.

Index hospitalizations were first identified using VA-only inpatient data similar to methods outlined by the CMS and endorsed by the National Quality Forum for Hospital Profiling.9 An index hospitalization was defined as an acute inpatient discharge between 2010 and 2012 in which the principal diagnosis was AMI, HF, or pneumonia. We excluded in-hospital deaths, discharges against medical advice, and--for the AMI cohort only--discharges on the same day as admission. Patients may have multiple admissions per year, but only admissions after 30 days of discharge from an index admission were eligible to be included as an additional index admission.

Outcomes

A readmission was defined as any unplanned rehospitalization to either non-VA or VA acute care facilities for any cause within 30 days of discharge from the index hospitalization. Readmissions to observation status or nonacute or rehabilitation units, such as skilled nursing facilities, were not included. Planned readmissions for elective procedures, such as elective chemotherapy and revascularization following an AMI index admission, were not considered as an outcome event.

Risk Standardization for 30-day Readmission

Using approaches developed by CMS,10-12 we calculated hospital-specific 30-day RSRRs for each VA facility. Briefly, the RSRR is the ratio of the number of predicted readmissions within 30 days of discharge to the number of expected readmissions within 30 days of discharge, multiplied by the national unadjusted 30-day readmission rate. The measure calculates hospital-specific RSRRs using hierarchical logistic regression models, which account for the clustering of patients within hospitals and adjust for differences in case-mix during the assessed time periods.13 This approach simultaneously models two levels (patient and hospital) to account for the variance in patient outcomes within and between hospitals.14 At the patient level, the model uses the log odds of readmission as the dependent variable and age and selected comorbidities as the independent variables. The second level models the hospital-specific intercepts. In accordance with CMS guidelines, the analysis was limited to facilities with at least 25 admissions annually for each condition. All readmissions were attributed to the hospital that initially discharged the patient to a nonacute setting.
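The ratio underlying the RSRR can be sketched in a few lines. In the actual measure, the per-patient probabilities come from the hierarchical logistic model (predicted risks use the hospital-specific intercept; expected risks use the national-average intercept); here they are supplied directly as made-up numbers for illustration.

```python
def rsrr(pred_probs, exp_probs, national_rate):
    """Risk-standardized readmission rate for one hospital:
    (predicted readmissions) / (expected readmissions),
    scaled by the national unadjusted 30-day readmission rate."""
    predicted = sum(pred_probs)  # per-patient risks, hospital-specific intercept
    expected = sum(exp_probs)    # per-patient risks, national-average intercept
    return (predicted / expected) * national_rate

# Illustrative numbers: a hospital whose predicted risks run 10% above
# what its case-mix alone (the expected risks) would suggest
hospital_rsrr = rsrr([0.22, 0.11, 0.33], [0.20, 0.10, 0.30], national_rate=0.20)
```

With these numbers the predicted-to-expected ratio is 1.1, so the hospital's RSRR (about 0.22) sits above the 20% national rate; a ratio below 1 would place it below.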

Analysis

We examined and reported the distribution of patient and clinical characteristics at the hospital level. For each condition, we determined the number of hospitals that had a sufficient number of admissions (n ≥ 25) to be included in the analyses. We calculated the mean, median, and interquartile range for the observed unadjusted readmission rates across all included hospitals.

Similar to the methods used by CMS, we used one year of VA data to assess hospital quality and variation in facility performance, first calculating 30-day RSRRs using one year (2012) of data. To assess how variability changed with higher facility volume (ie, more years included in the analysis), we also calculated 30-day RSRRs using two and three years of data. For each time period, we calculated the number and percentage of hospitals classified as above (+95% CI) or below (−95% CI) the national VA average (mean ± 95% CI). All analyses were conducted using SAS Enterprise Guide, version 7.1; RSRRs were calculated with the SAS statistical packages made available by the CMS Measure Team.
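The outlier classification can be sketched as follows. This is a simplification: the interval used here is the 95% CI of the national mean RSRR (mean ± 1.96 × SE), which mirrors the mean ± 95% CI comparison described above but not the full CMS interval-estimation machinery.

```python
import statistics

def classify_hospitals(rsrrs, z=1.96):
    """Split hospitals into those above (+95% CI) and below (-95% CI)
    the national mean RSRR; returns two lists of hospital indices."""
    mean = statistics.mean(rsrrs)
    se = statistics.stdev(rsrrs) / len(rsrrs) ** 0.5  # SE of the mean
    lo, hi = mean - z * se, mean + z * se
    worse = [i for i, r in enumerate(rsrrs) if r > hi]   # worse than average
    better = [i for i, r in enumerate(rsrrs) if r < lo]  # better than average
    return worse, better

# Ten hospitals at the 20% national rate plus one high and one low outlier
worse, better = classify_hospitals([0.20] * 10 + [0.30, 0.10])
```

Here only the two outliers are flagged; the other ten hospitals are statistically indistinguishable from the mean, which is the pattern reported across most VA facilities below.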

RESULTS

Patient Characteristics

Patients were predominantly older males (98.3%). Among those hospitalized for AMI, most had a history of coronary artery bypass graft (CABG) surgery (69.1%), acute coronary syndrome (ACS; 66.2%), or documented coronary atherosclerosis (89.8%). Similarly, patients admitted for HF had high rates of CABG (71.3%) and prior HF (94.6%), in addition to cardiac arrhythmias (69.3%) and diabetes (60.8%). Patients admitted with a diagnosis of pneumonia had high rates of CABG (61.9%), chronic obstructive pulmonary disease (COPD; 58.1%), and previous diagnoses of pneumonia (78.8%; Table 1). Patient characteristics for two and three years of data are presented in Supplementary Table 1.

VA Hospitals with Sufficient Volume to Be Included in Profiling Assessments

There were 146 acute care hospitals in the VA. In 2012, 56 (38%) VA hospitals had at least 25 admissions for AMI, 102 (70%) had at least 25 admissions for HF, and 106 (73%) had at least 25 admissions for pneumonia (Table 1), thereby qualifying for analysis under the CMS criteria for 30-day RSRR calculation. The study sample included 3,571 patients with AMI, 10,609 patients with HF, and 10,191 patients with pneumonia.

30-Day Readmission Rates

The mean observed readmission rates in 2012 were 20% (95% CI, 19%-21%) among patients admitted for AMI, 20% (95% CI, 19%-20%) for patients admitted with HF, and 15% (95% CI, 15%-16%) for patients admitted with pneumonia. No significant variation from these rates was noted following risk standardization across hospitals (Table 2). Observed and risk-standardized rates calculated using two and three years of data (Supplementary Table 2) did not differ substantially from the single-year estimates.

In 2012, two hospitals (2%) exhibited HF RSRRs worse than the national average (+95% CI), whereas no hospital demonstrated worse-than-average rates (+95% CI) for AMI or pneumonia (Table 3, Figure 1). Similarly, in 2012, only three hospitals had RSRRs better than the national average (−95% CI) for HF and pneumonia.



We combined data from three years to increase the volume of admissions per hospital. Even after combining three years of data, no more than four hospitals per condition (3.5%-5.3%) had RSRRs worse than the national average (+95% CI). However, four (5.3%), eight (7.1%), and 11 (9.7%) VA hospitals had RSRRs better than the national average (−95% CI) for AMI, HF, and pneumonia, respectively.

DISCUSSION

We found that the CMS-derived 30-day risk-standardized readmission metric for AMI, HF, and pneumonia showed little variation among VA hospitals. Low institutional 30-day readmission volume appears to be a fundamental limitation, one that requires multiple years of data to make this metric clinically meaningful. As the largest integrated healthcare system in the United States, the VA relies upon such performance data to make large-scale programmatic decisions. The inability to detect meaningful interhospital variation in a timely manner suggests that the CMS-derived 30-day RSRR may not be a sufficiently sensitive metric to distinguish facility performance or drive quality improvement initiatives within the VA.

First, we found it notable that, among the 146 VA medical centers available for analysis,15 only 38% to 77% of hospitals qualified for evaluation under the CMS participation criteria, which exclude institutions with fewer than 25 episodes per year. Although this low rate of qualification was most dramatic when using one year of data (range: 38%-72%), it did not improve substantially when we combined three years of data (range: 52%-77%). These findings highlight the population and systems differences between the CMS and VA populations16 and further support the idea that CMS-derived models may not be optimized for use in the VA healthcare system.

Our findings are particularly relevant within the VA given the quarterly rate at which these data are reported in the VA SAIL scorecard.2 The VA designed SAIL for internal benchmarking to spotlight successful strategies of top-performing institutions and to promote high-quality, value-based care. Using one year of data, the minimum required by the CMS models, we found that quarterly feedback (ie, three months of data) may not be informative or useful, given that few hospitals are able to differentiate themselves from the mean (±95% CI). Although the capacity to distinguish between high and low performers improves when hospital admissions are combined over three years, this is not a reasonable length of time for institutions to wait for quality comparisons. Furthermore, although the VA does present its data on CMS's Hospital Compare website using three years of combined data, the variability and distribution of those results are not supplied.3

This lack of discriminability raises concerns about the ability to compare hospital performance between low- and high-volume institutions. Although these models function well in CMS settings with large patient volumes, in which greater variability exists,5 they lose their capacity to discriminate when applied to low-volume settings such as the VA. Given that many hospitals in the US are small community hospitals with low patient volumes,17 this issue likely arises in other non-VA settings as well. Although our study focused on the VA, others have compared variation and distribution between VA and non-VA settings. For example, Nuti et al14 explored differences in 30-day RSRRs among patients hospitalized with AMI, HF, and pneumonia and similarly showed little variation, narrow distributions, and few outliers in the VA setting compared with the non-VA setting. For institutions with small patient volumes, including the VA, a focus on high-volume services, outcomes, and measures (eg, blood pressure control, medication reconciliation) may offer more discriminability between high- and low-performing facilities. For example, Patel et al found that VA process measures in patients with HF (eg, beta-blocker and ACE-inhibitor use) can serve as valid quality measures, as they exhibited consistent reliability over time and validity against adjusted mortality rates, whereas the 30-day RSRR did not.18

Our findings may have substantial financial, resource, and policy implications. Automatically developing and reporting measures created for the Medicare program within the VA may not be a good use of VA resources. In addition, facilities may react to these reported outcomes by expending local resources and finances on interventions to improve a performance outcome that is statistically no different from that of the vast majority of its comparators. Such events have been highlighted in the public media, which has noted that small changes in quality, or statistical errors themselves, can have large ramifications within the VA's hospital rating system.19

These findings may also add to the discussion of whether public reporting of health and quality outcomes improves patient care. Since the CMS began publicly reporting RSRRs in 2009, these rates have fallen for all three examined conditions (AMI, HF, and pneumonia),7,20,21 in addition to several other health outcomes.17 Although recent studies have suggested that these decreases have been driven by the CMS-sponsored Hospital Readmissions Reduction Program (HRRP),22 others have argued that the findings are consistent with ongoing secular trends toward decreased readmissions and may not be completely explained by public reporting alone.23 Moreover, prior work has found that readmissions may be strongly affected by factors external to the hospital setting, such as patients' social demographics (eg, household income, social isolation), that are not currently captured in risk-prediction models.24 Given the small variability in our data, public reporting within the VA is unlikely to be beneficial, as only a small number of facilities are outliers based on RSRR.

Our study has several limitations. First, although we adapted the CMS model to the VA, we did not include gender in the model because >99% of all patient admissions were male. Second, we assessed only three medical conditions that were tracked by both the CMS and the VA during this period; these outcomes may not be representative of other aspects of care and cannot be generalized to other medical conditions. Finally, more contemporary data could lead to differing results, though we note that no large-scale structural or policy changes addressing readmission rates have been implemented within the VA since our study period.

The results of this study suggest that the CMS-derived 30-day risk-standardized readmission metric for AMI, HF, and pneumonia may not have the capacity to properly detect interfacility variance and thus may not be an optimal quality indicator within the VA. As the VA and other healthcare systems continually strive to improve the quality of care they provide, they will require more accurate and timely metrics with which to index their performance.


Disclosures

The authors have nothing to disclose.


References

1. Centers for Medicare & Medicaid Services. VA Data. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/VA-Data.html. Published October 19, 2016. Accessed July 15, 2018.
2. Strategic Analytics for Improvement and Learning (SAIL) - Quality of Care. https://www.va.gov/QUALITYOFCARE/measure-up/Strategic_Analytics_for_Improvement_and_Learning_SAIL.asp. Accessed July 15, 2018.
3. Centers for Medicare & Medicaid Services. VA Data. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/VA-Data.html. Accessed September 10, 2018.
4. Bradley EH, Curry L, Horwitz LI, et al. Hospital strategies associated with 30-day readmission rates for patients with heart failure. Circ Cardiovasc Qual Outcomes. 2013;6(4):444-450. doi: 10.1161/CIRCOUTCOMES.111.000101. PubMed
5. Desai NR, Ross JS, Kwon JY, et al. Association between hospital penalty status under the hospital readmission reduction program and readmission rates for target and nontarget conditions. JAMA. 2016;316(24):2647-2656. doi: 10.1001/jama.2016.18533. PubMed
6. McIlvennan CK, Eapen ZJ, Allen LA. Hospital readmissions reduction program. Circulation. 2015;131(20):1796-1803. doi: 10.1161/CIRCULATIONAHA.114.010270. PubMed
7. Suter LG, Li S-X, Grady JN, et al. National patterns of risk-standardized mortality and readmission after hospitalization for acute myocardial infarction, heart failure, and pneumonia: update on publicly reported outcomes measures based on the 2013 release. J Gen Intern Med. 2014;29(10):1333-1340. doi: 10.1007/s11606-014-2862-5. PubMed
8. O’Brien WJ, Chen Q, Mull HJ, et al. What is the value of adding Medicare data in estimating VA hospital readmission rates? Health Serv Res. 2015;50(1):40-57. doi: 10.1111/1475-6773.12207. PubMed
9. NQF: All-Cause Admissions and Readmissions 2015-2017 Technical Report. https://www.qualityforum.org/Publications/2017/04/All-Cause_Admissions_and_Readmissions_2015-2017_Technical_Report.aspx. Accessed August 2, 2018.
10. Keenan PS, Normand S-LT, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1(1):29-37. doi: 10.1161/CIRCOUTCOMES.108.802686. PubMed
11. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252. doi: 10.1161/CIRCOUTCOMES.110.957498. PubMed
12. Lindenauer PK, Normand S-LT, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150. doi: 10.1002/jhm.890. PubMed
13. Centers for Medicare & Medicaid Services. Outcome Measures. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/OutcomeMeasures.html. Published October 13, 2017. Accessed July 19, 2018.
14. Nuti SV, Qin L, Rumsfeld JS, et al. Association of admission to Veterans Affairs hospitals vs non-Veterans Affairs hospitals with mortality and readmission rates among older men hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2016;315(6):582-592. doi: 10.1001/jama.2016.0278. PubMed
15. US Department of Veterans Affairs. Veterans Health Administration - Locations. https://www.va.gov/directory/guide/division.asp?dnum=1. Accessed September 13, 2018.
16. Duan-Porter W, Martinson BC, Taylor B, et al. Evidence Review: Social Determinants of Health for Veterans. Washington (DC): Department of Veterans Affairs (US); 2017. http://www.ncbi.nlm.nih.gov/books/NBK488134/. Accessed June 13, 2018.
17. Fast Facts on U.S. Hospitals, 2018 | AHA. American Hospital Association. https://www.aha.org/statistics/fast-facts-us-hospitals. Accessed September 5, 2018.
18. Patel J, Sandhu A, Parizo J, Moayedi Y, Fonarow GC, Heidenreich PA. Validity of performance and outcome measures for heart failure. Circ Heart Fail. 2018;11(9):e005035. PubMed
19. Philipps D. Canceled Operations. Unsterile Tools. The V.A. Gave This Hospital 5 Stars. The New York Times. https://www.nytimes.com/2018/11/01/us/veterans-hospitals-rating-system-star.html. Published November 3, 2018. Accessed November 19, 2018.
20. DeVore AD, Hammill BG, Hardy NC, Eapen ZJ, Peterson ED, Hernandez AF. Has public reporting of hospital readmission rates affected patient outcomes?: Analysis of Medicare claims data. J Am Coll Cardiol. 2016;67(8):963-972. doi: 10.1016/j.jacc.2015.12.037. PubMed
21. Wasfy JH, Zigler CM, Choirat C, Wang Y, Dominici F, Yeh RW. Readmission rates after passage of the hospital readmissions reduction program: a pre-post analysis. Ann Intern Med. 2017;166(5):324-331. doi: 10.7326/M16-0185. PubMed
22. Centers for Medicare & Medicaid Services. Hospital Readmission Reduction Program. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/HRRP/Hospital-Readmission-Reduction-Program.html. Published March 26, 2018. Accessed July 19, 2018.
23. Radford MJ. Does public reporting improve care? J Am Coll Cardiol. 2016;67(8):973-975. doi: 10.1016/j.jacc.2015.12.038. PubMed
24. Barnett ML, Hsu J, McWilliams JM. Patient characteristics and differences in hospital readmission rates. JAMA Intern Med. 2015;175(11):1803-1812. doi: 10.1001/jamainternmed.2015.4660. PubMed


Issue
Journal of Hospital Medicine 14(5)
Page Number
266-271. Published online first February 20, 2019.

© 2019 Society of Hospital Medicine

Correspondence Location
Charlie M. Wray, DO, MS; E-mail: [email protected]; Telephone: 415-595-9662; Twitter: @WrayCharles