Jennifer S. Myers, MD

Faculty Development for Hospitalists: A Call to Arms

Over the past two decades, the field of hospital medicine has gone from relative obscurity to a viable career pathway for approximately 50,000 physicians in this country.1 A subset of hospitalists pursue careers in academic medicine, a pathway that traditionally includes education and scholarship in addition to patient care. While the academic career pathway is well paved in many clinical specialties, it remains relatively underdeveloped for academic hospitalists, and what defines career success for this group is even less clear.

In this issue of the Journal of Hospital Medicine, Cumbler et al. performed a qualitative analysis to explore how early-career academic hospitalists self-define and perceive their career success.2 Drawing on interviews with 17 early-career hospitalists at 3 academic medical centers, the authors created a theoretical framework organized around a traditional conceptual model of career success that divides motivating factors into intrinsic and extrinsic. They found that early-career academic hospitalists (clinician-educators in their first 2-5 years) defined their career success almost exclusively around factors intrinsic to their day-to-day job, such as excitement about their daily work, developing proficiency in the delivery of high-quality clinical care, and passion for doing work that is meaningful to them. In addition to these immediate job satisfiers, many hospitalists emphasized long-term career success factors such as becoming an expert in a particular domain of hospital medicine and gaining respect and recognition within their local or national environment. Surprisingly, compensation and career advancement through promotion, two traditional extrinsic career success factors, were not uniformly valued.

These findings come at a critical time for our field in which early-career faculty outnumber mid- and late-career faculty by an order of magnitude. Indeed, how to develop, promote, sustain, and retain young hospitalists is a topic on the minds of most hospital medicine group directors. Putting aside the impact of hospitalist turnover on productivity, patient care outcomes, and morale within an individual hospital medicine group, we agree with the authors that understanding and cultivating career success for academic hospitalists is imperative for the future of our field. For this reason, we launched a formal faculty development program at Penn this year, which focuses on supporting the growth of hospitalists in their first two years on faculty. The findings of this study provide interesting new perspectives and encourage us to continue our focus on early-career academic hospitalists. We laud the previous efforts in this area and hope that the paper by Cumbler et al. encourages and inspires other programs to start or accelerate their hospitalist faculty development efforts.3-5

However, some findings from this study are perplexing, even a bit discouraging, for those invested in faculty development in academia. For example, the authors raise the possibility of a disconnect in how early-career hospitalists think about career success. On the one hand, the hospitalists interviewed in this study are happy doing their clinical work and cite this as a primary driver of their career success. On the other hand, they equate career success with developing expertise within a particular domain of hospital medicine, acquiring leadership roles, collaborating academically with other specialties or professions, or developing new innovations. Presumably this is part of the reason that they selected a job in an academic setting rather than a community setting. However, achieving these goals requires devoting time and effort to purposefully developing them. It is therefore critical to identify and develop mentors who can help early-career hospitalists identify, articulate, and build strategies to achieve both their short- and long-term career goals. One mentor–mentee conversation may reveal that an individual hospitalist values being an excellent clinician and has little interest in developing a niche within hospital medicine; another may reveal a lack of awareness of available professional development resources; still another may uncover a lack of realism regarding the time or skills it takes to achieve a particular career goal. These realities highlight an imperative for our field to develop robust and sustainable mentorship programs not only for early-career hospitalists but also for some mid-career hospitalists whose careers may not yet be fully developed. Indeed, one of the biggest challenges that has emerged in our experience with a faculty development program at Penn is creating meaningful mentorship and career development advice for mid-career hospitalists (late assistant or early associate professors who are typically 5-10 years into their careers).

We found it interesting that the hospitalists interviewed did not mention three of the four pillars of career satisfaction outlined in the white paper on hospitalist career satisfaction from the Society of Hospital Medicine: workload/schedule, autonomy/control, and community/environment.6 Perhaps this is because hospitalists, like many other professionals, recognize that feeling satisfied in one’s career is not the same as feeling successful. Satisfaction in one’s career refers to the foundational needs that one requires in order to feel content, whereas success is more often equated with achievement, even if that achievement is simply the attainment of one’s own goals. The reality is that, given the constant growth and change within teaching hospitals, and therefore within academic hospital medicine groups, tending to the satisfiers for hospitalists (eg, schedule and workload) often takes precedence over helping faculty achieve their individual career potential. We assert that despite the inherent difficulty, academic hospital medicine group leaders need to focus their attention on both the satisfaction and the career success of their early-career faculty.

Finally, this paper raises many interesting questions for researchers interested in the professional development of hospitalists. Are the career success perspectives of an early-career academic hospitalist different from those of an early-career intensivist or emergency medicine physician in an academic setting? Hospital medicine has historically been likened to both fields given the similar intensity of clinical work and the fact that all three fields were created around the need for specialists in a care setting as opposed to a disease state. It is possible that the vision of success for young academic physicians as a whole has changed as the millennial generation has entered the workforce. Do early-career hospitalists look different from early-career general internists in academic settings? The latter group has more promoted faculty in its divisions who can serve as role models and mentors and who have demonstrated success in a variety of replicable career pathways. The possibility that the definition of career success evolves over time also emerged as a theme from this paper. Do mid-career academic hospitalists find that the excitement for daily clinical work wanes over time, leaving them feeling less successful and looking for something more?

In conclusion, the findings of Cumbler et al. should promote unrest among leaders of academic hospital medicine groups and their departments of medicine. While it is inspiring to see so many early-career hospitalists focused on their daily happiness at work, we are unsure whether they have the knowledge, tools, and guidance to achieve their self-professed academic goals, which many equate with career success. Given the continued growth of the hospital medicine workforce, we view this important new work as a national call to arms for the purposeful creation of academic hospitalist faculty development programs.

Disclosures

Dr. Myers and Dr. Greysen have nothing to disclose.

References

1. Wachter RM, Goldman L. Zero to 50,000 - the 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011.
2. Cumbler E, Yirdaw E, Kneeland P, et al. What is career success for academic hospitalists? A qualitative analysis of early-career faculty perspectives. J Hosp Med. 2018;13(5):372-377. doi: 10.12788/jhm.2924. Published online first January 31, 2018.
3. Nagarur A, O’Neill RM, Lawton D, Greenwald JL. Supporting faculty development in hospital medicine: design and implementation of a personalized structured mentoring program. J Hosp Med. 2018;13(2):96-99.
4. Sehgal NL, Sharpe BA, Auerbach AA, Wachter RM. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6(3):161-166.
5. Howell E, Kravet S, Kisuule F, Wright SM. An innovative approach to supporting hospitalist physicians towards academic success. J Hosp Med. 2008;3(4):314-318.
6. Society of Hospital Medicine Career Satisfaction Task Force. A challenge for a new specialty: a white paper on hospitalist career satisfaction. http://dev.hospitalmedicine.org/Web/Media_Center/shm_white_papers.aspx. Accessed February 9, 2018.

©2018 Society of Hospital Medicine

Correspondence Location
Jennifer S. Myers, MD, Professor of Clinical Medicine, Section of Hospital Medicine, Division of General Internal Medicine, Perelman School of Medicine, University of Pennsylvania. 3400 Spruce Street, Maloney Building Suite 5033, Philadelphia, PA 19104; Telephone: (215)662-3797; Fax (215) 662-6250; Email: [email protected]

If You Book It, Will They Come? Attendance at Postdischarge Follow-Up Visits Scheduled by Inpatient Providers

Given growing incentives to reduce readmission rates, predischarge checklists and bundles have recommended that inpatient providers schedule postdischarge follow-up visits (PDFVs) for their hospitalized patients.1-4 PDFVs have been linked to lower readmission rates in patients with chronic conditions, including congestive heart failure, psychiatric illnesses, and chronic obstructive pulmonary disease.5-8 In contrast, the impact of PDFVs on readmissions in hospitalized general medicine populations has been mixed.9-12 Beyond the presence or absence of PDFVs, it may be a patient’s inability to keep scheduled PDFVs that contributes more strongly to preventable readmissions.11

This challenge, dealing with the 12% to 37% of patients who miss their visits (“no-shows”), is not new.13-17 In high-risk patient populations, such as those with substance abuse, diabetes, or human immunodeficiency virus, no-shows (NSs) have been linked to poorer short-term and long-term clinical outcomes.16,18-20 Additionally, NSs pose a challenge for outpatient clinics and the healthcare system at large. The financial cost of NSs ranges from approximately $200 per patient in 2 analyses to $7 million in cumulative lost revenue per year at 1 large academic health system.13,17,21 As such, increasing attendance at PDFVs is a potential target for improving both patient outcomes and clinic productivity.

Most prior PDFV research has focused on readmission risk rather than PDFV attendance as the primary outcome.5-12 However, given the patient-oriented benefits of attending PDFVs and the clinic-oriented benefits of avoiding vacant time slots, NS PDFVs represent an important missed opportunity for our healthcare delivery system. To our knowledge, risk factors for PDFV nonattendance have not yet been systematically studied. The aim of our study was to analyze PDFV nonattendance, particularly NSs and same-day cancellations (SDCs), for hospitalizations and clinics within our healthcare system.

METHODS

Study Design

We conducted an observational cohort study of adult patients from 10 medical units at the Hospital of the University of Pennsylvania (a 789-bed quaternary-care hospital within an urban, academic medical system) who were scheduled with at least 1 PDFV. Specifically, the patients included in our analysis were hospitalized on general internal medicine services or medical subspecialty services with discharge dates between April 1, 2014, and March 31, 2015. Hospitalizations included in our study had at least 1 PDFV scheduled with an outpatient provider affiliated with the University of Pennsylvania Health System (UPHS). PDFVs scheduled with unaffiliated providers were not examined.

Each PDFV was requested by a patient’s inpatient care team. Once the care team had determined that a PDFV was clinically warranted, a member of the team (generally a resident, advanced practice provider, medical student, or designee) either called the UPHS clinic to schedule an appointment time or e-mailed the outpatient UPHS provider directly to facilitate a more urgent PDFV appointment time. Once a PDFV time was confirmed, PDFV details (ie, date, time, location, and phone number) were electronically entered into the patient’s discharge instructions by the inpatient care team. At the time of discharge, nurses reviewed these instructions with their patients. All patients left the hospital with a physical copy of these instructions. As part of routine care at our institution, patients then received automated telephone reminders from their UPHS-affiliated outpatient clinic 48 hours prior to each PDFV.

Data Collection

Our study was determined to meet criteria for quality improvement by the University of Pennsylvania’s Institutional Review Board. We used our healthcare system’s integrated electronic medical record system to track the dates of initial PDFV requests, the dates of hospitalization, and actual PDFV dates. PDFVs were included if the appointment request was made while a patient was hospitalized, including the day of discharge. Our study methodology only allowed us to investigate PDFVs scheduled with UPHS outpatient providers. We did not review discharge instructions or survey non-UPHS clinics to quantify visits scheduled with other providers, for example, community health centers or external private practices.

Exclusion criteria included the following: (1) office visits with nonproviders, for example, scheduled diagnostic procedures or pharmacist appointments for warfarin dosing; (2) visits cancelled by inpatient providers prior to discharge; (3) visits for patients not otherwise eligible for UPHS outpatient care because of insurance reasons; and (4) visits scheduled for dates after a patient’s death. Our motivation for the third exclusion criterion was the infrequent and irregular process by which PDFVs were authorized for these patients. These patients and their characteristics are described in Supplementary Table 1 in more detail.

For each PDFV, we recorded age, gender, race, insurance status, driving distance, length of stay for index hospitalization, discharging service (general internal medicine vs subspecialty), postdischarge disposition (home, home with home care services such as nursing or physical therapy, or facility), the number of PDFVs scheduled per index hospitalization, PDFV specialty type (oncologic subspecialty, nononcologic medical subspecialty, nononcologic surgical subspecialty, primary care, or other specialty), PDFV season, and PDFV lead time (the number of days between the discharge date and PDFV). We consolidated oncologic specialties into 1 group given the integrated nature of our healthcare system’s comprehensive cancer center. “Other” PDFV specialty subtypes are described in Supplementary Table 2. Driving distances between patient postal codes and our hospital were calculated using Excel VBA Master (Salt Lake City, Utah) and were subsequently categorized into patient-level quartiles for further analysis. For cancelled PDFVs, we collected dates of cancellation relative to the date of the appointment itself.
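
As a concrete illustration of the quartile grouping (using simulated, hypothetical distances rather than study data, and written in R, the language used for our statistical analyses), distances can be binned as follows:

# Group driving distances (miles) into quartiles.
# The distances below are simulated for illustration only.
set.seed(42)
miles <- rexp(1000, rate = 1 / 17)  # hypothetical right-skewed distances

q_breaks <- quantile(miles, probs = seq(0, 1, by = 0.25), na.rm = TRUE)
distance_quartile <- cut(miles, breaks = q_breaks, include.lowest = TRUE,
                         labels = c("Q1 (nearest)", "Q2", "Q3", "Q4 (farthest)"))
table(distance_quartile)  # roughly 250 visits per quartile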


Study Outcomes

The primary study outcome was PDFV attendance. Each PDFV’s status was categorized by outpatient clinic staff as attended, cancelled, or NS. For cancelled appointments, cancellation dates and reasons (if entered by clinic representatives) were collected. In keeping with prior studies investigating outpatient nonattendance,17,22-25 we calculated collective NS/SDC rates for the variables listed above. We additionally calculated NS/SDC and attendance-as-scheduled rates stratified by the number of PDFVs per patient to assess for a “high-utilizer” effect with regard to PDFV attendance.
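
As an illustration of this stratified calculation, the following R sketch computes NS/SDC rates by the number of PDFVs per patient on a small, hypothetical data frame (the variable names are illustrative, not drawn from our dataset):

# Hypothetical data: one row per scheduled PDFV;
# ns_sdc = 1 indicates a no-show or same-day cancellation.
pdfv <- data.frame(
  patient_id = rep(c("A", "B", "C"), times = c(1, 2, 4)),
  ns_sdc     = c(0, 1, 0, 0, 1, 0, 1)
)
# Count visits per patient, then compute the NS/SDC rate within each stratum.
visits_per_patient <- ave(pdfv$ns_sdc, pdfv$patient_id, FUN = length)
round(tapply(pdfv$ns_sdc, visits_per_patient, mean), 2)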

Statistical Analysis

We used multivariable mixed-effects regression with a logit link to assess associations between age, gender, race, insurance, driving distance quartile, length of stay, discharging service, postdischarge disposition, the number of PDFVs per hospitalization, PDFV specialty type, PDFV season, PDFV lead time, and our NS/SDC outcome. The mixed-effects approach was used to account for correlation structures induced by patients who had multiple visits and for patients with multiple hospitalizations. Specifically, our model specified 2 levels of nesting (PDFVs nested within each hospitalization, which were nested within each patient) to obtain appropriate standard error estimates for our adjusted odds ratios (ORs). Correlation matrices and multivariable variance inflation factors were used to assess collinearity among the predictor variables. These assessments did not indicate strong collinearity; hence, all predictors were included in the model. Only driving distance had a small amount of missing data (0.18% of driving distances were unavailable), so multiple imputation was not undertaken. Analyses were performed using R version 3.3.1 (R Foundation for Statistical Computing, Vienna, Austria).
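
A model of this form can be fit, for example, with the lme4 package in R. The sketch below is illustrative only: it uses simulated data and two placeholder covariates, and lme4 is one possible implementation rather than a record of the exact code used for our analysis:

# Minimal sketch of a mixed-effects logistic model with PDFVs nested in
# hospitalizations nested in patients. Data are simulated for illustration.
library(lme4)

set.seed(1)
n_patients <- 300  # each with 2 hospitalizations of 2 visits (1200 rows)
pdfv <- data.frame(
  patient_id = factor(rep(seq_len(n_patients), each = 4)),
  hosp_id    = factor(rep(seq_len(n_patients * 2), each = 2)),
  age65      = rbinom(n_patients * 4, 1, 0.4),   # placeholder covariates
  medicaid   = rbinom(n_patients * 4, 1, 0.2)
)
pdfv$ns_sdc <- rbinom(nrow(pdfv), 1, 0.25)

# (1 | patient_id / hosp_id) specifies random intercepts for patients and
# for hospitalizations within patients, mirroring the 2 levels of nesting.
fit <- glmer(ns_sdc ~ age65 + medicaid + (1 | patient_id / hosp_id),
             data = pdfv, family = binomial(link = "logit"))

exp(fixef(fit))                                    # adjusted odds ratios
exp(confint(fit, parm = "beta_", method = "Wald")) # approximate 95% CIs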

RESULTS

Baseline Characteristics

During the 1-year study period, there were 11,829 discrete hospitalizations in medical units at our hospital. Of these hospitalizations, 6136 (52%) had at least 1 UPHS-affiliated PDFV meeting our inclusion and exclusion criteria, as detailed in the Figure. Across these hospitalizations, 9258 PDFVs were scheduled on behalf of 4653 patients. Demographic characteristics for these patients, hospitalizations, and visits are detailed in Table 1. The median age of patients in our cohort was 61 years (interquartile range [IQR] 49-70, range 18-101). The median driving distance was 17 miles (IQR 4.3-38.8, range 0-2891). For hospitalizations, the median length of stay was 5 days (IQR 3-10, range 0-97). The median PDFV lead time, defined as the number of days between discharge and PDFV, was 12 days (IQR 6-23, range 1-60). Overall, 41% of patients (n = 1927) attended all of their PDFVs as scheduled; Supplementary Figure 1 lists patient-level PDFV attendance-as-scheduled percentages in more detail.

Incidence of NSs and SDCs

Twenty-five percent of PDFVs (n = 2303) were ultimately NS/SDCs; this included 1658 NSs (18% of all appointments) and 645 SDCs (7% of all appointments). Fifty-two percent of PDFVs (n = 4847) were kept as scheduled, while 23% (n = 2108) were cancelled before the day of the visit. Of the 2558 cancellations with valid cancellation dates, 49% (n = 1252) were cancelled 2 or fewer days beforehand, as shown in Supplementary Figure 2.

In Table 2, we show unadjusted NS/SDC rates and adjusted NS/SDC ORs based on patient and hospitalization characteristics. NS/SDC appointments were more likely to occur in patients who were black (adjusted OR 1.94, 95% confidence interval [CI], 1.63-2.32) or Medicaid insured (OR 1.41, 95% CI, 1.19-1.67). In contrast, NS/SDC appointments were less likely in elderly patients (age ≥65 years: OR 0.39, 95% CI, 0.31-0.49) and patients who lived further away (furthest quartile of driving distance: OR 0.65, 95% CI, 0.52-0.81). Longer hospitalizations were associated with higher NS/SDC rates (length of stay ≥15 days: OR 1.51, 95% CI, 1.22-1.88). In contrast, discharges from subspecialty services (OR 0.79, 95% CI, 0.68-0.93) had independently lower NS/SDC rates. Compared to discharges to home without services, NS/SDC rates were higher with discharges to home with services (OR 1.32, 95% CI, 1.01-1.36) and with discharges to facilities (OR 2.10, 95% CI, 1.70-2.60).

The presence of exactly 2 PDFVs per hospitalization was also associated with higher NS/SDC rates (OR 1.17, 95% CI, 1.01-1.36), compared to a single PDFV per hospitalization; however, the presence of more than 2 PDFVs per hospitalization was associated with lower NS/SDC rates (OR 0.82, 95% CI, 0.69-0.98). A separate analysis (data not shown) of potential high utilizers revealed a 15% NS/SDC rate for the top 0.5% of patients (median: 18 PDFVs each) and an 18% NS/SDC rate for the top 1% of patients (median: 14 PDFVs each) with regard to the numbers of PDFVs scheduled, compared to the 25% overall NS/SDC rate for all patients.


NS/SDC rates and adjusted ORs with regard to individual PDFV characteristics are displayed in Table 3. Nononcologic visits had higher NS/SDC rates than oncologic visits; for example, the NS/SDC rate for primary care visits was 39% (OR 2.62, 95% CI, 2.03-3.38), compared to 12% for oncologic visits. Appointments in the “other” specialty category also had high nonattendance rates, as further described in Supplementary Table 2. Summertime appointments were more likely to be attended (OR 0.81, 95% CI, 0.68-0.97) compared to those in the spring. PDFV lead time (the time interval between the discharge date and appointment date) was not associated with visit attendance.


DISCUSSION

PDFVs were scheduled on patients’ behalf for more than half of all medical hospitalizations at our institution, a rate that is consistent with previous studies.10,11,26 However, 1 in 4 of these PDFVs resulted in a NS/SDC. This figure contrasts sharply with our institution’s 10% overall NS/SDC rate for all outpatient visits (S. Schlegel, written communication, July 2016). In our study, patients who were younger, black, or Medicaid insured were more likely to miss their follow-up visits. Patients who lived farther from the study hospital had lower NS/SDC rates, which is consistent with another study of a low-income, urban patient population.27 In contrast, patients with longer lengths of stay, discharges with home care services, or discharges to another facility were more likely to miss their PDFVs. Reasons for this are likely multifactorial, including readmission to a hospital or feeling too unwell to leave home to attend PDFVs. Insurance policies regarding ambulance reimbursement and outpatient billing can cause confusion and may have contributed to higher NS/SDC rates for facility-bound patients.28,29

When comparing PDFV characteristics themselves, oncologic visits had the lowest NS/SDC incidence of any group analyzed in our study. This may be related to the inherent life-altering nature of a cancer diagnosis or to our cancer center’s use of patient navigators.23,30 In contrast, primary care clinics suffered from NS/SDC rates approaching 40%, a concerning finding given the importance of primary care coordination in the posthospitalization period.9,31 Why are primary care appointments so commonly missed? Some studies suggest that forgetting about the appointment is a leading reason.15,32,33 For PDFVs, this phenomenon may be amplified because the visits are not scheduled by patients themselves. Additionally, patients may paradoxically undervalue the benefit of an all-encompassing primary care visit compared to a PDFV focused on a specific problem (eg, a cardiology follow-up appointment for a patient with congestive heart failure). In particular, patients with limited health literacy may undervalue the capabilities of their primary care clinics.34,35

The low absolute number of primary care PDFVs (only 8% of all visits) scheduled for patients at our hospital was an unexpected finding. This low percentage is likely a function of the patient population hospitalized at our large, urban quaternary-care facility. First, an unknown number of patients may have had PDFVs manually scheduled with primary care providers external to our health system; these PDFVs were not captured within our study. Second, 71% of the hospitalizations in our study occurred in subspecialty services, for which specific primary care follow-up may not be as urgent. Supporting this fact, further analysis of the 6136 hospitalizations in our study (data not shown) revealed that 28% of the hospitalizations in general internal medicine were scheduled with at least 1 primary care PDFV as opposed to only 5% of subspecialty-service hospitalizations.

In contrast to several previous studies of outpatient nonattendance,14,24,25,36,37 we did not find that visits scheduled for time points further in the future were more likely to be missed. It may be that PDFV lead time does not affect attendance because of the unique manner in which PDFV times are scheduled and conveyed to patients. Unlike other appointments, patients do not schedule PDFVs themselves but instead learn about their PDFV dates as part of a large set of discharge instructions. This practice may result in poor recall of PDFV dates in recently hospitalized patients,38 regardless of the lead time between discharge and the visit itself.

Supplementary Table 1 details a 51% NS/SDC rate for the small number of PDFVs (n = 65) that were excluded a priori from our analysis because of general ineligibility for UPHS outpatient care. We specifically chose to exclude this population because of the infrequent and irregular process by which these PDFVs were authorized on a case-by-case basis, typically via active engagement by our hospital’s social work department. We did not study this population further but postulate that the 51% NS/SDC rate may reflect other social determinants of health that contribute to appointment nonadherence in a predominantly uninsured population.

Beyond their effect on patient outcomes, improving PDFV-related processes has the potential to boost both inpatient and outpatient provider satisfaction. From the standpoint of frontline inpatient providers (often resident physicians), calling outpatient clinics to request PDFVs is viewed as 1 of the top 5 administrative tasks that interfere with house staff education.39 Future interventions that involve patients in the PDFV scheduling process may improve inpatient workflow while simultaneously engaging patients in their own care. For example, asking clinic representatives to directly schedule PDFVs with hospitalized patients, either by phone or in person, has been shown in pilot studies to improve PDFV attendance and decrease readmissions.40-42 Conversely, NS/SDC visits harm outpatient provider productivity and decrease provider availability for other patients.13,17,43 Strategies to mitigate the impact of unfilled appointment slots (eg, deliberately overbooking time slots in advance) carry their own risks, including provider burnout.44 As such, preventing NSs may be superior to curing their adverse impacts. Many such strategies exist in the ambulatory setting,13,43,45 for example, better communication with patients through texting or goal-directed, personalized phone reminders.46-48

Our study methodology has several limitations. Most importantly, we were unable to measure PDFVs made with providers unaffiliated with UPHS. As previously noted, our low proportion of primary care PDFVs may specifically reflect patients with primary care providers outside of our health system. Similarly, our low percentage of Medicaid patients receiving PDFVs may be related to follow-up visits with nonaffiliated community health centers. We were unable to measure patient acuity and health literacy as potential predictors of NS/SDC rates. Driving distances were calculated from patient postal codes to our hospital, not to individual outpatient clinics. However, the majority of our hospital-affiliated clinics are located adjacent to our hospital; additionally, we grouped driving distances into quartiles for our analysis. We had initially attempted to differentiate between clinic-initiated and patient-initiated cancellations, but unfortunately, we found that the data were too unreliable to be used for further analysis (outlined in Supplementary Table 3). Lastly, because we studied patients in medical units at a single large, urban, academic center, our results are not generalizable to other settings (eg, community hospitals, hospitals with smaller networks of outpatient providers, or patients being discharged from surgical services or observation units).


CONCLUSION

Given national efforts to enhance postdischarge transitions of care, we aimed to analyze attendance at provider-scheduled PDFVs. Our finding that 25% of PDFVs resulted in NS/SDCs raises both questions and opportunities for inpatient and outpatient providers. Further research is needed to understand why so many patients miss their PDFVs, and we should work as a field to develop creative solutions to improve PDFV scheduling and attendance.

Acknowledgments

The authors acknowledge Marie Synnestvedt, PhD, and Manik Chhabra, MD, for their assistance with data gathering and statistical analysis. They also acknowledge Allison DeKosky, MD, Michael Serpa, BS, Michael McFall, and Scott Schlegel, MBA, for their assistance with researching this topic. They did not receive external compensation for their assistance outside of their usual salary support.

DISCLOSURE

Nothing to report.

References

1. Halasyamani L, Kripalani S, Coleman E, et al. Transition of care for hospitalized elderly patients - development of a discharge checklist for hospitalists. J Hosp Med. 2006;1(6):354-360.
2. Koehler BE, Richter KM, Youngblood L, et al. Reduction of 30-day postdischarge hospital readmission or emergency department (ED) visit rates in high-risk elderly medical patients through delivery of a targeted care bundle. J Hosp Med. 2009;4(4):211-218.
3. Soong C, Daub S, Lee JG, et al. Development of a checklist of safe discharge practices for hospital patients. J Hosp Med. 2013;8(8):444-449.
4. Rice YB, Barnes CA, Rastogi R, Hillstrom TJ, Steinkeler CN. Tackling 30-day, all-cause readmissions with a patient-centered transitional care bundle. Popul Health Manag. 2016;19(1):56-62.
5. Nelson EA, Maruish MM, Axler JL. Effects of discharge planning and compliance with outpatient appointments on readmission rates. Psychiatr Serv. 2000;51(7):885-889.
6. Gavish R, Levy A, Dekel OK, Karp E, Maimon N. The association between hospital readmission and pulmonologist follow-up visits in patients with chronic obstructive pulmonary disease. Chest. 2015;148(2):375-381.
7. Jackson C, Shahsahebi M, Wedlake T, DuBard CA. Timeliness of outpatient follow-up: an evidence-based approach for planning after hospital discharge. Ann Fam Med. 2015;13(2):115-122.
8. Donaho EK, Hall AC, Gass JA, et al. Protocol-driven allied health post-discharge transition clinic to reduce hospital readmissions in heart failure. J Am Heart Assoc. 2015;4(12):e002296.
9. Misky GJ, Wald HL, Coleman EA. Post-hospitalization transitions: examining the effects of timing of primary care provider follow-up. J Hosp Med. 2010;5(7):392-397.
10. Grafft CA, McDonald FS, Ruud KL, Liesinger JT, Johnson MG, Naessens JM. Effect of hospital follow-up appointment on clinical event outcomes and mortality. Arch Intern Med. 2010;171(11):955-960.
11. Auerbach AD, Kripalani S, Vasilevskis EE, et al. Preventability and causes of readmissions in a national cohort of general medicine patients. JAMA Intern Med. 2016;176(4):484-493.
12. Field TS, Ogarek J, Garber L, Reed G, Gurwitz JH. Association of early post-discharge follow-up by a primary care physician and 30-day rehospitalization among older adults. J Gen Intern Med. 2015;30(5):565-571.
13. Quinn K. It’s no-show time! Med Group Manage Assoc Connexion. 2007;7(6):44-49.
14. Whittle J, Schectman G, Lu N, Baar B, Mayo-Smith MF. Relationship of scheduling interval to missed and cancelled clinic appointments. J Ambulatory Care Manage. 2008;31(4):290-302.
15. Kaplan-Lewis E, Percac-Lima S. No-show to primary care appointments: why patients do not come. J Prim Care Community Health. 2013;4(4):251-255.
16. Molfenter T. Reducing appointment no-shows: going from theory to practice. Subst Use Misuse. 2013;48(9):743-749.
17. Kheirkhah P, Feng Q, Travis LM, Tavakoli-Tabasi S, Sharafkhaneh A. Prevalence, predictors and economic consequences of no-shows. BMC Health Serv Res. 2016;16(1):13.
18. Colubi MM, Perez-Elias MJ, Elias L, et al. Missing scheduled visits in the outpatient clinic as a marker of short-term admissions and death. HIV Clin Trials. 2012;13(5):289-295.
19. Obialo CI, Hunt WC, Bashir K, Zager PG. Relationship of missed and shortened hemodialysis treatments to hospitalization and mortality: observations from a US dialysis network. Clin Kidney J. 2012;5(4):315-319.
20. Hwang AS, Atlas SJ, Cronin P, et al. Appointment “no-shows” are an independent predictor of subsequent quality of care and resource utilization outcomes. J Gen Intern Med. 2015;30(10):1426-1433.
21. Perez FD, Xie J, Sin A, et al. Characteristics and direct costs of academic pediatric subspecialty outpatient no-show events. J Healthc Qual. 2014;36(4):32-42.
22. Huang Y, Zuniga P. Effective cancellation policy to reduce the negative impact of patient no-show. J Oper Res Soc. 2013;65(5):605-615.
23. Percac-Lima S, Cronin PR, Ryan DP, Chabner BA, Daly EA, Kimball AB. Patient navigation based on predictive modeling decreases no-show rates in cancer care. Cancer. 2015;121(10):1662-1670.
24. Torres O, Rothberg MB, Garb J, Ogunneye O, Onyema J, Higgins T. Risk factor model to predict a missed clinic appointment in an urban, academic, and underserved setting. Popul Health Manag. 2015;18(2):131-136.
25. Eid WE, Shehata SF, Cole DA, Doerman KL. Predictors of nonattendance at an endocrinology outpatient clinic. Endocr Pract. 2016;22(8):983-989.
26. Kashiwagi DT, Burton MC, Kirkland LL, Cha S, Varkey P. Do timely outpatient follow-up visits decrease hospital readmission rates? Am J Med Qual. 2012;27(1):11-15.
27. Miller AJ, Chae E, Peterson E, Ko AB. Predictors of repeated “no-showing” to clinic appointments. Am J Otolaryngol. 2015;36(3):411-414.
28. ASCO. Billing challenges for residents of skilled nursing facilities. J Oncol Pract. 2008;4(5):245-248.
29. Centers for Medicare & Medicaid Services. SE0433: skilled nursing facility consolidated billing as it relates to ambulance services. Medicare Learning Network Matters. https://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNMattersArticles/downloads/se0433.pdf. Accessed February 14, 2017.
30. Luckett R, Pena N, Vitonis A, Bernstein MR, Feldman S. Effect of patient navigator program on no-show rates at an academic referral colposcopy clinic. J Womens Health (Larchmt). 2015;24(7):608-615.
31. Jones CD, Vu MB, O’Donnell CM, et al. A failure to communicate: a qualitative exploration of care coordination between hospitalists and primary care providers around patient hospitalizations. J Gen Intern Med. 2015;30(4):417-424.
32. George A, Rubin G. Non-attendance in general practice: a systematic review and its implications for access to primary health care. Fam Pract. 2003;20(2):178-184.
33. Samuels RC, Ward VL, Melvin P, et al. Missed appointments: factors contributing to high no-show rates in an urban pediatrics primary care clinic. Clin Pediatr (Phila). 2015;54(10):976-982.
34. Kangovi S, Barg FK, Carter T, Long JA, Shannon R, Grande D. Understanding why patients of low socioeconomic status prefer hospitals over ambulatory care. Health Aff (Millwood). 2013;32(7):1196-1203.
35. Long T, Genao I, Horwitz LI. Reasons for readmission in an underserved high-risk population: a qualitative analysis of a series of inpatient interviews. BMJ Open. 2013;3(9):e003212.
36. Lee VJ, Earnest A, Chen MI, Krishnan B. Predictors of failed attendances in a multi-specialty outpatient centre using electronic databases. BMC Health Serv Res. 2005;5:51.
37. Daggy J, Lawley M, Willis D, et al. Using no-show modeling to improve clinic performance. Health Informatics J. 2010;16(4):246-259.
38. Horwitz LI, Moriarty JP, Chen C, et al. Quality of discharge practices and patient understanding at an academic medical center. JAMA Intern Med. 2013;173(18):1715-1722.
39. Vidyarthi AR, Katz PP, Wall SD, Wachter RM, Auerbach AD. Impact of reduced duty hours on residents’ education satisfaction at the University of California, San Francisco. Acad Med. 2006;81(1):76-81.
40. Coffey C, Kufta J. Patient-centered post-discharge appointment scheduling improves readmission rates. Paper presented at: Society of Hospital Medicine Annual Meeting 2011; Grapevine, Texas.
41. Chang R, Spahlinger D, Kim CS. Re-engineering the post-discharge appointment process for general medicine patients. Patient. 2012;5(1):27-32.
42. Haftka A, Cerasale MT, Paje D. Direct patient participation in discharge follow-up appointment scheduling. Paper presented at: Society of Hospital Medicine Annual Meeting 2015; National Harbor, MD.
43. Stubbs ND, Geraci SA, Stephenson PL, Jones DB, Sanders S. Methods to reduce outpatient non-attendance. Am J Med Sci. 2012;344(3):211-219.
44. Kros J, Dellana S, West D. Overbooking increases patient access at East Carolina University’s Student Health Services Clinic. Interfaces. 2009;39(3):271-287.
45. Hills LS. How to handle patients who miss appointments or show up late. J Med Pract Manage. 2009;25(3):166-170.
46. Price H, Waters AM, Mighty D, et al. Texting appointment reminders reduces ‘Did not attend’ rates, is popular with patients and is cost-effective. Int J STD AIDS. 2009;20:142-144.
47. Parikh A, Gupta K, Wilson AC, Fields K, Cosgrove NM, Kostis JB. The effectiveness of outpatient appointment reminder systems in reducing no-show rates. Am J Med. 2010;123(6):542-548.
48. Shah SJ, Cronin P, Hong CS, et al. Targeted reminder phone calls to patients at high risk of no-show for primary care appointment: a randomized trial. J Gen Intern Med. 2016;31(12):1460-1466.

 

 


If You Book It, Will They Come? Attendance at Postdischarge Follow-Up Visits Scheduled by Inpatient Providers

Given growing incentives to reduce readmission rates, predischarge checklists and bundles have recommended that inpatient providers schedule postdischarge follow-up visits (PDFVs) for their hospitalized patients.1-4 PDFVs have been linked to lower readmission rates in patients with chronic conditions, including congestive heart failure, psychiatric illnesses, and chronic obstructive pulmonary disease.5-8 In contrast, the impact of PDFVs on readmissions in hospitalized general medicine populations has been mixed.9-12 Beyond the presence or absence of PDFVs, it may be a patient’s inability to keep scheduled PDFVs that contributes more strongly to preventable readmissions.11

The challenge of dealing with the 12% to 37% of patients who miss their visits (“no-shows”) is not new.13-17 In high-risk patient populations, such as those with substance abuse, diabetes, or human immunodeficiency virus, no-shows (NSs) have been linked to poorer short-term and long-term clinical outcomes.16,18-20 Additionally, NSs pose a challenge for outpatient clinics and the healthcare system at large. The financial cost of NSs ranges from approximately $200 per patient in 2 analyses to $7 million in cumulative lost revenue per year at 1 large academic health system.13,17,21 As such, increasing attendance at PDFVs is a potential target for improving both patient outcomes and clinic productivity.

Most prior PDFV research has focused on readmission risk rather than PDFV attendance as the primary outcome.5-12 However, given the patient-oriented benefits of attending PDFVs and the clinic-oriented benefits of avoiding vacant time slots, NS PDFVs represent an important missed opportunity for our healthcare delivery system. To our knowledge, risk factors for PDFV nonattendance have not yet been systematically studied. The aim of our study was to analyze PDFV nonattendance, particularly NSs and same-day cancellations (SDCs), for hospitalizations and clinics within our healthcare system.

METHODS

Study Design

We conducted an observational cohort study of adult patients from 10 medical units at the Hospital of the University of Pennsylvania (a 789-bed quaternary-care hospital within an urban, academic medical system) who were scheduled with at least 1 PDFV. Specifically, the patients included in our analysis were hospitalized on general internal medicine services or medical subspecialty services with discharge dates between April 1, 2014, and March 31, 2015. Hospitalizations included in our study had at least 1 PDFV scheduled with an outpatient provider affiliated with the University of Pennsylvania Health System (UPHS). PDFVs scheduled with unaffiliated providers were not examined.

Each PDFV was requested by a patient’s inpatient care team. Once the care team had determined that a PDFV was clinically warranted, a member of the team (generally a resident, advanced practice provider, medical student, or designee) either called the UPHS clinic to schedule an appointment time or e-mailed the outpatient UPHS provider directly to facilitate a more urgent PDFV appointment time. Once a PDFV time was confirmed, PDFV details (ie, date, time, location, and phone number) were electronically entered into the patient’s discharge instructions by the inpatient care team. At the time of discharge, nurses reviewed these instructions with their patients. All patients left the hospital with a physical copy of these instructions. As part of routine care at our institution, patients then received automated telephone reminders from their UPHS-affiliated outpatient clinic 48 hours prior to each PDFV.

Data Collection

Our study was determined to meet criteria for quality improvement by the University of Pennsylvania’s Institutional Review Board. We used our healthcare system’s integrated electronic medical record system to track the dates of initial PDFV requests, the dates of hospitalization, and actual PDFV dates. PDFVs were included if the appointment request was made while a patient was hospitalized, including the day of discharge. Our study methodology only allowed us to investigate PDFVs scheduled with UPHS outpatient providers. We did not review discharge instructions or survey non-UPHS clinics to quantify visits scheduled with other providers, for example, community health centers or external private practices.

Exclusion criteria included the following: (1) office visits with nonproviders, for example, scheduled diagnostic procedures or pharmacist appointments for warfarin dosing; (2) visits cancelled by inpatient providers prior to discharge; (3) visits for patients not otherwise eligible for UPHS outpatient care because of insurance reasons; and (4) visits scheduled for dates after a patient’s death. Our motivation for the third exclusion criterion was the infrequent and irregular process by which PDFVs were authorized for these patients. These patients and their characteristics are described in Supplementary Table 1 in more detail.

For each PDFV, we recorded age, gender, race, insurance status, driving distance, length of stay for index hospitalization, discharging service (general internal medicine vs subspecialty), postdischarge disposition (home, home with home care services such as nursing or physical therapy, or facility), the number of PDFVs scheduled per index hospitalization, PDFV specialty type (oncologic subspecialty, nononcologic medical subspecialty, nononcologic surgical subspecialty, primary care, or other specialty), PDFV season, and PDFV lead time (the number of days between the discharge date and PDFV). We consolidated oncologic specialties into 1 group given the integrated nature of our healthcare system’s comprehensive cancer center. “Other” PDFV specialty subtypes are described in Supplementary Table 2. Driving distances between patient postal codes and our hospital were calculated using Excel VBA Master (Salt Lake City, Utah) and were subsequently categorized into patient-level quartiles for further analysis. For cancelled PDFVs, we collected dates of cancellation relative to the date of the appointment itself.
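
The patient-level quartile categorization of driving distances is straightforward to reproduce. The sketch below is a minimal illustration in Python (pandas) with hypothetical data and column names; the authors used Excel VBA, so this is an illustrative equivalent, not their actual code.

```python
import pandas as pd

# Hypothetical patient-level table; 'driving_distance_miles' is assumed
# to have been computed upstream (postal code to hospital).
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "driving_distance_miles": [2.1, 4.3, 9.0, 17.0, 25.5, 38.8, 120.0, 410.0],
})

# Patient-level quartiles of driving distance, labeled Q1 (nearest)
# through Q4 (farthest); pd.qcut splits at the 25th/50th/75th percentiles.
patients["distance_quartile"] = pd.qcut(
    patients["driving_distance_miles"],
    q=4,
    labels=["Q1", "Q2", "Q3", "Q4"],
)
print(patients)
```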


Study Outcomes

The primary study outcome was PDFV attendance. Each PDFV’s status was categorized by outpatient clinic staff as attended, cancelled, or NS. For cancelled appointments, cancellation dates and reasons (if entered by clinic representatives) were collected. In keeping with prior studies investigating outpatient nonattendance, we calculated collective NS/SDC rates for the variables listed above.17,22-25 We additionally calculated NS/SDC and attendance-as-scheduled rates stratified by the number of PDFVs per patient to assess for a “high-utilizer” effect with regard to PDFV attendance.
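
As a concrete illustration of how the composite outcome and the collective rates can be computed, here is a minimal Python (pandas) sketch. The table, flag, and column names are hypothetical, and identifying SDCs from cancellation dates is simplified to a boolean flag.

```python
import pandas as pd

# Hypothetical visit-level table: one row per PDFV, with the clinic-assigned
# status and a same-day-cancellation flag derived from cancellation dates.
visits = pd.DataFrame({
    "status": ["attended", "no_show", "cancelled", "cancelled", "no_show", "attended"],
    "same_day_cancel": [False, False, True, False, False, False],
    "specialty": ["oncology", "primary_care", "primary_care", "oncology", "other", "oncology"],
})

# A visit counts toward the NS/SDC outcome if it was a no-show, or if it
# was cancelled on the day of the appointment itself.
visits["ns_sdc"] = (visits["status"] == "no_show") | (
    (visits["status"] == "cancelled") & visits["same_day_cancel"]
)

# Collective NS/SDC rate for each level of a predictor variable.
print(visits.groupby("specialty")["ns_sdc"].mean())
```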

Statistical Analysis

We used multivariable mixed-effects regression with a logit link to assess associations between age, gender, race, insurance, driving distance quartile, length of stay, discharging service, postdischarge disposition, the number of PDFVs per hospitalization, PDFV specialty type, PDFV season, PDFV lead time, and our NS/SDC outcome. The mixed-effects approach was used to account for correlation structures induced by patients who had multiple visits and for patients with multiple hospitalizations. Specifically, our model specified 2 levels of nesting (PDFVs nested within each hospitalization, which were nested within each patient) to obtain appropriate standard error estimates for our adjusted odds ratios (ORs). Correlation matrices and multivariable variance inflation factors were used to assess collinearity among the predictor variables. These assessments did not indicate strong collinearity; hence, all predictors were included in the model. Only driving distance had a small amount of missing data (0.18% of driving distances were unavailable), so multiple imputation was not undertaken. Analyses were performed using R version 3.3.1 (R Foundation for Statistical Computing, Vienna, Austria).
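
The authors fit this model in R (the specific package is not named). For illustration only, the sketch below expresses the same two-level nested structure in Python using statsmodels’ Bayesian binomial mixed GLM, an approximate stand-in for a frequentist mixed-effects logit; all column names are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical visit-level analysis table: one row per scheduled PDFV,
# with a binary ns_sdc outcome and categorical predictors.
df = pd.read_csv("pdfv_visits.csv")

# Random intercepts for patients, and for hospitalizations nested within
# patients (each hospitalization carries a globally unique ID, so this
# second variance component encodes the nesting).
vc_formulas = {
    "patient": "0 + C(patient_id)",
    "hospitalization": "0 + C(hospitalization_id)",
}

model = BinomialBayesMixedGLM.from_formula(
    "ns_sdc ~ C(age_group) + C(race) + C(insurance) + C(distance_quartile)"
    " + C(los_group) + C(service) + C(disposition) + C(n_pdfv_group)"
    " + C(specialty) + C(season) + C(lead_time_group)",
    vc_formulas,
    df,
)
result = model.fit_vb()  # variational Bayes fit; fit_map() is an alternative

# Exponentiate the fixed-effect estimates to express them as odds ratios.
print(np.exp(result.fe_mean))
```

In R, the closest analogue would be a glmer-style formula with nested random intercepts, for example ns_sdc ~ predictors + (1 | patient_id/hospitalization_id).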

RESULTS

Baseline Characteristics

During the 1-year study period, there were 11,829 discrete hospitalizations in medical units at our hospital. Of these hospitalizations, 6136 (52%) had at least 1 UPHS-affiliated PDFV meeting our inclusion and exclusion criteria, as detailed in the Figure. Across these hospitalizations, 9258 PDFVs were scheduled on behalf of 4653 patients. Demographic characteristics for these patients, hospitalizations, and visits are detailed in Table 1. The median age of patients in our cohort was 61 years (interquartile range [IQR] 49-70, range 18-101). The median driving distance was 17 miles (IQR 4.3-38.8, range 0-2891). For hospitalizations, the median length of stay was 5 days (IQR 3-10, range 0-97). The median PDFV lead time (the number of days between discharge and the PDFV) was 12 days (IQR 6-23, range 1-60). Overall, 41% of patients (n = 1927) attended all of their PDFVs as scheduled; Supplementary Figure 1 lists patient-level PDFV attendance-as-scheduled percentages in more detail.

Incidence of NSs and SDCs

Twenty-five percent of PDFVs (n = 2303) were ultimately NS/SDCs; this included 1658 NSs (18% of all appointments) and 645 SDCs (7% of all appointments). Fifty-two percent of PDFVs (n = 4847) were kept as scheduled, while 23% (n = 2108) were cancelled before the day of the visit. Of the 2558 cancellations with valid cancellation dates, 49% (n = 1252) were cancelled 2 or fewer days beforehand, as shown in Supplementary Figure 2.
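
The reported percentages follow directly from the counts above; as a quick arithmetic check, this minimal sketch uses only figures quoted in this paragraph:

```python
# Totals reported above: 9258 scheduled PDFVs in all.
total_pdfvs = 9258
no_shows = 1658
same_day_cancels = 645
kept = 4847
cancelled_earlier = 2108

ns_sdc = no_shows + same_day_cancels  # 2303 combined NS/SDCs
print(f"NS/SDC: {ns_sdc} ({ns_sdc / total_pdfvs:.0%})")                      # ~25%
print(f"kept as scheduled: {kept / total_pdfvs:.0%}")                        # ~52%
print(f"cancelled before visit day: {cancelled_earlier / total_pdfvs:.0%}")  # ~23%
```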

In Table 2, we show unadjusted NS/SDC rates and adjusted NS/SDC ORs based on patient and hospitalization characteristics. NS/SDC appointments were more likely to occur in patients who were black (adjusted OR 1.94, 95% confidence interval [CI], 1.63-2.32) or Medicaid insured (OR 1.41, 95% CI, 1.19-1.67). In contrast, NS/SDC appointments were less likely in elderly patients (age ≥65 years: OR 0.39, 95% CI, 0.31-0.49) and patients who lived farther away (furthest quartile of driving distance: OR 0.65, 95% CI, 0.52-0.81). Longer hospitalizations were associated with higher NS/SDC rates (length of stay ≥15 days: OR 1.51, 95% CI, 1.22-1.88). In contrast, discharges from subspecialty services (OR 0.79, 95% CI, 0.68-0.93) had independently lower NS/SDC rates. Compared to discharges to home without services, NS/SDC rates were higher with discharges to home with services (OR 1.32, 95% CI, 1.01-1.36) and with discharges to facilities (OR 2.10, 95% CI, 1.70-2.60).

The presence of exactly 2 PDFVs per hospitalization was also associated with higher NS/SDC rates (OR 1.17, 95% CI, 1.01-1.36), compared to a single PDFV per hospitalization; however, the presence of more than 2 PDFVs per hospitalization was associated with lower NS/SDC rates (OR 0.82, 95% CI, 0.69-0.98). A separate analysis (data not shown) of potential high utilizers revealed a 15% NS/SDC rate for the top 0.5% of patients (median: 18 PDFVs each) and an 18% NS/SDC rate for the top 1% of patients (median: 14 PDFVs each) with regard to the numbers of PDFVs scheduled, compared to the 25% overall NS/SDC rate for all patients.
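
The high-utilizer comparison can be reproduced along these lines; a minimal Python (pandas) sketch, again with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical visit-level table with patient IDs and the NS/SDC flag.
visits = pd.read_csv("pdfv_visits.csv")

# Rank patients by how many PDFVs were scheduled on their behalf.
counts = visits.groupby("patient_id").size().sort_values(ascending=False)

def ns_sdc_rate_for_top(frac):
    """NS/SDC rate among visits belonging to the top `frac` of patients
    by number of scheduled PDFVs, plus their median PDFV count."""
    n_top = max(1, int(round(len(counts) * frac)))
    top_ids = counts.head(n_top).index
    subset = visits[visits["patient_id"].isin(top_ids)]
    return subset["ns_sdc"].mean(), counts.head(n_top).median()

for frac in (0.005, 0.01):
    rate, median_visits = ns_sdc_rate_for_top(frac)
    print(f"top {frac:.1%}: NS/SDC rate {rate:.0%}, median PDFVs {median_visits:.0f}")
```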


NS/SDC rates and adjusted ORs with regard to individual PDFV characteristics are displayed in Table 3. Nononcologic visits had higher NS/SDC rates than oncologic visits; for example, the NS/SDC rate for primary care visits was 39% (OR 2.62, 95% CI, 2.03-3.38), compared to 12% for oncologic visits. Appointments in the “other” specialty category also had high nonattendance rates, as further described in Supplementary Table 2. Summertime appointments were more likely to be attended (OR 0.81, 95% CI, 0.68-0.97) compared to those in the spring. PDFV lead time (the time interval between the discharge date and appointment date) was not associated with changes in visit attendance.


DISCUSSION

PDFVs were scheduled on patients’ behalf for more than half of all medical hospitalizations at our institution, a rate that is consistent with previous studies.10,11,26 However, 1 in 4 of these PDFVs resulted in an NS/SDC. This figure contrasts sharply with our institution’s 10% overall NS/SDC rate for all outpatient visits (S. Schlegel, written communication, July 2016). In our study, patients who were younger, black, or Medicaid insured were more likely to miss their follow-up visits. Patients who lived farther from the study hospital had lower NS/SDC rates, which is consistent with another study of a low-income, urban patient population.27 In contrast, patients with longer lengths of stay, discharges with home care services, or discharges to another facility were more likely to miss their PDFVs. The reasons are likely multifactorial, including readmission to a hospital or feeling too unwell to leave home to attend PDFVs. Insurance policies regarding ambulance reimbursement and outpatient billing can cause confusion and may have contributed to higher NS/SDC rates for facility-bound patients.28,29

When comparing PDFV characteristics themselves, oncologic visits had the lowest NS/SDC incidence of any group analyzed in our study. This may be related to the inherently life-altering nature of a cancer diagnosis or our cancer center’s use of patient navigators.23,30 In contrast, primary care clinics suffered from NS/SDC rates approaching 40%, a concerning finding given the importance of primary care coordination in the posthospitalization period.9,31 Why are primary care appointments so commonly missed? Some studies suggest that forgetting about a primary care appointment is a leading reason.15,32,33 For PDFVs, this phenomenon may be augmented because the visits are not scheduled by patients themselves. Additionally, patients may paradoxically undervalue the benefit of an all-encompassing primary care visit compared to a PDFV focused on a specific problem (eg, a cardiology follow-up appointment for a patient with congestive heart failure). In particular, patients with limited health literacy may undervalue the capabilities of their primary care clinics.34,35

The low absolute number of primary care PDFVs (only 8% of all visits) scheduled for patients at our hospital was an unexpected finding. This low percentage is likely a function of the patient population hospitalized at our large, urban quaternary-care facility. First, an unknown number of patients may have had PDFVs manually scheduled with primary care providers external to our health system; these PDFVs were not captured within our study. Second, 71% of the hospitalizations in our study occurred on subspecialty services, for which specific primary care follow-up may not be as urgent. Supporting this explanation, further analysis of the 6136 hospitalizations in our study (data not shown) revealed that 28% of general internal medicine hospitalizations had at least 1 primary care PDFV scheduled, as opposed to only 5% of subspecialty-service hospitalizations.

In contrast to several previous studies of outpatient nonattendance, we did not find that visits scheduled for time points further in the future were more likely to be missed.14,24,25,36,37 It may be that PDFV lead time does not affect attendance because of the unique manner in which PDFV times are scheduled and conveyed to patients. Unlike other appointments, patients do not schedule PDFVs themselves but instead learn about their PDFV dates as part of a large set of discharge instructions. This practice may result in poor recall of PDFV dates in recently hospitalized patients,38 regardless of the lead time between discharge and the visit itself.

Supplementary Table 1 details a 51% NS/SDC rate for the small number of PDFVs (n = 65) that were excluded a priori from our analysis because of general ineligibility for UPHS outpatient care. We specifically chose to exclude this population because of the infrequent and irregular process by which these PDFVs were authorized on a case-by-case basis, typically via active engagement by our hospital’s social work department. We did not study this population further but postulate that the 51% NS/SDC rate may reflect other social determinants of health that contribute to appointment nonadherence in a predominantly uninsured population.

Beyond their effect on patient outcomes, improving PDFV-related processes has the potential to boost both inpatient and outpatient provider satisfaction. From the standpoint of frontline inpatient providers (often resident physicians), calling outpatient clinics to request PDFVs is viewed as 1 of the top 5 administrative tasks that interfere with house staff education.39 Future interventions that involve patients in the PDFV scheduling process may improve inpatient workflow while simultaneously engaging patients in their own care. For example, asking clinic representatives to directly schedule PDFVs with hospitalized patients, either by phone or in person, has been shown in pilot studies to improve PDFV attendance and decrease readmissions.40-42 Conversely, NS/SDC visits harm outpatient provider productivity and decrease provider availability for other patients.13,17,43 Strategies to mitigate the impact of unfilled appointment slots (eg, deliberately overbooking time slots in advance) carry their own risks, including provider burnout.44 As such, preventing NSs may be superior to curing their adverse impacts. Many such strategies exist in the ambulatory setting,13,43,45 for example, better communication with patients through texting or goal-directed, personalized phone reminders.46-48

Our study methodology has several limitations. Most importantly, we were unable to measure PDFVs made with providers unaffiliated with UPHS. As previously noted, our low proportion of primary care PDFVs may specifically reflect patients with primary care providers outside of our health system. Similarly, our low percentage of Medicaid patients receiving PDFVs may be related to follow-up visits with nonaffiliated community health centers. We were unable to measure patient acuity and health literacy as potential predictors of NS/SDC rates. Driving distances were calculated from patient postal codes to our hospital, not to individual outpatient clinics. However, the majority of our hospital-affiliated clinics are located adjacent to our hospital; additionally, we grouped driving distances into quartiles for our analysis. We had initially attempted to differentiate between clinic-initiated and patient-initiated cancellations, but unfortunately, we found that the data were too unreliable to be used for further analysis (outlined in Supplementary Table 3). Lastly, because we studied patients in medical units at a single large, urban, academic center, our results are not generalizable to other settings (eg, community hospitals, hospitals with smaller networks of outpatient providers, or patients being discharged from surgical services or observation units).


CONCLUSION

Given national efforts to enhance postdischarge transitions of care, we aimed to analyze attendance at provider-scheduled PDFV appointments. Our finding that 25% of PDFVs resulted in NS/SDCs raises questions and presents opportunities for both inpatient and outpatient providers. Further research is needed to understand why so many patients miss their PDFVs, and we should work as a field to develop creative solutions to improve PDFV scheduling and attendance.

Acknowledgments

The authors acknowledge Marie Synnestvedt, PhD, and Manik Chhabra, MD, for their assistance with data gathering and statistical analysis. They also acknowledge Allison DeKosky, MD, Michael Serpa, BS, Michael McFall, and Scott Schlegel, MBA, for their assistance with researching this topic. They did not receive external compensation for their assistance outside of their usual salary support.

DISCLOSURE

Nothing to report.

References

1. Halasyamani L, Kripalani S, Coleman E, et al. Transition of care for hospitalized elderly patients - development of a discharge checklist for hospitalists. J Hosp Med. 2006;1(6):354-360. PubMed
2. Koehler BE, Richter KM, Youngblood L, et al. Reduction of 30-day postdischarge hospital readmission or emergency department (ED) visit rates in high-risk elderly medical patients through delivery of a targeted care bundle. J Hosp Med. 2009;4(4):211-218. PubMed
3. Soong C, Daub S, Lee JG, et al. Development of a checklist of safe discharge practices for hospital patients. J Hosp Med. 2013;8(8):444-449. PubMed
4. Rice YB, Barnes CA, Rastogi R, Hillstrom TJ, Steinkeler CN. Tackling 30-day, all-cause readmissions with a patient-centered transitional care bundle. Popul Health Manag. 2016;19(1):56-62. PubMed
5. Nelson EA, Maruish MM, Axler JL. Effects of discharge planning and compliance with outpatient appointments on readmission rates. Psychiatr Serv. 2000;51(7):885-889. PubMed
6. Gavish R, Levy A, Dekel OK, Karp E, Maimon N. The association between hospital readmission and pulmonologist follow-up visits in patients with chronic obstructive pulmonary disease. Chest. 2015;148(2):375-381. PubMed
7. Jackson C, Shahsahebi M, Wedlake T, DuBard CA. Timeliness of outpatient follow-up: an evidence-based approach for planning after hospital discharge. Ann Fam Med. 2015;13(2):115-122. PubMed
8. Donaho EK, Hall AC, Gass JA, et al. Protocol-driven allied health post-discharge transition clinic to reduce hospital readmissions in heart failure. J Am Heart Assoc. 2015;4(12):e002296. PubMed
9. Misky GJ, Wald HL, Coleman EA. Post-hospitalization transitions: Examining the effects of timing of primary care provider follow-up. J Hosp Med. 2010;5(7):392-397. PubMed
10. Grafft CA, McDonald FS, Ruud KL, Liesinger JT, Johnson MG, Naessens JM. Effect of hospital follow-up appointment on clinical event outcomes and mortality. Arch Intern Med. 2010;171(11):955-960. PubMed
11. Auerbach AD, Kripalani S, Vasilevskis EE, et al. Preventability and causes of readmissions in a national cohort of general medicine patients. JAMA Intern Med. 2016;176(4):484-493. PubMed
12. Field TS, Ogarek J, Garber L, Reed G, Gurwitz JH. Association of early post-discharge follow-up by a primary care physician and 30-day rehospitalization among older adults. J Gen Intern Med. 2015;30(5):565-571. PubMed
13. Quinn K. It’s no-show time! Med Group Manage Assoc Connexion. 2007;7(6):44-49. PubMed
14. Whittle J, Schectman G, Lu N, Baar B, Mayo-Smith MF. Relationship of scheduling interval to missed and cancelled clinic appointments. J Ambulatory Care Manage. 2008;31(4):290-302. PubMed
15. Kaplan-Lewis E, Percac-Lima S. No-show to primary care appointments: Why patients do not come. J Prim Care Community Health. 2013;4(4):251-255. PubMed
16. Molfenter T. Reducing appointment no-shows: Going from theory to practice. Subst Use Misuse. 2013;48(9):743-749. PubMed
17. Kheirkhah P, Feng Q, Travis LM, Tavakoli-Tabasi S, Sharafkhaneh A. Prevalence, predictors and economic consequences of no-shows. BMC Health Serv Res. 2016;16(1):13. PubMed
18. Colubi MM, Perez-Elias MJ, Elias L, et al. Missing scheduled visits in the outpatient clinic as a marker of short-term admissions and death. HIV Clin Trials. 2012;13(5):289-295. PubMed
19. Obialo CI, Hunt WC, Bashir K, Zager PG. Relationship of missed and shortened hemodialysis treatments to hospitalization and mortality: Observations from a US dialysis network. Clin Kidney J. 2012;5(4):315-319. PubMed
20. Hwang AS, Atlas SJ, Cronin P, et al. Appointment “no-shows” are an independent predictor of subsequent quality of care and resource utilization outcomes. J Gen Intern Med. 2015;30(10):1426-1433. PubMed
21. Perez FD, Xie J, Sin A, et al. Characteristics and direct costs of academic pediatric subspecialty outpatient no-show events. J Healthc Qual. 2014;36(4):32-42. PubMed
22. Huang Y, Zuniga P. Effective cancellation policy to reduce the negative impact of patient no-show. Journal of the Operational Research Society. 2013;65(5):605-615. 
23. Percac-Lima S, Cronin PR, Ryan DP, Chabner BA, Daly EA, Kimball AB. Patient navigation based on predictive modeling decreases no-show rates in cancer care. Cancer. 2015;121(10):1662-1670. PubMed
24. Torres O, Rothberg MB, Garb J, Ogunneye O, Onyema J, Higgins T. Risk factor model to predict a missed clinic appointment in an urban, academic, and underserved setting. Popul Health Manag. 2015;18(2):131-136. PubMed
25. Eid WE, Shehata SF, Cole DA, Doerman KL. Predictors of nonattendance at an endocrinology outpatient clinic. Endocr Pract. 2016;22(8):983-989. PubMed
26. Kashiwagi DT, Burton MC, Kirkland LL, Cha S, Varkey P. Do timely outpatient follow-up visits decrease hospital readmission rates? Am J Med Qual. 2012;27(1):11-15. PubMed
27. Miller AJ, Chae E, Peterson E, Ko AB. Predictors of repeated “no-showing” to clinic appointments. Am J Otolaryngol. 2015;36(3):411-414. PubMed
28. ASCO. Billing challenges for residents of Skilled Nursing Facilities. J Oncol Pract. 2008;4(5):245-248. PubMed
29. Centers for Medicare & Medicaid Services (2013). “SE0433: Skilled Nursing Facility consolidated billing as it relates to ambulance services.” Medicare Learning Network Matters. https://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNMattersArticles/downloads/se0433.pdf. Accessed on February 14, 2017.
30. Luckett R, Pena N, Vitonis A, Bernstein MR, Feldman S. Effect of patient navigator program on no-show rates at an academic referral colposcopy clinic. J Womens Health (Larchmt). 2015;24(7):608-615. PubMed
31. Jones CD, Vu MB, O’Donnell CM, et al. A failure to communicate: A qualitative exploration of care coordination between hospitalists and primary care providers around patient hospitalizations. J Gen Intern Med. 2015;30(4):417-424. PubMed
32. George A, Rubin G. Non-attendance in general practice: a systematic review and its implications for access to primary health care. Fam Pract. 2003;20(2):178-184. PubMed
33. Samuels RC, Ward VL, Melvin P, et al. Missed appointments: factors contributing to high no-show rates in an urban pediatrics primary care clinic. Clin Pediatr (Phila). 2015;54(10):976-982. PubMed
34. Kangovi S, Barg FK, Carter T, Long JA, Shannon R, Grande D. Understanding why patients of low socioeconomic status prefer hospitals over ambulatory care. Health Aff (Millwood). 2013;32(7):1196-1203. PubMed
35. Long T, Genao I, Horwitz LI. Reasons for readmission in an underserved high-risk population: a qualitative analysis of a series of inpatient interviews. BMJ Open. 2013;3(9):e003212. PubMed
36. Lee VJ, Earnest A, Chen MI, Krishnan B. Predictors of failed attendances in a multi-specialty outpatient centre using electronic databases. BMC Health Serv Res. 2005;5:51. PubMed
37. Daggy J, Lawley M, Willis D, et al. Using no-show modeling to improve clinic performance. Health Informatics J. 2010;16(4):246-259. PubMed
38. Horwitz LI, Moriarty JP, Chen C, et al. Quality of discharge practices and patient understanding at an academic medical center. JAMA Intern Med. 2013;173(18):1715-1722. PubMed
39. Vidyarthi AR, Katz PP, Wall SD, Wachter RM, Auerbach AD. Impact of reduced duty hours on residents’ education satisfaction at the University of California, San Francisco. Acad Med. 2006;81(1):76-81. PubMed
40. Coffey C, Kufta J. Patient-centered post-discharge appointment scheduling improves readmission rates. Paper presented at: Society of Hospital Medicine Annual Meeting 2011; Grapevine, Texas.
41. Chang R, Spahlinger D, Kim CS. Re-engineering the post-discharge appointment process for general medicine patients. Patient. 2012;5(1):27-32. PubMed
42. Haftka A, Cerasale MT, Paje D. Direct patient participation in discharge follow-up appointment scheduling. Paper presented at: Society of Hospital Medicine Annual Meeting 2015; National Harbor, MD.
43. Stubbs ND, Geraci SA, Stephenson PL, Jones DB, Sanders S. Methods to reduce outpatient non-attendance. Am J Med Sci. 2012;344(3):211-219. PubMed
44. Kros J, Dellana S, West D. Overbooking increases patient access at East Carolina University’s Student Health Services clinic. Interfaces. 2009;39(3):271-287.
45. Hills LS. How to handle patients who miss appointments or show up late. J Med Practice Management. 2009;25(3):166-170. PubMed
46. Price H, Waters AM, Mighty D, et al. Texting appointment reminders reduces ‘Did not attend’ rates, is popular with patients and is cost-effective. Int J STD AIDS. 2009;20:142-144. PubMed
47. Parikh A, Gupta K, Wilson AC, Fields K, Cosgrove NM, Kostis JB. The effectiveness of outpatient appointment reminder systems in reducing no-show rates. Am J Med. 2010;123(6):542-548. PubMed
48. Shah SJ, Cronin P, Hong CS, et al. Targeted reminder phone calls to patients at high risk of no-show for primary care appointment: a randomized trial. J Gen Intern Med. 2016;31(12):1460-1466. PubMed

 

 

References

1. Halasyamani L, Kripalani S, Coleman E, et al. Transition of care for hospitalized elderly patients - development of a discharge checklist for hospitalists. J Hosp Med. 2006;1(6):354-360. PubMed
2. Koehler BE, Richter KM, Youngblood L, et al. Reduction of 30-day postdischarge hospital readmission or emergency department (ED) visit rates in high-risk elderly medical patients through delivery of a targeted care bundle. J Hosp Med. 2009;4(4):211-218. PubMed
3. Soong C, Daub S, Lee JG, et al. Development of a checklist of safe discharge practices for hospital patients. J Hosp Med. 2013;8(8):444-449. PubMed
4. Rice YB, Barnes CA, Rastogi R, Hillstrom TJ, Steinkeler CN. Tackling 30-day, all-cause readmissions with a patient-centered transitional care bundle. Popul Health Manag. 2016;19(1):56-62. PubMed
5. Nelson EA, Maruish MM, Axler JL. Effects of discharge planning and compliance with outpatient appointments on readmission rates. Psych Serv. 2000;51(7):885-889. PubMed
6. Gavish R, Levy A, Dekel OK, Karp E, Maimon N. The association between hospital readmission and pulmonologist follow-up visits in patients with chronic obstructive pulmonary disease. Chest. 2015;148(2):375-381. PubMed
7. Jackson C, Shahsahebi M, Wedlake T, DuBard CA. Timeliness of outpatient follow-up: an evidence-based approach for planning after hospital discharge. Ann Fam Med. 2015;13(2):115-122. PubMed
8. Donaho EK, Hall AC, Gass JA, et al. Protocol-driven allied health post-discharge transition clinic to reduce hospital readmissions in heart failure. J Am Heart Assoc. 2015;4(12):e002296. PubMed
9. Misky GJ, Wald HL, Coleman EA. Post-hospitalization transitions: Examining the effects of timing of primary care provider follow-up. J Hosp Med. 2010;5(7):392-397. PubMed
10. Grafft CA, McDonald FS, Ruud KL, Liesinger JT, Johnson MG, Naessens JM. Effect of hospital follow-up appointment on clinical event outcomes and mortality. Arch Intern Med. 2010;171(11):955-960. PubMed
11. Auerbach AD, Kripalani S, Vasilevskis EE, et al. Preventability and causes of readmissions in a national cohort of general medicine patients. JAMA Intern Med. 2016;176(4):484-493. PubMed
12. Field TS, Ogarek J, Garber L, Reed G, Gurwitz JH. Association of early post-discharge follow-up by a primary care physician and 30-day rehospitalization among older adults. J Gen Intern Med. 2015;30(5):565-571. PubMed
13. Quinn K. It’s no-show time! Med Group Manage Assoc Connexion. 2007;7(6):44-49. PubMed
14. Whittle J, Schectman G, Lu N, Baar B, Mayo-Smith MF. Relationship of scheduling interval to missed and cancelled clinic appointments. J Ambulatory Care Manage. 2008;31(4):290-302. PubMed
15. Kaplan-Lewis E, Percac-Lima S. No-show to primary care appointments: Why patients do not come. J Prim Care Community Health. 2013;4(4):251-255. PubMed
16. Molfenter T. Reducing appointment no-shows: Going from theory to practice. Subst Use Misuse. 2013;48(9):743-749. PubMed
17. Kheirkhah P, Feng Q, Travis LM, Tavakoli-Tabasi S, Sharafkhaneh A. Prevalence, predictors and economic consequences of no-shows. BMC Health Serv Res. 2016;16(1):13. PubMed
18. Colubi MM, Perez-Elias MJ, Elias L, et al. Missing scheduled visits in the outpatient clinic as a marker of short-term admissions and death. HIV Clin Trials. 2012;13(5):289-295. PubMed
Issue
Journal of Hospital Medicine 12 (8)
Page Number
618-625
Display Headline
If You Book It, Will They Come? Attendance at Postdischarge Follow-Up Visits Scheduled by Inpatient Providers
Correspondence Location
Rahul Banerjee, MD, Department of Medicine, Hospital of the University of Pennsylvania, 3400 Spruce St, 100 Centrex, Philadelphia, PA 19104; Telephone: 267-303-7995; Fax: 215-662-7919; E-mail: [email protected]

Why Residents Order Unnecessary Inpatient Laboratory Tests

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Residents' self‐report on why they order perceived unnecessary inpatient laboratory tests

Resident physicians routinely order inpatient laboratory tests,[1] and there is evidence to suggest that many of these tests are unnecessary[2] and potentially harmful.[3] The Society of Hospital Medicine has identified reducing the unnecessary ordering of inpatient laboratory testing as part of the Choosing Wisely campaign.[4] Hospitalists at academic medical centers face growing pressure to develop processes that reduce low-value care and to train residents to be stewards of healthcare resources.[5] Studies[6, 7, 8, 9] have described institutional and training factors that drive residents' resource utilization patterns, but, to our knowledge, none have described what factors contribute to residents' unnecessary laboratory testing. To better understand the factors associated with residents' ordering patterns, we conducted a qualitative analysis of internal medicine (IM) and general surgery (GS) residents at a large academic medical center in order to describe residents' perceptions of: (1) the frequency of ordering unnecessary inpatient laboratory tests, (2) the factors contributing to that behavior, and (3) potential interventions to change it. We also explored differences in responses by specialty and training level.

METHODS

In October 2014, we surveyed all IM and GS residents at the Hospital of the University of Pennsylvania. We reviewed the literature and conducted focus groups with residents to formulate items for the survey instrument. A draft of the survey was administered to 8 residents from both specialties, and their feedback was collated and incorporated into the final version of the instrument. The final 15-question survey comprised 4 components: (1) training information such as specialty and postgraduate year (PGY), (2) self-reported frequency of perceived unnecessary ordering of inpatient laboratory tests, (3) perception of factors contributing to unnecessary ordering, and (4) potential interventions to reduce unnecessary ordering. An unnecessary test was defined as a test that would not change management regardless of its result. To increase response rates, participants were entered into drawings for $5 gift cards, a $200 air travel voucher, and an iPad mini.

Descriptive statistics and χ² tests were conducted with Stata version 13 (StataCorp LP, College Station, TX) to explore differences in the frequency of responses by specialty and training level. To identify themes that emerged from free-text responses, two independent reviewers (M.S.S. and E.J.K.) performed qualitative content analysis using grounded theory.[10] Reviewers read 10% of responses to create a coding guide. Another 10% of the responses were randomly selected to assess inter-rater reliability by calculating κ scores. The reviewers independently coded the remaining 80% of responses. Discrepancies were adjudicated by consensus between the reviewers. The University of Pennsylvania Institutional Review Board deemed this study exempt from review.
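
As an illustration of the mechanics of these two quantitative steps, here is a minimal sketch in Python rather than Stata; all counts and coder labels below are hypothetical stand-ins, not the study's data, and the pipeline is only a plausible reconstruction of the analysis described above.

```python
# Minimal sketch of the two quantitative steps described in METHODS,
# in Python instead of Stata 13. All numbers and labels are hypothetical
# illustrations, not the study's actual data.
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Step 1: chi-square test for a difference in reported ordering by specialty.
# Rows: IM, GS; columns: reported ordering unnecessary labs (yes, no).
table = [[75, 10],   # hypothetical IM respondents
         [21, 10]]   # hypothetical GS respondents
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Step 2: inter-rater reliability (Cohen's kappa) on the 10% of free-text
# responses that both reviewers coded with the shared coding guide.
coder1 = ["cost", "role_model", "cds", "cost", "curriculum", "cds"]
coder2 = ["cost", "role_model", "cds", "cost", "cds", "cds"]
print(f"kappa = {cohen_kappa_score(coder1, coder2):.2f}")
```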

RESULTS

The sample comprised 57.0% (85/149) of IM and 54.4% (31/57) of GS residents (Table 1). Among respondents, perceived unnecessary inpatient laboratory test ordering was self‐reported by 88.2% of IM and 67.7% of GS residents. This behavior was reported to occur on a daily basis by 43.5% and 32.3% of responding IM and GS residents, respectively. Across both specialties, the most commonly reported factors contributing to these behaviors were learned practice habit/routine (90.5%), a lack of understanding of the costs associated with lab tests (86.2%), diagnostic uncertainty (82.8%), and fear of not having the lab result information when requested by an attending (75.9%). There were no significant differences in any of these contributing factors by specialty or PGY level. Among respondents who completed a free‐text response (IM: 76 of 85; GS: 21 of 31), the most commonly proposed interventions to address these issues were increasing cost transparency (IM 40.8%; GS 33.3%), improvements to faculty role modeling (IM 30.2%; GS 33.3%), and computerized reminders or decision support (IM 21.1%; GS 28.6%) (Table 2).

Residents' Self‐Reported Frequency of and Factors Contributing to Perceived Unnecessary Inpatient Laboratory Ordering
Residents (n = 116)*
• NOTE: Abbreviations: EHR, electronic health record. *There were 116 responses out of 206 eligible residents, among whom 57.0% (85/149) were IM and 54.4% (31/57) were GS residents. Among the IM respondents, 36 were PGY-1 interns, and among the GS respondents, 12 were PGY-1 interns. There were no differences in response across specialty and PGY level. Respondents were asked, "Please rate your level of agreement with whether the following items contribute to unnecessary ordering" on a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). "Agreement" included survey participants who agreed and/or strongly agreed with the statement.

Reported he or she orders unnecessary routine labs, no. (%) 96 (82.8)
Frequency of ordering unnecessary labs, no. (%)
Daily 47 (49.0)
2-3 times/week 44 (45.8)
1 time/week or less 5 (5.2)
Agreement with statement as factors contributing to ordering unnecessary labs, no. (%)
Practice habit; I am trained to order repeating daily labs 105 (90.5)
Lack of cost transparency of labs 100 (86.2)
Discomfort with diagnostic uncertainty 96 (82.8)
Concern that the attending will ask for the data and I will not have it 88 (75.9)
Lack of role modeling of cost conscious care 78 (67.2)
Lack of cost conscious culture at our institution 76 (65.5)
Lack of experience 72 (62.1)
Ease of ordering repeating labs in EHR 60 (51.7)
Fear of litigation from missed diagnosis related to lab data 44 (37.9)
Residents' Suggestions for Possible Solutions to Unnecessary Ordering
Categories* Representative Quotes IM, n = 76, No. (%) GS, n = 21, No. (%)
• NOTE: Abbreviations: coags, coagulation tests; EHR, electronic health record; IM, internal medicine; GS, general surgery; LFT, liver function tests. *Kappa scores: mean 0.78; range, 0.59-1. Responses could be assigned to multiple categories. There were 85 of 149 (57.0%) IM respondents, among whom 76 of 85 (89.4%) provided a free-text suggestion. There were 31 of 57 (54.4%) GS respondents, among whom 21 of 31 (67.7%) provided a free-text suggestion.

Cost transparency Let us know the costs of what we order and train us to remember that a patient gets a bill and we are contributing to a possible bankruptcy or hardship. 31 (40.8) 7 (33.3)
Display the cost of labs when [we're] ordering them [in the EHR].
Post the prices so that MDs understand how much everything costs.
Role modeling restraint Train attendings to be more critical about necessity of labs and overordering. Make it part of rounding practice to decide on the labs truly needed for each patient the next day. 23 (30.2) 7 (33.3)
Attendings could review daily lab orders and briefly explain which they do not believe we need. This would allow residents to learn from their experience and their thought processes.
Encouragement and modeling of this practice from the faculty perhaps by laying out more clear expectations for which clinical situations warrant daily labs and which do not.
Computerized reminders or decision support (a sketch of such an alert rule follows this table) When someone orders labs and the previous day's lab was normal or labs were stable for 2 days, an alert should pop up to reconsider. 16 (21.1) 6 (28.6)
Prevent us from being able to order repeating [or standing] labs.
Track how many times labs changed management, and restrict certain labs, like LFTs/coags.
High‐value care educational curricula Increase awareness of issue by having a noon conference about it or some other forum for residents to discuss the issue. 12 (15.8) 4 (19.0)
Establish guidelines for housestaff to learn/follow from start of residency.
Integrate cost conscious care into training program curricula.
System improvements Make it easier to get labs later [in the day] 6 (7.9) 2 (9.5)
Improve timeliness of phlebotomy/laboratory systems.
More responsive phlebotomy.
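
To make the computerized decision-support suggestion concrete, the following is a minimal, hypothetical sketch of the alert rule the residents describe. The reference ranges, the 5% stability tolerance, and every name in the code are illustrative assumptions; none of them come from the study or from any real EHR.

```python
# Hypothetical sketch of the residents' proposed alert: when yesterday's
# result was normal, or results have been stable for 2 days, prompt the
# orderer to reconsider a repeat order.
from dataclasses import dataclass

# Assumed reference ranges (units: g/dL for hemoglobin, mmol/L for potassium).
NORMAL_RANGES = {"hemoglobin": (12.0, 17.5), "potassium": (3.5, 5.1)}
STABILITY_TOLERANCE = 0.05  # assumed: <5% day-to-day change counts as "stable"

@dataclass
class LabHistory:
    test: str
    values: list  # one result per day, most recent last

def should_prompt_reconsider(history: LabHistory) -> bool:
    """Return True if a repeat order should trigger a 'reconsider' alert."""
    lo, hi = NORMAL_RANGES[history.test]
    yesterday = history.values[-1]
    if lo <= yesterday <= hi:          # yesterday's result was normal
        return True
    if len(history.values) >= 2:       # stable across the last 2 days
        prior = history.values[-2]
        if abs(yesterday - prior) <= STABILITY_TOLERANCE * abs(prior):
            return True
    return False

# Example: a normal, stable potassium would trigger the alert.
print(should_prompt_reconsider(LabHistory("potassium", [4.0, 4.1])))  # True
```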

DISCUSSION

A significant portion of inpatient laboratory testing is unnecessary,[2] creating an opportunity to reduce utilization and associated costs. Our findings demonstrate that these behaviors occur at high levels among residents (IM 88.2%; GS 67.7%) at a large academic medical center. These findings also reveal that residents attribute this behavior to practice habit, lack of access to cost data, and perceived expectations for daily lab ordering by faculty. Interventions to change these behaviors will need to involve changes to the health system culture, increasing transparency of the costs associated with healthcare services, and shifting to a model of education that celebrates restraint.[11]

Our study adds to the growing effort to deliver value in healthcare and provides several important insights for hospitalists and medical educators at academic centers. First, our findings reflect the significant role that the clinical learning environment plays in shaping practice behaviors among residents. Residency training is a critical time when physicians begin to form habits that imprint upon their future practice patterns,[5] and our residents are aware that their ordering of laboratory tests they perceive to be unnecessary is driven by habit. Studies[6, 7] have shown that residents may implicitly accept certain styles of practice as correct and are more likely to adopt those styles during the early years of their training. In our institution, for example, the process of ordering standing or daily morning labs using a repeated copy-forward function in the electronic health record is a common, learned practice (a ritual) that is passed down from senior to junior residents year after year. This practice is common across both training specialties. There is a need to better understand, measure, and change the culture in the clinical learning environment to demonstrate practices and values that model high-value care for residents. Multipronged interventions that address culture, oversight, and systems change[12] are necessary to facilitate effective physician stewardship of inpatient laboratory testing and to attack a problem so deeply ingrained in habit.

Second, residents in our study believe that access to cost information would better equip them to reduce unnecessary lab ordering. Two recent systematic reviews[13, 14] have shown that real-time access to charges changes physician ordering and prescribing behavior. Increasing cost transparency may not only help hospitals reduce overuse and control cost but may also better arm resident physicians with the information they need to make higher-value recommendations for their patients and to be stewards of healthcare resources.

Third, our study highlights that residents' unnecessary laboratory utilization is driven by perceived, unspoken expectations for such ordering by faculty. This reflects an important undercurrent in the medical education system, which has historically emphasized and rewarded thoroughness while often penalizing restraint.[11] Hospitalists can play a major role in changing these behaviors by sharing their expectations regarding test ordering at the beginning of teaching rotations, including teaching points that discourage overutilization during rounds, and role modeling high-value care in their own practice. Taken together and practiced routinely, these hospitalist behaviors could prevent poor habits from forming in our trainees and discourage overinvestigation. Hospitalists must take responsibility for disseminating the practice of restraint to achieve more cost-effective care. Purposeful faculty development efforts in the area of high-value care are needed. Additionally, supporting physician leaders who serve as the institutional bridge between graduate medical education and the health system[15] could foster an environment conducive to coaching residents and faculty to reduce unnecessary practice variation.

This study is subject to several limitations. First, the survey was conducted at a single academic medical center with a modest response rate, and thus our findings may not be generalizable to other settings or to residents in other training programs. Second, we did not validate residents' perception of whether or not tests were, in fact, unnecessary. We also did not validate residents' self-reporting of their own behavior, which may differ from actual behavior. Lack of validation at the level of the tests and at the level of the residents' behavior are two distinct but inter-related limitations. Although self-reported responses among residents are an important indicator of their practice, validating these data with objective measures, such as expert physician review of the necessity of ordered lab tests or counts of inpatient labs ordered by residents, may add further insights. Ordering of perceived unnecessary tests may be even more common than reported if residents under-reported this behavior. Third, interpretation of the term "unnecessary" may have varied among respondents, and this variation may contribute to our findings; we attempted to mitigate it by providing a clear definition in the survey and by incorporating resident feedback from our preliminary pilot.

In conclusion, this is one of the first qualitative evaluations to explore residents' perceptions of why they order unnecessary inpatient laboratory tests. Our findings offer a rich understanding of residents' beliefs about their own role in unnecessary lab ordering and explore possible solutions through the lens of the resident. Yet it is unclear whether tests deemed unnecessary by residents would also be considered unnecessary by attending physicians or even patients. Future efforts are needed to better define which inpatient tests are unnecessary from multiple perspectives, including those of clinicians and patients.

Acknowledgements

The authors thank Patrick J. Brennan, MD, Jeffrey S. Berns, MD, Lisa M. Bellini, MD, Jon B. Morris, MD, and Irving Nachamkin, DrPH, MPH, all from the Hospital of the University of Pennsylvania, for supporting this work. They received no compensation.

Disclosures: This work was presented in part at the AAMC Integrating Quality Meeting, June 11, 2015, Chicago, Illinois; and the Alliance for Academic Internal Medicine Fall Meeting, October 9, 2015, Atlanta, Georgia. The authors report no conflicts of interest.

References
  1. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university's hospitalist service. Acad Med. 2011;86(1):139-145.
  2. Zhi M, Ding EL, Theisen-Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: a 15-year meta-analysis. PLoS One. 2013;8(11):e78962.
  3. Salisbury A, Reid K, Alexander K, et al. Diagnostic blood loss from phlebotomy and hospital-acquired anemia during acute myocardial infarction. Arch Intern Med. 2011;171(18):1646-1653.
  4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492.
  5. Korenstein D. Charting the route to high-value care: the role of medical education. JAMA. 2015;314(22):2359-2361.
  6. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393.
  7. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists' ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648.
  8. Ryskina KL, Dine CJ, Kim EJ, Bishop TF, Epstein AJ. Effect of attending practice style on generic medication prescribing by residents in the clinic setting: an observational study. J Gen Intern Med. 2015;30(9):1286-1293.
  9. Patel MS, Reed DA, Smith C, Arora VM. Role-modeling cost-conscious care—a national evaluation of perceptions of faculty at teaching hospitals in the United States. J Gen Intern Med. 2015;30(9):1294-1298.
  10. Glaser BG, Strauss AL. The discovery of grounded theory. Int J Qual Methods. 1967;5:1-10.
  11. Detsky AC, Verma AA. A new model for medical education: celebrating restraint. JAMA. 2012;308(13):1329-1330.
  12. Moriates C, Shah NT, Arora VM. A framework for the frontline: how hospitalists can improve healthcare value. J Hosp Med. 2016;11(4):297-302.
  13. Goetz C, Rotman SR, Hartoularos G, Bishop TF. The effect of charge display on cost of care and physician practice behaviors: a systematic review. J Gen Intern Med. 2015;30(6):835-842.
  14. Silvestri MT, Bongiovanni TR, Glover JG, Gross CP. Impact of price display on provider ordering: a systematic review. J Hosp Med. 2016;11(1):65-76.
  15. Gupta R, Arora VM. Merging the health system and education silos to better educate future physicians. JAMA. 2015;314(22):2349-2350.
Issue
Journal of Hospital Medicine - 11(12)
Page Number
869-872
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Mina S. Sedrak, MD, MS, 1500 E. Duarte Road, Duarte, CA 91010; Telephone: 626-471-9200; Fax: 626-301-8233; E-mail: [email protected]

In reply: Cognitive bias and diagnostic error

Article Type
Changed
Wed, 08/16/2017 - 13:53
Display Headline
In reply: Cognitive bias and diagnostic error

In Reply: We thank Dr. Field for his insights and personal observations related to diagnosis and biases that contribute to diagnostic errors.

Dr. Field’s comment about the importance of revisiting one’s initial working diagnosis is consistent with our proposed diagnostic time out. A diagnostic time out can incorporate a short checklist and aid in debiasing clinicians when findings do not fit the case presentation, such as lack of response to diuretic therapy. Being mindful of slowing down and not necessarily rushing to judgment is another important component.1 Of note, the residents in our case did revisit their initial working diagnosis, as suggested by Dr. Field. Questions from learners have great potential to serve as debiasing instruments and should always be encouraged. Those who do not work with students can do the same by speaking with nurses or other members of the healthcare team, who offer observations that busy physicians might miss.

Our case highlights the problem that we lack objective criteria to diagnose symptomatic heart failure. While B-type natriuretic peptide (BNP) has a strong negative predictive value, serial BNP measurements have not been established to be helpful in the management of heart failure.2 Although certain findings on chest radiography carry strong positive and negative likelihood ratios, the role of serial chest radiographs is less clear.3 Thus, heart failure remains a clinical diagnosis in current practice.

As Dr. Field points out, the accuracy and performance characteristics of diagnostic measurements, such as the respiratory rate, need to be considered in conjunction with debiasing strategies to achieve higher diagnostic accuracy. Multiple factors can contribute to low-performing or misinterpreted diagnostic tests, and vital sign measurements have been shown to be similarly prone to error.4

Finally, we wholeheartedly agree with Dr. Field’s comment on unnecessary testing. High-value care is appropriate care. Using Bayesian reasoning to guide testing, monitoring the treatment course appropriately, and eliminating waste are highly likely to improve both value and diagnostic accuracy. Automated, ritual ordering of daily tests can indicate that thinking has been shut off, leaving clinicians susceptible to premature closure of the diagnostic process as well as the potential for “incidentalomas” to distract them from the right diagnosis, all the while producing the hallmarks of low-value care: wasteful spending, patient dissatisfaction, and hospital-acquired anemia.5 We believe that deciding on a daily basis what the next day’s tests will be can be another powerful debiasing habit, one with benefits beyond diagnosis.
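
To make the Bayesian point concrete, here is a minimal worked example; the numbers are hypothetical and chosen only for arithmetic clarity, not drawn from the case or the cited studies. Suppose the pretest probability of heart failure is 30% and a test result carries a positive likelihood ratio (LR+) of 5:

\[
\text{pretest odds} = \frac{0.30}{1 - 0.30} \approx 0.43, \qquad
\text{posttest odds} = 0.43 \times \mathrm{LR}^{+} = 0.43 \times 5 \approx 2.1,
\]
\[
\text{posttest probability} = \frac{2.1}{1 + 2.1} \approx 0.68.
\]

Had the same test instead been negative with a likelihood ratio of 0.1, the posttest odds would fall to about 0.04, a posttest probability of roughly 4%. Ordering a test only when its result can move the probability across a decision threshold is one operational form of the Bayesian reasoning described above.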

References
  1. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008; 121(suppl):S38–S42.
  2. Yancy CW, Jessup M, Bozkurt B, et al. 2013 ACCF/AHA guideline for the management of heart failure. Circulation 2013; 128:e240–e327.
  3. Wang CS, FitzGerald JM, Schulzer M, Mak E, Ayas NT. Does this dyspneic patient in the emergency department have congestive heart failure? JAMA 2005; 294:1944–1956.
  4. Philip KE, Pack E, Cambiano V, Rollmann H, Weil S, O’Beirne J. The accuracy of respiratory rate assessment by doctors in a London teaching hospital: a cross-sectional study. J Clin Monit Comput 2015; 29:455–460.
  5. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: prevalence, outcomes, and healthcare implications. J Hosp Med 2013; 8:506–512. 
Author and Disclosure Information

Nikhil Mull, MD
University of Pennsylvania, Philadelphia

James B. Reilly, MD, MS
Temple University, Pittsburgh, PA

Jennifer S. Myers, MD
University of Pennsylvania, Philadelphia


An elderly woman with ‘heart failure’: Cognitive biases and diagnostic error


An elderly Spanish-speaking woman with morbid obesity, diabetes, hypertension, and rheumatoid arthritis presents to the emergency department with worsening shortness of breath and cough. She speaks only Spanish, so her son provides the history without the aid of an interpreter.

Her shortness of breath is most noticeable with exertion and has increased gradually over the past 2 months. She has a nonproductive cough. Her son has noticed decreased oral intake and weight loss over the past few weeks. She has neither traveled recently nor been in contact with anyone known to have an infectious disease.

A review of systems is otherwise negative: specifically, she denies chest pain, fevers, or chills. She saw her primary care physician 3 weeks ago for these complaints and was prescribed a 3-day course of azithromycin with no improvement.

Her medications include lisinopril, atenolol, glipizide, and metformin; her son believes she may be taking others as well but is not sure. He is also unsure of what treatment his mother has received for her rheumatoid arthritis, and most of her medical records are within another health system.


On physical examination, the patient is coughing and appears ill. Her temperature is 99.9°F (37.7°C), heart rate 105 beats per minute, blood pressure 140/70 mm Hg, respiratory rate 24 per minute, and oxygen saturation by pulse oximetry 89% on room air. Heart sounds are normal, jugular venous pressure cannot be assessed because of her obese body habitus, pulmonary examination demonstrates crackles in all lung fields, and lower-extremity edema is not present. Her extremities are warm and well perfused. Musculoskeletal examination reveals deformities of the joints in both hands consistent with rheumatoid arthritis.

Laboratory data:

  • White blood cell count 13.0 × 10⁹/L (reference range 3.7–11.0)
  • Hemoglobin level 10 g/dL (11.5–15)
  • Serum creatinine 1.0 mg/dL (0.7–1.4)
  • Pro-brain-type natriuretic peptide (pro-BNP) level greater than the upper limit of normal.

A chest radiograph is obtained, and the resident radiologist’s preliminary impression is that it is consistent with pulmonary vascular congestion.

The patient is admitted for further diagnostic evaluation. The emergency department resident orders intravenous furosemide and signs out to the night float medicine resident that this is an “elderly woman with hypertension, diabetes, and heart failure being admitted for a heart failure exacerbation.”

What is the accuracy of a physician’s initial working diagnosis?

Diagnostic accuracy requires both clinical knowledge and problem-solving skills.1

A decade ago, a National Patient Safety Foundation survey2 found that one in six patients had suffered a medical error related to misdiagnosis. In a large systematic review of autopsy-based diagnostic errors, the theorized rate of major errors ranged from 8.4% to as high as 24.4%.3 A study by Neale et al4 found that admitting diagnoses were incorrect in 6% of cases. In emergency departments, inaccuracy rates of up to 12% have been described.5

What factors influence the prevalence of diagnostic errors?

Initial empiric treatments, such as intravenous furosemide in the above scenario, add to the challenge of diagnosis in acute care settings and can influence clinical decisions made by subsequent providers.6

Nonspecific or vague symptoms make diagnosis especially challenging. Shortness of breath, for example, is a common chief complaint in medical patients, as in this case. Green et al7 found that emergency department physicians reported clinical uncertainty for a diagnosis of heart failure in 31% of patients evaluated for “dyspnea.” Pulmonary embolism and pulmonary tuberculosis are also in the differential diagnosis for our patient, with studies reporting a misdiagnosis rate of 55% for pulmonary embolism8 and 50% for pulmonary tuberculosis.9

Hertwig et al,10 describing the diagnostic process in patients presenting to emergency departments with a nonspecific constellation of symptoms, found particularly low rates of agreement between the initial diagnostic impression and the final, correct one. In fact, the actual diagnosis was among the physician’s initial “top three” differential diagnoses only 29% to 83% of the time.

Atypical presentations of common diseases, initial nonspecific presentations of common diseases, and confounding comorbid conditions have also been associated with misdiagnosis.11 Our case scenario illustrates the frequent challenges physicians face when diagnosing patients who present with nonspecific symptoms and signs on a background of multiple, chronic comorbidities.

Contextual factors in the system and environment contribute to the potential for error.12 Examples include frequent interruptions, time pressure, poor handoffs, insufficient data, and multitasking.

In our scenario, incomplete data, time constraints, and multitasking in a busy work environment compelled the emergency department resident to rapidly synthesize information to establish a working diagnosis. Interpretations of radiographs by on-call radiology residents are similarly at risk of diagnostic error for the same reasons.13

Physician factors also influence diagnosis. Interestingly, physician certainty or uncertainty at the time of initial diagnosis does not uniformly appear to correlate with diagnostic accuracy. A recent study showed that physician confidence remained high regardless of the degree of difficulty in a given case, and degree of confidence also correlated poorly with whether the physician’s diagnosis was accurate.14

For patients admitted with a chief complaint of dyspnea, as in our scenario, Zwaan et al15 showed that “inappropriate selectivity” in reasoning contributed to an inaccurate diagnosis 23% of the time. Inappropriate selectivity, as defined by these authors, occurs when a probable diagnosis is not sufficiently considered and therefore is neither confirmed nor ruled out.

In our patient scenario, the failure to consider diagnoses other than heart failure and the inability to confirm a prior diagnosis of heart failure in the emergency department may contribute to a diagnostic error.

CASE CONTINUED: NO IMPROVEMENT OVER 3 DAYS

The night float resident, who has six other admissions this night, cannot ask the resident who evaluated this patient in the emergency department for further information because the shift has ended. The patient’s son left at the time of admission and is not available when the patient arrives on the medical ward.

The night float resident quickly examines the patient, enters admission orders, and signs the patient out to the intern and resident who will be caring for her during her hospitalization. The verbal handoff notes that the history was limited due to a language barrier. The initial problem list includes heart failure without a differential diagnosis but notes that an elevated pro-BNP level and the chest radiograph confirm heart failure as the likely diagnosis.

Several hours after the night float resident has left, the resident presents this history to the attending physician, and together they decide to order her regular at-home medications, as well as deep vein thrombosis prophylaxis and echocardiography. In entering the orders, the resident erroneously selects subcutaneous heparin once daily instead of low-molecular-weight heparin once daily, as the former is the default in the medical record system. The tired resident fails to recognize this, and the pharmacist does not question it.

Over the next 2 days, the patient’s cough and shortness of breath persist.


On hospital day 3, two junior residents on the team (who finished their internship 2 weeks ago) review the attending radiologist’s interpretation of the chest radiograph. The final report, which was not flagged as discrepant, confirms the resident’s preliminary interpretation but notes ill-defined, scattered, faint opacities. The residents believe that an interstitial pattern may be present and suggest that the patient may not have heart failure but rather a primary pulmonary disease. They bring this to the attention of their attending physician, who dismisses their concerns and comments that heart failure is a clinical diagnosis. The residents do not bring this idea up again to the attending physician.

That night, the float team is called by the nursing staff because of worsening oxygenation and cough. They add an intravenous corticosteroid, a broad-spectrum antibiotic, and an inhaled bronchodilator to the patient’s drug regimen.

How do cognitive errors predispose physicians to diagnostic errors?

When errors in diagnosis are reviewed retrospectively, cognitive or “thinking” errors are generally found, especially in nonprocedural or primary care specialties such as internal medicine, pediatrics, and emergency medicine.16,17

A widely accepted theory on how humans make decisions was described by the psychologists Tversky and Kahneman in 197418 and has been applied more recently to physicians’ diagnostic processes.19 This dual process theory states that persons with a requisite level of expertise use either the intuitive “system 1” process of thinking, based on pattern recognition and heuristics, or the slower, more analytical “system 2” process.20 Experts disagree as to whether in medicine these processes represent a binary either-or model or a continuum,21 with the relative contributions of each process determined by the physician and the task.

What are some common types of cognitive error?

Experts agree that many diagnostic errors in medicine stem from decisions arrived at by inappropriate system 1 thinking due to biases. These biases have been identified and described as they relate to medicine, most notably by Croskerry.22

Several cognitive biases are illustrated in our clinical scenario:

The framing effect occurred when the emergency department resident listed the patient’s admitting diagnosis as heart failure during the clinical handoff of care.

Anchoring bias, as defined by Croskerry,22 is the tendency to lock onto salient features of the case too early in the diagnostic process and then to fail to adjust this initial diagnostic impression. This bias affected the admitting night float resident, primary intern, resident, and attending physician.

Diagnostic momentum, in turn, is a well-described phenomenon that clinical providers are especially vulnerable to in today’s environment of “copy-and-paste” medical records and numerous handovers of care as a consequence of residency duty-hour restrictions.23

Availability bias refers to the tendency to favor diagnoses that are commonly seen, like heart failure, or recently seen, and are therefore more “available” to the human memory. Because these diagnoses spring to mind quickly, they can trick providers into thinking that, being more easily recalled, they are also more common or more likely.

Confirmation bias. The initial working diagnosis of heart failure may have led the medical team to place greater emphasis on the elevated pro-BNP and the chest radiograph to support the initial impression while ignoring findings such as weight loss that do not support this impression.

Blind obedience. Although the residents recognized the possibility of a primary pulmonary disease, they did not investigate this further. When the attending physician dismissed their suggestion, they deferred to the person in authority or with a reputation of expertise.

Overconfidence bias. Despite minimal improvement in the patient’s clinical status after effective diuresis and the suggestion of alternative diagnoses by the residents, the attending physician remained confident—perhaps overconfident—in the diagnosis of heart failure and would not consider alternatives. Overconfidence bias has been well described and occurs when a medical provider believes too strongly in his or her ability to be correct and therefore fails to consider alternative diagnoses.24

Despite succumbing to overconfidence bias, the attending physician was able to overcome base-rate neglect, ie, failure to consider the prevalence of potential diagnoses in diagnostic reasoning.

Table 1. Definitions and representative examples of cognitive biases in the case

Each of these biases, and others not mentioned, can lead to premature closure, which is the unfortunate root cause of many diagnostic errors and delays. We have illustrated several biases in our case scenario that led several physicians on the medical team to prematurely “close” on the diagnosis of heart failure (Table 1).

CASE CONTINUED: SURPRISES AND REASSESSMENT

On hospital day 4, the patient’s medication lists from her previous hospitalizations arrive, and the team is surprised to discover that she has been receiving infliximab for the past 3 to 4 months for her rheumatoid arthritis.

Additionally, an echocardiogram that was ordered on hospital day 1 but was lost in the cardiologist’s reading queue comes in and shows a normal ejection fraction with no evidence of elevated filling pressures.

Computed tomography of the chest reveals a reticular pattern with innumerable, tiny, 1- to 2-mm pulmonary nodules. The differential diagnosis is expanded to include hypersensitivity pneumonitis, lymphoma, fungal infection, and miliary tuberculosis.

How do faulty systems contribute to diagnostic error?

It is increasingly recognized that diagnostic errors can occur as a result of cognitive error, systems-based error, or quite commonly, both. Graber et al17 analyzed 100 cases of diagnostic error and determined that while cognitive errors did occur in most of them, nearly half the time both cognitive and systems-based errors contributed simultaneously. Observers have further delineated the importance of the systems context and how it affects our thinking.25

In this case, the language barrier, lack of availability of family, and inability to promptly utilize interpreter services contributed to early problems in acquiring a detailed history and a complete medication list that included the immunosuppressant infliximab. Later, a systems error led to a delay in the interpretation of an echocardiogram. Each of these factors, if prevented, would have presumably resulted in expansion of the differential diagnosis and earlier arrival at the correct diagnosis.

CASE CONTINUED: THE PATIENT DIES OF TUBERCULOSIS

The patient is moved to a negative pressure room, and the pulmonary consultants recommend bronchoscopy. During the procedure, the patient suffers acute respiratory failure, is intubated, and is transferred to the medical intensive care unit, where a saddle pulmonary embolism is diagnosed by computed tomographic angiography.

One day later, the sputum culture from the bronchoscopy returns as positive for acid-fast bacilli. A four-drug regimen for tuberculosis is started. The patient continues to have a downward course and expires 2 weeks later. Autopsy reveals miliary tuberculosis.

What is the frequency of diagnostic error in medicine?

Diagnostic error is estimated to have a frequency of 10% to 20%.24 Rates of diagnostic error are similar irrespective of method of determination, eg, from autopsy,3 standardized patients (ie, actors presenting with scripted scenarios),26 or case reviews.27 Patient surveys report patient-perceived harm from diagnostic error at a rate of 35% to 42%.28,29 The landmark Harvard Medical Practice Study found that 17% of all adverse events were attributable to diagnostic error.30

Diagnostic error is the most common type of medical error in nonprocedural medical fields.31 It causes a disproportionately large amount of morbidity and death.

Diagnostic error is the most common cause of malpractice claims in the United States. In an analysis of paid malpractice claims in inpatient and outpatient settings for both medical and surgical patients, diagnostic error accounted for 45.9% of all outpatient claims in 2009, making it the most common reason for outpatient medical malpractice litigation.32 A 2013 study indicated that diagnostic error is more common, more expensive, and twice as likely to result in death as any other category of error.33

CASE CONTINUED: MORBIDITY AND MORTALITY CONFERENCE

The patient’s case is brought to a morbidity and mortality conference for discussion. The systems issues in the case—including medication reconciliation, availability of interpreters, and timing and process of echocardiogram readings—are all discussed, but clinical reasoning and cognitive errors made in the case are avoided.

Why are cognitive errors often neglected in discussions of medical error?

Historically, openly discussing error in medicine has been difficult. Over the past decade, however, and fueled by the landmark Institute of Medicine report To Err Is Human, the healthcare community has made substantial strides in identifying and talking about systems factors as a cause of preventable medical error.34,35

While systems contributions to medical error are inherently “external” to physicians and other healthcare providers, the cognitive contributions to error are inherently “internal” and are often considered personal. This has led to diagnostic error being kept out of many patient safety conversations. Further, while the solutions to systems errors are often tangible, such as implementing a fall prevention program or changing the physical packaging of a medication to reduce dispensing or administration errors, cognitive errors are generally considered more challenging for organizations trying to improve patient safety to address.

How can hospitals and department leaders do better?

Healthcare organizations and leaders of clinical teams or departments can implement several strategies.36

First, they can seek out and analyze the causes of diagnostic errors that are occurring locally in their institution and learn from their diagnostic errors, such as the one in our clinical scenario.


Second, they can promote a culture of open communication and questioning around diagnosis. Trainees, physicians, and nurses should be comfortable questioning each other, including those higher up in the hierarchy, by saying, “I’m not sure” or “What else could this be?” to help reduce cognitive bias and expand the diagnostic possibilities.

Similarly, developing strategies to promote feedback on diagnosis among physicians will allow us all to learn from our diagnostic mistakes.

Use of the electronic medical record to assist in follow-up of pending diagnostic studies and patient return visits is yet another strategy.
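
As a concrete illustration of this last strategy, below is a minimal sketch of how tests still pending at discharge might be flagged for follow-up. All names and fields are hypothetical; this is not the data model or API of any real electronic medical record, only an illustration of the follow-up logic under those assumptions.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical representation of an ordered test; the field names are
# illustrative and not drawn from any real EMR vendor's data model.
@dataclass
class OrderedTest:
    name: str
    ordered_on: date
    resulted: bool

def pending_at_discharge(orders: list[OrderedTest]) -> list[OrderedTest]:
    """Return tests that still lack a final result, so the discharging
    physician can be prompted to assign follow-up responsibility."""
    return [order for order in orders if not order.resulted]

# Example: an echocardiogram lost in a reading queue would be flagged here.
orders = [
    OrderedTest("transthoracic echocardiogram", date(2016, 3, 1), resulted=False),
    OrderedTest("basic metabolic panel", date(2016, 3, 1), resulted=True),
]
for order in pending_at_discharge(orders):
    print(f"REMINDER: '{order.name}' ordered {order.ordered_on} is still unresulted.")
```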

Finally, healthcare organizations can adopt strategies to promote patient involvement in diagnosis, such as providing patients with copies of their test results and discharge summaries, encouraging the use of electronic patient communication portals, and empowering patients to ask questions related to their diagnosis. When context and environment make it impossible to implement every proposed intervention at once, prioritizing the potential solutions most likely to reduce diagnostic errors may be helpful.

CASE CONTINUED: LEARNING FROM MISTAKES

The attending physician and resident in the case meet after the conference to review their clinical decision-making. Both are interested in learning from this case and improving their diagnostic skills in the future.

What specific steps can clinicians take to mitigate cognitive bias in daily practice?

In addition to continuing to expand one’s medical knowledge and gain more clinical experience, we suggest several small steps that busy clinicians can take, individually or in combination, to improve diagnostic skills by reducing the potential for biased thinking in clinical practice.

Figure 1. Approaches to decision-making can be located along a continuum, with unconscious, intuitive ones clustering at one end and deliberate, analytical ones at the other. (From Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ 2009; 14:27–35. With kind permission from Springer Science and Business Media.)

Think about your thinking. Our first recommendation would be to become more familiar with the dual process theory of clinical cognition (Figure 1).37,38 This theoretical framework may be very helpful as a foundation from which to build better thinking skills. Physicians (especially residents) and students can be taught these concepts and their potential to contribute to diagnostic errors, and can use these skills to recognize those contributions in others’ diagnostic practices and even in their own.39

Facilitating metacognition, or “thinking about one’s thinking,” may help clinicians catch themselves in thinking traps and provide the opportunity to reflect on biases retrospectively, as a double check or an opportunity to learn from a mistake.

Recognize your emotions. Gaining an understanding of the effect of one’s emotions on decision-making also can help clinicians free themselves of bias. As human beings, healthcare professionals are susceptible to emotion, and the best approach to mitigate the emotional influences may be to consciously name them and adjust for them.40

Because it is impractical to apply slow, analytical system 2 approaches to every case, skills that hone and develop more accurate, reliable system 1 thinking are crucial. Gaining broad exposure to increased numbers of cases may be the most reliable way to build an experiential repertoire of “illness scripts,” but there are ways to increase the experiential value of any case with a few techniques that have potential to promote better intuition.41

Embracing uncertainty in the early diagnostic process and envisioning the worst-case scenario in a case allows the consideration of additional diagnostic paths outside of the current working diagnosis. This potentially primes the clinician to look for and recognize early warning signs that argue against the initial diagnosis while there is still time to adjust course and prevent a bad outcome.

Practice progressive problem-solving,42 a technique in which the physician creates additional challenges to increase the cognitive burden of a “routine” case in an effort to train his or her mind and sharpen intuition. An example of this practice is contemplating a backup treatment plan in advance in the event of a poor response to or an adverse effect of treatment. Highly rated physicians and teachers perform this regularly.43,44 Other ways to maximize the learning value of an individual case include seeking feedback on patient outcomes, especially when a patient has been discharged or transferred to another provider’s care, or when the physician goes off service.

Simulation, traditionally used for procedural training, has potential as well. Cognitive simulation, such as case reports or virtual patient modules, also has the potential to enhance clinical reasoning skills, though possibly at a greater cost in time and expense.

Decreased reliance on memory is likely to improve diagnostic reasoning. Systems tools such as checklists45 and health information technology46 have potential to reduce diagnostic errors, not by taking thinking away from the clinician but by relieving the cognitive load enough to facilitate greater effort toward reasoning.

Slow down. Finally, and perhaps most important, recent models of clinical expertise have suggested that mastery comes from having a robust intuitive method, with a sense of the limitations of the intuitive approach, an ability to recognize the need to perform more analytical reasoning in select cases, and the willingness to do so. In short, it may well be that the hallmark of a master clinician is the propensity to slow down when necessary.47


If one considers diagnosis a cognitive procedure, perhaps a brief “diagnostic time-out” for safety might afford an opportunity to recognize and mitigate biases and errors. There are likely many potential scripts for a good diagnostic time-out, but to be functional it should be brief and simple to facilitate consistent use. We have recommended the following four questions to our residents as a starting point, any of which could signal the need to switch to a slower, analytic approach; a brief sketch of how the time-out might be scripted follows the list.

Four-step diagnostic time-out

  • What else can it be?
  • Is there anything about the case that does not fit?
  • Is it possible that multiple processes are going on?
  • Do I need to slow down?
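
Below is a minimal, illustrative sketch of how this four-question time-out could be scripted, for example as a prompt at the moment an admission diagnosis is entered. It is a hypothetical illustration of the logic, not a validated tool or a real EMR integration.

```python
# The four screening questions of the proposed diagnostic time-out.
TIME_OUT_QUESTIONS = (
    "What else can it be?",
    "Is there anything about the case that does not fit?",
    "Is it possible that multiple processes are going on?",
    "Do I need to slow down?",
)

def needs_analytic_review(answers: dict[str, bool]) -> bool:
    """A 'yes' to any question signals the need to switch from intuitive
    (system 1) to slower, analytical (system 2) reasoning."""
    return any(answers.get(question, False) for question in TIME_OUT_QUESTIONS)

# Example: findings that do not fit (e.g., no response to diuresis)
# trigger a switch to analytic review of the working diagnosis.
answers = {question: False for question in TIME_OUT_QUESTIONS}
answers["Is there anything about the case that does not fit?"] = True
if needs_analytic_review(answers):
    print("Slow down: re-examine the working diagnosis before proceeding.")
```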

These questions can serve as a double check for an intuitively formed initial working diagnosis, incorporating many of the principles discussed above, in a way that would hopefully avoid undue burden on a busy clinician. These techniques, it must be acknowledged, have not yet been directly tied to reductions in diagnostic errors. However, diagnostic errors, as discussed, are very difficult to identify and study, and these techniques will serve mainly to improve habits that are likely to show benefits over much longer time periods than most studies can measure.

References
  1. Kassirer JP. Diagnostic reasoning. Ann Intern Med 1989; 110:893–900.
  2. Golodner L. How the public perceives patient safety. Newsletter of the National Patient Safety Foundation 1997:1–6.
  3. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003; 289:2849–2856.
  4. Neale G, Woloshynowych M, Vincent C. Exploring the causes of adverse events in NHS hospital practice. J R Soc Med 2001; 94:322–330.
  5. Chellis M, Olson J, Augustine J, Hamilton G. Evaluation of missed diagnoses for patients admitted from the emergency department. Acad Emerg Med 2001; 8:125–130.
  6. Tallentire VR, Smith SE, Skinner J, Cameron HS. Exploring error in team-based acute care scenarios: an observational study from the United Kingdom. Acad Med 2012; 87:792–798.
  7. Green SM, Martinez-Rumayor A, Gregory SA, et al. Clinical uncertainty, diagnostic accuracy, and outcomes in emergency department patients presenting with dyspnea. Arch Intern Med 2008; 168:741–748.
  8. Pineda LA, Hathwar VS, Grant BJ. Clinical suspicion of fatal pulmonary embolism. Chest 2001; 120:791–795.
  9. Shojania KG, Burton EC, McDonald KM, Goldman L. The autopsy as an outcome and performance measure. Evid Rep Technol Assess (Summ) 2002; 58:1–5.
  10. Hertwig R, Meier N, Nickel C, et al. Correlates of diagnostic accuracy in patients with nonspecific complaints. Med Decis Making 2013; 33:533–543.
  11. Kostopoulou O, Delaney BC, Munro CW. Diagnostic difficulty and error in primary care—a systematic review. Fam Pract 2008; 25:400–413.
  12. Ogdie AR, Reilly JB, Pang WG, et al. Seen through their eyes: residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med 2012; 87:1361–1367.
  13. Feldmann EJ, Jain VR, Rakoff S, Haramati LB. Radiology residents’ on-call interpretation of chest radiographs for congestive heart failure. Acad Radiol 2007; 14:1264–1270.
  14. Meyer AN, Payne VL, Meeks DW, Rao R, Singh H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013; 173:1952–1958.
  15. Zwaan L, Thijs A, Wagner C, Timmermans DR. Does inappropriate selectivity in information use relate to diagnostic errors and patient harm? The diagnosis of patients with dyspnea. Soc Sci Med 2013; 91:32–38.
  16. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009; 169:1881–1887.
  17. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005; 165:1493–1499.
  18. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974; 185:1124–1131.
  19. Kahneman D. Thinking, fast and slow. New York, NY: Farrar, Straus, and Giroux; 2011.
  20. Croskerry P. A universal model of diagnostic reasoning. Acad Med 2009; 84:1022–1028.
  21. Custers EJ. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning. Acad Med 2013; 88:1074–1080.
  22. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003; 78:775–780.
  23. Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA 2006; 295:2335–2336.
  24. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121(suppl 5):S2–S23.
  25. Henriksen K, Brady J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013; 22(suppl 2):ii1–ii5.
  26. Peabody JW, Luck J, Jain S, Bertenthal D, Glassman P. Assessing the accuracy of administrative data in health information systems. Med Care 2004; 42:1066–1072.
  27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012; 21:737–745.
  28. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med 2002; 347:1933–1940.
  29. Burroughs TE, Waterman AD, Gallagher TH, et al. Patient concerns about medical errors in emergency departments. Acad Emerg Med 2005; 12:57–64.
  30. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991; 324:377–384.
  31. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000; 38:261–271.
  32. Bishop TF, Ryan AM, Casalino LP. Paid malpractice claims for adverse events in inpatient and outpatient settings. JAMA 2011; 305:2427–2431.
  33. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the national practitioner data bank. BMJ Qual Saf 2013; 22:672–680.
  34. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. Washington, DC: The National Academies Press; 2000.
  35. Singh H. Diagnostic errors: moving beyond ‘no respect’ and getting ready for prime time. BMJ Qual Saf 2013; 22:789–792.
  36. Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014; 40:102–110.
  37. Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):27–35.
  38. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):37–49.
  39. Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf 2013; 22:1044–1050.
  40. Croskerry P, Abbass A, Wu AW. Emotional influences in patient safety. J Patient Saf 2010; 6:199–205.
  41. Rajkomar A, Dhaliwal G. Improving diagnostic reasoning to improve patient safety. Perm J 2011; 15:68–73.
  42. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf 2013; 22(suppl 2):ii28–ii32.
  43. Sargeant J, Mann K, Sinclair D, et al. Learning in practice: experiences and perceptions of high-scoring physicians. Acad Med 2006; 81:655–660.
  44. Mylopoulos M, Lohfeld L, Norman GR, Dhaliwal G, Eva KW. Renowned physicians' perceptions of expert diagnostic practice. Acad Med 2012; 87:1413–1417.
  45. Sibbald M, de Bruin AB, van Merrienboer JJ. Checklists improve experts' diagnostic decisions. Med Educ 2013; 47:301–308.
  46. El-Kareh R, Hasan O, Schiff GD. Use of health information technology to reduce diagnostic errors. BMJ Qual Saf 2013; 22(suppl 2):ii40–ii51.
  47. Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: a new model of expert judgment. Acad Med 2007; 82(suppl 10):S109–S116.
Author and Disclosure Information

Nikhil Mull, MD
Assistant Professor of Clinical Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia; Assistant Director, Center for Evidence-based Practice, University of Pennsylvania Health System, Philadelphia, PA

James B. Reilly, MD, MS
Director, Internal Medicine Residency Program, Allegheny Health Network, Pittsburgh, PA; Assistant Professor of Medicine, Temple University, Pittsburgh, PA

Jennifer S. Myers, MD
Associate Professor of Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia

Address: Nikhil Mull, MD, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Penn Tower 2009, Philadelphia, PA 19104; e-mail: [email protected]

Issue
Cleveland Clinic Journal of Medicine - 82(11)
Publications
Topics
Page Number
745-753
Legacy Keywords
Cognitive bias, diagnostic error, medical error, misdiagnosis, heart failure, tuberculosis, Nikhil Mull, James Reilly, Jennifer Myers
Sections
Click for Credit Link
Click for Credit Link
Author and Disclosure Information

Nikhill Mull, MD
Assistant Professor of Clinical Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia; Assistant Director, Center for Evidence-based Practice, University of Pennsylvania Health System, Philadelphia, PA

James B. Reilly, MD, MS
Director, Internal Medicine Residency Program, Allegheny Health Network, Pittsburgh, PA; Assistant Professor of Medicine, Temple University, Pittsburgh, PA

Jennifer S. Myers, MD
Associate Professor of Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia

Address: Nikhil Mull, MD, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Penn Tower 2009, Philadelphia, PA 19104; e-mail: [email protected]

Author and Disclosure Information

Nikhill Mull, MD
Assistant Professor of Clinical Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia; Assistant Director, Center for Evidence-based Practice, University of Pennsylvania Health System, Philadelphia, PA

James B. Reilly, MD, MS
Director, Internal Medicine Residency Program, Allegheny Health Network, Pittsburgh, PA; Assistant Professor of Medicine, Temple University, Pittsburgh, PA

Jennifer S. Myers, MD
Associate Professor of Medicine, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia

Address: Nikhil Mull, MD, Division of General Internal Medicine, Section of Hospital Medicine, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Penn Tower 2009, Philadelphia, PA 19104; e-mail: [email protected]

Article PDF
Article PDF
Related Articles

An elderly Spanish-speaking woman with morbid obesity, diabetes, hypertension, and rheumatoid arthritis presents to the emergency department with worsening shortness of breath and cough. She speaks only Spanish, so her son provides the history without the aid of an interpreter.

Her shortness of breath is most noticeable with exertion and has increased gradually over the past 2 months. She has a nonproductive cough. Her son has noticed decreased oral intake and weight loss over the past few weeks.  She has neither traveled recently nor been in contact with anyone known to have an infectious disease.

A review of systems is otherwise negative: specifically, she denies chest pain, fevers, or chills. She saw her primary care physician 3 weeks ago for these complaints and was prescribed a 3-day course of azithromycin with no improvement.

Her medications include lisinopril, atenolol, glipizide, and metformin; her son believes she may be taking others as well but is not sure. He is also unsure of what treatment his mother has received for her rheumatoid arthritis, and most of her medical records are within another health system.

The patient’s son believes she may be taking other medications but is not sure; her records are at another institution

On physical examination, the patient is coughing and appears ill. Her temperature is 99.9°F (37.7°C), heart rate 105 beats per minute, blood pressure 140/70 mm Hg, res­piratory rate 24 per minute, and oxygen saturation by pulse oximetry 89% on room air. Heart sounds are normal, jugular venous pressure cannot be assessed because of her obese body habitus, pulmonary examination demonstrates crackles in all lung fields, and lower-extremity edema is not present. Her extremities are warm and well perfused. Musculoskeletal examination reveals deformities of the joints in both hands consistent with rheumatoid arthritis.

Laboratory data:

  • White blood cell count 13.0 × 109/L (reference range 3.7–11.0)
  • Hemoglobin level 10 g/dL (11.5–15)
  • Serum creatinine 1.0 mg/dL (0.7–1.4)
  • Pro-brain-type natriuretic peptide (pro-BNP) level greater than the upper limit of normal.

A chest radiograph is obtained, and the resident radiologist’s preliminary impression is that it is consistent with pulmonary vascular congestion.

The patient is admitted for further diagnostic evaluation. The emergency department resident orders intravenous furosemide and signs out to the night float medicine resident that this is an “elderly woman with hypertension, diabetes, and heart failure being admitted for a heart failure exacerbation.”

What is the accuracy of a physician’s initial working diagnosis?

Diagnostic accuracy requires both clinical knowledge and problem-solving skills.1

A decade ago, a National Patient Safety Foundation survey2 found that one in six patients had suffered a medical error related to misdiagnosis. In a large systematic review of autopsy-based diagnostic errors, the theorized rate of major errors ranged from 8.4% to as high as 24.4%.3 A study by Neale et al4 found that admitting diagnoses were incorrect in 6% of cases. In emergency departments, inaccuracy rates of up to 12% have been described.5

What factors influence the prevalence of diagnostic errors?

Initial empiric treatments, such as intravenous furosemide in the above scenario, add to the challenge of diagnosis in acute care settings and can influence clinical decisions made by subsequent providers.6

Nonspecific or vague symptoms make diagnosis especially challenging. Shortness of breath, for example, is a common chief complaint in medical patients, as in this case. Green et al7 found emergency department physicians reported clinical uncertainty for a diagnosis of heart failure in 31% of patients evaluated for “dyspnea.” Pulmonary embolism and pulmonary tuberculosis are also in the differential diagnosis for our patient, with studies reporting a misdiagnosis rate of 55% for pulmonary embolism8 and 50% for pulmonary tuberculosis.9

Hertwig et al,10 describing the diagnostic process in patients presenting to emergency departments with a nonspecific constellation of symptoms, found particularly low rates of agreement between the initial diagnostic impression and the final, correct one. In fact, the actual diagnosis was only in the physician’s initial “top three” differential diagnoses 29% to 83% of the time.

Atypical presentations of common diseases, initial nonspecific presentations of common diseases, and confounding comorbid conditions have also been associated with misdiagnosis.11 Our case scenario illustrates the frequent challenges physicians face when diagnosing patients who present with nonspecific symptoms and signs on a background of multiple, chronic comorbidities.

Contextual factors in the system and environment contribute to the potential for error.12 Examples include frequent interruptions, time pressure, poor handoffs, insufficient data, and multitasking.

In our scenario, incomplete data, time constraints, and multitasking in a busy work environment compelled the emergency department resident to rapidly synthesize information to establish a working diagnosis. Interpretations of radiographs by on-call radiology residents are similarly at risk of diagnostic error for the same reasons.13

Physician factors also influence diagnosis. Interestingly, physician certainty or uncertainty at the time of initial diagnosis does not uniformly appear to correlate with diagnostic accuracy. A recent study showed that physician confidence remained high regardless of the degree of difficulty in a given case, and degree of confidence also correlated poorly with whether the physician’s diagnosis was accurate.14

For patients admitted with a chief complaint of dyspnea, as in our scenario, Zwaan et al15 showed that “inappropriate selectivity” in reasoning contributed to an inaccurate diagnosis 23% of the time. Inappropriate selectivity, as defined by these authors, occurs when a probable diagnosis is not sufficiently considered and therefore is neither confirmed nor ruled out.

In our patient scenario, the failure to consider diagnoses other than heart failure and the inability to confirm a prior diagnosis of heart failure in the emergency department may contribute to a diagnostic error.

 

 

CASE CONTINUED: NO IMPROVEMENT OVER 3 DAYS

The night float resident, who has six other admissions this night, cannot ask the resident who evaluated this patient in the emergency department for further information because the shift has ended. The patient’s son left at the time of admission and is not available when the patient arrives on the medical ward.

The night float resident quickly examines the patient, enters admission orders, and signs the patient out to the intern and resident who will be caring for her during her hospitalization. The verbal handoff notes that the history was limited due to a language barrier. The initial problem list includes heart failure without a differential diagnosis, but notes that an elevated pro-BNP and chest radiograph confirm heart failure as the likely diagnosis.

Several hours after the night float resident has left, the resident presents this history to the attending physician, and together they decide to order her regular at-home medications, as well as deep vein thrombosis prophylaxis and echocardiography. In writing the orders, subcutaneous heparin once daily is erroneously entered instead of low-molecular-weight heparin daily, as this is the default in the medical record system. The tired resident fails to recognize this, and the pharmacist does not question it.

Over the next 2 days, the patient’s cough and shortness of breath persist.

After the attending physician dismisses their concerns, the residents do not bring up their idea again

On hospital day 3, two junior residents on the team (who finished their internship 2 weeks ago) review the attending radiologist’s interpretation of the chest radiograph. Unflagged, it confirms the resident’s interpretation but notes ill-defined, scattered, faint opacities. The residents believe that an interstitial pattern may be present and suggest that the patient may not have heart failure but rather a primary pulmonary disease. They bring this to the attention of their attending physician, who dismisses their concerns and comments that heart failure is a clinical diagnosis. The residents do not bring this idea up again to the attending physician.

That night, the float team is called by the nursing staff because of worsening oxygenation and cough. They add an intravenous corticosteroid, a broad-spectrum antibiotic, and an inhaled bronchodilator to the patient’s drug regimen.

How do cognitive errors predispose physicians to diagnostic errors?

When errors in diagnosis are reviewed retrospectively, cognitive or “thinking” errors are generally found, especially in nonprocedural or primary care specialties such as internal medicine, pediatrics, and emergency medicine.16,17

A widely accepted theory on how humans make decisions was described by the psychologists Tversky and Kahneman in 197418 and has been applied more recently to physicians’ diagnostic processes.19 Their dual process model theory states that persons with a requisite level of expertise use either the intuitive “system 1” process of thinking, based on pattern-recognition and heuristics, or the slower, more analytical “system 2” process.20 Experts disagree as to whether in medicine these processes represent a binary either-or model or a continuum21 with relative contributions of each process determined by the physician and the task.

What are some common types of cognitive error?

Experts agree that many diagnostic errors in medicine stem from decisions arrived at by inappropriate system 1 thinking due to biases. These biases have been identified and described as they relate to medicine, most notably by Croskerry.22

Several cognitive biases are illustrated in our clinical scenario:

The framing effect occurred when the emergency department resident listed the patient’s admitting diagnosis as heart failure during the clinical handoff of care.

Anchoring bias, as defined by Croskerry,22 is the tendency to lock onto salient features of the case too early in the diagnostic process and then to fail to adjust this initial diagnostic impression. This bias affected the admitting night float resident, primary intern, resident, and attending physician.

Diagnostic momentum, the tendency for a working diagnosis to harden into accepted fact as it passes from one provider to the next, is a well-described phenomenon to which clinical providers are especially vulnerable in today’s environment of “copy-and-paste” medical records and numerous handovers of care as a consequence of residency duty-hour restrictions.23

Availability bias refers to the tendency to favor diagnoses that come readily to mind, whether commonly seen (like heart failure) or recently seen. Because such diagnoses spring to mind quickly, they often trick providers into believing that what is more easily recalled is also more common or more likely.

Confirmation bias. The initial working diagnosis of heart failure may have led the medical team to place greater emphasis on the elevated pro-BNP and the chest radiograph to support the initial impression while ignoring findings such as weight loss that do not support this impression.

Blind obedience. Although the residents recognized the possibility of a primary pulmonary disease, they did not investigate it further. When the attending physician dismissed their suggestion, they deferred to the person in authority or with a reputation of expertise.

Overconfidence bias. Despite minimal improvement in the patient’s clinical status after effective diuresis and the suggestion of alternative diagnoses by the residents, the attending physician remained confident—perhaps overconfident—in the diagnosis of heart failure and would not consider alternatives. Overconfidence bias has been well described and occurs when a medical provider believes too strongly in his or her ability to be correct and therefore fails to consider alternative diagnoses.24

Despite succumbing to overconfidence bias, the attending physician did avoid base-rate neglect, ie, the failure to consider the prevalence of potential diagnoses in diagnostic reasoning; heart failure is, after all, a far more common cause of dyspnea than miliary tuberculosis.
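
Base-rate neglect is, at bottom, a failure of Bayesian reasoning. As a minimal sketch of the arithmetic (the prevalence and test-performance figures below are hypothetical and are not clinical estimates from this case), the same positive test raises the probability of a common disease far more than that of a rare one:

  # Hypothetical illustration of why base rates matter (Bayes' theorem).
  # All numbers are invented for illustration; none are clinical estimates.
  def post_test_probability(prior, sensitivity, specificity):
      """Probability of disease given a positive test result."""
      true_positives = sensitivity * prior
      false_positives = (1 - specificity) * (1 - prior)
      return true_positives / (true_positives + false_positives)

  # Identical test performance, very different priors:
  common = post_test_probability(prior=0.30, sensitivity=0.90, specificity=0.80)
  rare = post_test_probability(prior=0.001, sensitivity=0.90, specificity=0.80)
  print(f"Post-test probability, common disease (prior 30%): {common:.2f}")  # ~0.66
  print(f"Post-test probability, rare disease (prior 0.1%): {rare:.3f}")     # ~0.004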

Table 1. Definitions and representative examples of cognitive biases in the case

Each of these biases, and others not mentioned, can lead to premature closure, which is the unfortunate root cause of many diagnostic errors and delays. We have illustrated several biases in our case scenario that led several physicians on the medical team to prematurely “close” on the diagnosis of heart failure (Table 1).

CASE CONTINUED: SURPRISES AND REASSESSMENT

On hospital day 4, the patient’s medication lists from her previous hospitalizations arrive, and the team is surprised to discover that she has been receiving infliximab for the past 3 to 4 months for her rheumatoid arthritis.

Additionally, the echocardiogram that was ordered on hospital day 1 but lost in the cardiologist’s reading queue is finally read: it shows a normal ejection fraction with no evidence of elevated filling pressures.

Computed tomography of the chest reveals a reticular pattern with innumerable, tiny, 1- to 2-mm pulmonary nodules. The differential diagnosis is expanded to include hypersensitivity pneumonitis, lymphoma, fungal infection, and miliary tuberculosis.

How do faulty systems contribute to diagnostic error?

It is increasingly recognized that diagnostic errors can occur as a result of cognitive error, systems-based error, or, quite commonly, both. Graber et al17 analyzed 100 cases of diagnostic error and determined that while cognitive errors occurred in most of them, nearly half the time cognitive and systems-based errors contributed simultaneously. Observers have further delineated the importance of the systems context and how it affects our thinking.25

In this case, the language barrier, the unavailability of family, and the inability to promptly use interpreter services contributed to early problems in acquiring a detailed history and a complete medication list that included the immunosuppressant infliximab. Later, a systems error delayed the interpretation of the echocardiogram. Had each of these factors been prevented, the differential diagnosis would presumably have been expanded and the correct diagnosis reached earlier.

CASE CONTINUED: THE PATIENT DIES OF TUBERCULOSIS

The patient is moved to a negative pressure room, and the pulmonary consultants recommend bronchoscopy. During the procedure, the patient suffers acute respiratory failure, is intubated, and is transferred to the medical intensive care unit, where a saddle pulmonary embolism is diagnosed by computed tomographic angiography.

One day later, the sputum culture from the bronchoscopy returns as positive for acid-fast bacilli. A four-drug regimen for tuberculosis is started. The patient continues to have a downward course and expires 2 weeks later. Autopsy reveals miliary tuberculosis.

What is the frequency of diagnostic error in medicine?

Diagnostic error is estimated to have a frequency of 10% to 20%.24 Rates of diagnostic error are similar irrespective of method of determination, eg, from autopsy,3 standardized patients (ie, actors presenting with scripted scenarios),26 or case reviews.27 Patient surveys report patient-perceived harm from diagnostic error at a rate of 35% to 42%.28,29 The landmark Harvard Medical Practice Study found that 17% of all adverse events were attributable to diagnostic error.30

Diagnostic error is the most common type of medical error in nonprocedural medical fields.31 It causes a disproportionately large amount of morbidity and death.

Diagnostic error is the most common cause of malpractice claims in the United States. In an analysis of paid claims for both medical and surgical patients, diagnostic error accounted for 45.9% of outpatient claims in 2009, making it the most common reason for medical malpractice litigation.32 A 2013 study indicated that diagnostic error is more common, more expensive, and twice as likely to result in death as any other category of error.33

CASE CONTINUED: MORBIDITY AND MORTALITY CONFERENCE

The patient’s case is brought to a morbidity and mortality conference for discussion. The systems issues in the case—including medication reconciliation, availability of interpreters, and the timing and process of echocardiogram readings—are all discussed, but discussion of the clinical reasoning and cognitive errors in the case is avoided.

Why are cognitive errors often neglected in discussions of medical error?

Historically, openly discussing error in medicine has been difficult. Over the past decade, however, and fueled by the landmark Institute of Medicine report To Err is Human,34 the healthcare community has made substantial strides in identifying and talking about systems factors as a cause of preventable medical error.34,35

While systems contributions to medical error are inherently “external” to physicians and other healthcare providers, the cognitive contributions to error are inherently “internal” and are often considered personal. This has kept diagnostic error out of many patient safety conversations. Further, while the solutions to systems errors are often tangible, such as implementing a fall prevention program or changing the physical packaging of a medication to reduce dispensing or administration errors, solutions to cognitive errors are generally considered more challenging for organizations trying to improve patient safety.

How can hospitals and department leaders do better?

Healthcare organizations and leaders of clinical teams or departments can implement several strategies.36

First, they can seek out and analyze the causes of diagnostic errors occurring locally in their institutions and learn from them, as with the error in our clinical scenario.

Second, they can promote a culture of open communication and questioning around diagnosis. Trainees, physicians, and nurses should be comfortable questioning each other, including those higher up in the hierarchy, by saying, “I’m not sure” or “What else could this be?” to help reduce cognitive bias and expand the diagnostic possibilities.

Similarly, developing strategies to promote feedback on diagnosis among physicians will allow us all to learn from our diagnostic mistakes.

Use of the electronic medical record to assist in follow-up of pending diagnostic studies and patient return visits is yet another strategy.
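
As a rough sketch of that idea (the data model and field names here are hypothetical; real EMR integrations are far more involved), the core of such a safety net is simply a queue of unresulted studies that can be checked at handoffs or discharge:

  # Minimal sketch of a pending-results tracker; all names are hypothetical.
  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class PendingStudy:
      patient_id: str
      study: str
      ordered_on: date
      resulted: bool = False

  def studies_needing_followup(studies, max_days_outstanding, today):
      """Return unresulted studies older than the allowed turnaround time."""
      return [s for s in studies
              if not s.resulted
              and (today - s.ordered_on).days > max_days_outstanding]

  # Usage: an echocardiogram lost in a reading queue would surface here.
  queue = [PendingStudy("pt-001", "echocardiogram", ordered_on=date(2015, 11, 1))]
  for s in studies_needing_followup(queue, max_days_outstanding=1, today=date(2015, 11, 4)):
      print(f"Follow up: {s.study} for patient {s.patient_id}, ordered {s.ordered_on}")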

Finally, healthcare organizations can adopt strategies to promote patient involvement in diagnosis, such as providing patients with copies of their test results and discharge summaries, encouraging the use of electronic patient communication portals, and empowering patients to ask questions related to their diagnosis. When not every proposed intervention is feasible, prioritizing potential solutions according to the local context and environment may be helpful.

CASE CONTINUED: LEARNING FROM MISTAKES

The attending physician and resident in the case meet after the conference to review their clinical decision-making. Both are interested in learning from this case and improving their diagnostic skills in the future.

What specific steps can clinicians take to mitigate cognitive bias in daily practice?

In addition to continuing to expand one’s medical knowledge and gaining more clinical experience, we suggest several small steps that busy clinicians can take, individually or in combination, to improve diagnostic skills by reducing the potential for biased thinking in clinical practice.

Figure 1. Approaches to decision-making can be located along a continuum, with unconscious, intuitive ones clustering at one end and deliberate, analytical ones at the other. (From Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ 2009; 14:27–35. Reprinted with kind permission from Springer Science and Business Media.)

Think about your thinking. Our first recommendation is to become more familiar with the dual process theory of clinical cognition (Figure 1).37,38 This theoretical framework can serve as a foundation from which to build better thinking skills. Physicians, especially residents and students, can be taught these concepts and their potential to contribute to diagnostic errors, and can use these skills to recognize those contributions in others’ diagnostic practices and even in their own.39

Facilitating metacognition, or “thinking about one’s thinking,” may help clinicians catch themselves in thinking traps and provide the opportunity to reflect on biases retrospectively, as a double check or an opportunity to learn from a mistake.

Recognize your emotions. Gaining an understanding of the effect of one’s emotions on decision-making can also help clinicians free themselves of bias. As human beings, healthcare professionals are susceptible to emotion, and the best approach to mitigating emotional influences may be to consciously name them and adjust for them.40

Because it is impractical to apply the slow, analytical system 2 approach to every case, skills that hone and develop more accurate, reliable system 1 thinking are crucial. Broad exposure to a high volume of cases may be the most reliable way to build an experiential repertoire of “illness scripts,” but a few techniques can increase the experiential value of any case and promote better intuition.41

Embracing uncertainty early in the diagnostic process and envisioning the worst-case scenario allow the clinician to consider diagnostic paths outside the current working diagnosis. This can prime the clinician to look for and recognize early warning signs that argue against the initial diagnosis, at a time when an adjustment can still prevent a bad outcome.

Practice progressive problem-solving,42 a technique in which the physician creates additional challenges to increase the cognitive burden of a “routine” case in an effort to train the mind and sharpen intuition. An example is contemplating a backup treatment plan in advance, in case of a poor response to treatment or an adverse effect. Highly rated physicians and teachers do this regularly.43,44 Other ways to maximize the learning value of an individual case include seeking feedback on patient outcomes, especially when a patient has been discharged or transferred to another provider’s care, or when the physician goes off service.

Simulation, traditionally used for procedural training, has potential here as well. Cognitive simulation, such as case reports or virtual patient modules, may enhance clinical reasoning skills, though possibly at greater cost in time and expense.

Decreased reliance on memory is likely to improve diagnostic reasoning. Systems tools such as checklists45 and health information technology46 have potential to reduce diagnostic errors, not by taking thinking away from the clinician but by relieving the cognitive load enough to facilitate greater effort toward reasoning.

Slow down. Finally, and perhaps most important, recent models of clinical expertise suggest that mastery comes from having a robust intuitive method, a sense of the limitations of the intuitive approach, the ability to recognize the need for more analytical reasoning in select cases, and the willingness to perform it. In short, it may well be that the hallmark of a master clinician is the propensity to slow down when necessary.47

If one considers diagnosis a cognitive procedure, perhaps a brief “diagnostic time-out” for safety might afford an opportunity to recognize and mitigate biases and errors. There are likely many potential scripts for a good diagnostic time-out, but to be functional it should be brief and simple to facilitate consistent use. We have recommended the following four questions to our residents as a starting point, any of which could signal the need to switch to a slower, analytic approach.

Four-step diagnostic time-out

  • What else can it be?
  • Is there anything about the case that does not fit?
  • Is it possible that multiple processes are going on?
  • Do I need to slow down?

These questions can serve as a double check for an intuitively formed initial working diagnosis, incorporating many of the principles discussed above, in a way that would hopefully avoid undue burden on a busy clinician. These techniques, it must be acknowledged, have not yet been directly tied to reductions in diagnostic errors. However, diagnostic errors, as discussed, are very difficult to identify and study, and these techniques will serve mainly to improve habits that are likely to show benefits over much longer time periods than most studies can measure.
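
For readers who wish to operationalize the time-out, for example in a teaching exercise or a note template, the logic is small enough to express in a few lines. The sketch below is illustrative only; the trigger for switching to analytic reasoning is an assumption, not a validated rule:

  # Minimal sketch of a diagnostic time-out prompt; illustrative only.
  TIME_OUT_QUESTIONS = [
      "What else can it be?",
      "Is there anything about the case that does not fit?",
      "Is it possible that multiple processes are going on?",
      "Do I need to slow down?",
  ]

  def diagnostic_time_out(flagged_indices):
      """flagged_indices: positions (0-3) of questions that raised concern."""
      flagged = [TIME_OUT_QUESTIONS[i] for i in sorted(flagged_indices)]
      if flagged:
          return ("Switch to slower, analytic reasoning. Flags:\n"
                  + "\n".join(f"  - {q}" for q in flagged))
      return "No flags raised; proceed, but stay open to new data."

  # Usage: the residents' concern that an interstitial pattern "does not fit"
  # the working diagnosis would flag the second question.
  print(diagnostic_time_out({1}))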

References
  1. Kassirer JP. Diagnostic reasoning. Ann Intern Med 1989; 110:893–900.
  2. Golodner L. How the public perceives patient safety. Newsletter of the National Patient Safety Foundation 2004; 1997:1–6.
  3. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003; 289:2849–2856.
  4. Neale G, Woloshynowych M, Vincent C. Exploring the causes of adverse events in NHS hospital practice. J R Soc Med 2001; 94:322–330.
  5. Chellis M, Olson J, Augustine J, Hamilton G. Evaluation of missed diagnoses for patients admitted from the emergency department. Acad Emerg Med 2001; 8:125–130.
  6. Tallentire VR, Smith SE, Skinner J, Cameron HS. Exploring error in team-based acute care scenarios: an observational study from the United Kingdom. Acad Med 2012; 87:792–798.
  7. Green SM, Martinez-Rumayor A, Gregory SA, et al. Clinical uncertainty, diagnostic accuracy, and outcomes in emergency department patients presenting with dyspnea. Arch Intern Med 2008; 168:741–748.
  8. Pineda LA, Hathwar VS, Grant BJ. Clinical suspicion of fatal pulmonary embolism. Chest 2001; 120:791–795.
  9. Shojania KG, Burton EC, McDonald KM, Goldman L. The autopsy as an outcome and performance measure. Evid Rep Technol Assess (Summ) 2002; 58:1–5.
  10. Hertwig R, Meier N, Nickel C, et al. Correlates of diagnostic accuracy in patients with nonspecific complaints. Med Decis Making 2013; 33:533–543.
  11. Kostopoulou O, Delaney BC, Munro CW. Diagnostic difficulty and error in primary care—a systematic review. Fam Pract 2008; 25:400–413.
  12. Ogdie AR, Reilly JB, Pang WG, et al. Seen through their eyes: residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med 2012; 87:1361–1367.
  13. Feldmann EJ, Jain VR, Rakoff S, Haramati LB. Radiology residents’ on-call interpretation of chest radiographs for congestive heart failure. Acad Radiol 2007; 14:1264–1270.
  14. Meyer AN, Payne VL, Meeks DW, Rao R, Singh H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013; 173:1952–1958.
  15. Zwaan L, Thijs A, Wagner C, Timmermans DR. Does inappropriate selectivity in information use relate to diagnostic errors and patient harm? The diagnosis of patients with dyspnea. Soc Sci Med 2013; 91:32–38.
  16. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009; 169:1881–1887.
  17. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005; 165:1493–1499.
  18. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974; 185:1124–1131.
  19. Kahneman D. Thinking, fast and slow. New York, NY: Farrar, Straus, and Giroux; 2011.
  20. Croskerry P. A universal model of diagnostic reasoning. Acad Med 2009; 84:1022–1028.
  21. Custers EJ. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning. Acad Med 2013; 88:1074–1080.
  22. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003; 78:775–780.
  23. Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA 2006; 295:2335–2336.
  24. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121(suppl 5):S2–S23.
  25. Henriksen K, Brady J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013; 22(suppl 2):ii1–ii5.
  26. Peabody JW, Luck J, Jain S, Bertenthal D, Glassman P. Assessing the accuracy of administrative data in health information systems. Med Care 2004; 42:1066–1072.
  27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012; 21:737–745.
  28. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med 2002; 347:1933–1940.
  29. Burroughs TE, Waterman AD, Gallagher TH, et al. Patient concerns about medical errors in emergency departments. Acad Emerg Med 2005; 12:57–64.
  30. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991; 324:377–384.
  31. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000; 38:261–271.
  32. Bishop TF, Ryan AM, Casalino LP. Paid malpractice claims for adverse events in inpatient and outpatient settings. JAMA 2011; 305:2427–2431.
  33. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the national practitioner data bank. BMJ Qual Saf 2013; 22:672–680.
  34. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. Washington, DC: The National Academies Press; 2000.
  35. Singh H. Diagnostic errors: moving beyond ‘no respect’ and getting ready for prime time. BMJ Qual Saf 2013; 22:789–792.
  36. Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014; 40:102–110.
  37. Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):27–35.
  38. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract 2009; 14(suppl 1):37–49.
  39. Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf 2013; 22:1044–1050.
  40. Croskerry P, Abbass A, Wu AW. Emotional influences in patient safety. J Patient Saf 2010; 6:199–205.
  41. Rajkomar A, Dhaliwal G. Improving diagnostic reasoning to improve patient safety. Perm J 2011; 15:68–73.
  42. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf 2013; 22(suppl 2):ii28–ii32.
  43. Sargeant J, Mann K, Sinclair D, et al. Learning in practice: experiences and perceptions of high-scoring physicians. Acad Med 2006; 81:655–660.
  44. Mylopoulos M, Lohfeld L, Norman GR, Dhaliwal G, Eva KW. Renowned physicians' perceptions of expert diagnostic practice. Acad Med 2012; 87:1413–1417.
  45. Sibbald M, de Bruin AB, van Merrienboer JJ. Checklists improve experts' diagnostic decisions. Med Educ 2013; 47:301–308.
  46. El-Kareh R, Hasan O, Schiff GD. Use of health information technology to reduce diagnostic errors. BMJ Qual Saf 2013; 22(suppl 2):ii40–ii51.
  47. Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: a new model of expert judgment. Acad Med 2007; 82(suppl 10):S109–S116.

KEY POINTS

  • Diagnostic errors are common and lead to bad outcomes.
  • Factors that increase the risk of diagnostic error include initial empiric treatment, nonspecific or vague symptoms, atypical presentation, confounding comorbid conditions, contextual factors, and physician factors.
  • Common types of cognitive error include the framing effect, anchoring bias, diagnostic momentum, availability bias, confirmation bias, blind obedience, overconfidence bias, base-rate neglect, and premature closure.
  • Organizations and leaders can implement strategies to reduce diagnostic errors.

Hospital quality and patient safety competencies: Development, description, and recommendations for use

Healthcare quality is defined as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.1 Delivering high-quality care to patients in the hospital setting is especially challenging, given the rapid pace of clinical care, the severity and multitude of patient conditions, and the interdependence of complex processes within the hospital system. Research has shown that hospitalized patients do not consistently receive recommended care2 and are at risk for experiencing preventable harm.3 In an effort to stimulate improvement, stakeholders have called for increased accountability, including enhanced transparency and differential payment based on performance. A growing number of hospital process and outcome measures are readily available to the public via the Internet.4–6 The Joint Commission, which accredits US hospitals, requires the collection of core quality measure data7 and sets the expectation that National Patient Safety Goals be met to maintain accreditation.8 Moreover, the Centers for Medicare and Medicaid Services (CMS) has developed a Value-Based Purchasing (VBP) plan intended to adjust hospital payment based on quality measures and the occurrence of certain hospital-acquired conditions.9, 10

Because of their clinical expertise, understanding of hospital clinical operations, leadership of multidisciplinary inpatient teams, and vested interest to improve the systems in which they work, hospitalists are perfectly positioned to collaborate with their institutions to improve the quality of care delivered to inpatients. However, many hospitalists are inadequately prepared to engage in efforts to improve quality, because medical schools and residency programs have not traditionally included or emphasized healthcare quality and patient safety in their curricula.11–13 In a survey of 389 internal medicine‐trained hospitalists, significant educational deficiencies were identified in the area of systems‐based practice.14 Specifically, the topics of quality improvement, team management, practice guideline development, health information systems management, and coordination of care between healthcare settings were listed as essential skills for hospitalist practice but underemphasized in residency training. Recognizing the gap between the needs of practicing physicians and current medical education provided in healthcare quality, professional societies have recently published position papers calling for increased training in quality, safety, and systems, both in medical school11 and residency training.15, 16

The Society of Hospital Medicine (SHM) convened a Quality Summit in December 2008 to develop strategic plans related to healthcare quality. Summit attendees felt that most hospitalists lack the formal training necessary to evaluate, implement, and sustain system changes within the hospital. In response, the SHM Hospital Quality and Patient Safety (HQPS) Committee formed a Quality Improvement Education (QIE) subcommittee in 2009 to assess the needs of hospitalists with respect to hospital quality and patient safety, and to evaluate and expand upon existing educational programs in this area. Membership of the QIE subcommittee consisted of hospitalists with extensive experience in healthcare quality and medical education. The QIE subcommittee refined and expanded upon the healthcare quality and patient safety‐related competencies initially described in the Core Competencies in Hospital Medicine.17 The purpose of this report is to describe the development, provide definitions, and make recommendations on the use of the Hospital Quality and Patient Safety (HQPS) Competencies.

Development of The Hospital Quality and Patient Safety Competencies

The multistep process used by the SHM QIE subcommittee to develop the HQPS Competencies is summarized in Figure 1. We performed an in‐depth evaluation of current educational materials and offerings, including a review of the Core Competencies in Hospital Medicine, past annual SHM Quality Improvement Pre‐Course objectives, and the content of training courses offered by other organizations.17–22 Throughout our analysis, we emphasized the identification of gaps in content relevant to hospitalists. We then used the Institute of Medicine's (IOM) 6 aims for healthcare quality as a foundation for developing the HQPS Competencies.1 Specifically, the IOM states that healthcare should be safe, effective, patient‐centered, timely, efficient, and equitable. Additionally, we reviewed and integrated elements of the Practice‐Based Learning and Improvement (PBLI) and Systems‐Based Practice (SBP) competencies as defined by the Accreditation Council for Graduate Medical Education (ACGME).23 We defined general areas of competence and specific standards for knowledge, skills, and attitudes within each area. Subcommittee members reflected on their own experience, as clinicians, educators, and leaders in healthcare quality and patient safety, to inform and refine the competency definitions and standards. Acknowledging that some hospitalists may serve as collaborators or clinical content experts, while others may serve as leaders of hospital quality initiatives, 3 levels of expertise were established: basic, intermediate, and advanced.

Figure 1
Hospital quality and patient safety competency process and timeline. Abbreviations: HQPS, hospital quality and patient safety; QI, quality improvement; SHM, Society of Hospital Medicine.

The QIE subcommittee presented a draft version of the HQPS Competencies to the HQPS Committee in the fall of 2009 and incorporated suggested revisions. The revised set of competencies was then reviewed by members of the Leadership and Education Committees during the winter of 2009‐2010, and additional recommendations were included in the final version now described.

Description of The Competencies

The 8 areas of competence are: Quality Measurement and Stakeholder Interests, Data Acquisition and Interpretation, Organizational Knowledge and Leadership Skills, Patient Safety Principles, Teamwork and Communication, Quality and Safety Improvement Methods, Health Information Systems, and Patient Centeredness. Three levels of competence and standards within each level and area are defined in Table 1. Standards use carefully selected action verbs to reflect educational goals for hospitalists at each level.24 The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist who is prepared to meaningfully engage and collaborate with his or her institution in quality improvement efforts. A hospitalist at this level may also lead uncomplicated improvement projects for his or her medical center and/or hospital medicine group. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or hospital medicine group. Many hospitalists at this level will have, or will be prepared to have, leadership positions in quality and patient safety at their institutions. Advanced level hospitalists will also have the expertise to teach and mentor other individuals in their quality improvement efforts.

Table 1. Hospitalist Competencies in Healthcare Quality and Patient Safety

  • NOTE: The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist prepared to meaningfully collaborate with his or her institution in quality improvement efforts. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or group.

  • Abbreviation: PDSA, Plan Do Study Act.

Quality measurement and stakeholder interests
  • Basic: Define structure, process, and outcome measures. Define stakeholders and understand their interests related to healthcare quality. Identify measures as defined by stakeholders (Centers for Medicare and Medicaid Services, Leapfrog, etc). Describe potential unintended consequences of quality measurement and incentive programs.
  • Intermediate: Compare and contrast relative benefits of using one type of measure vs another. Explain measures as defined by stakeholders (Centers for Medicare and Medicaid Services, Leapfrog, etc). Appreciate variation in quality and utilization performance.
  • Advanced: Anticipate and respond to stakeholders' needs and interests. Anticipate and respond to changes in quality measures and incentive programs. Lead efforts to reduce variation in care delivery (see also quality improvement methods). Avoid unintended consequences of quality measurement and incentive programs.

Data acquisition and interpretation (a brief illustrative sketch of these statistical skills follows the table)
  • Basic: Interpret simple statistical methods used to compare populations within a sample (chi-square, t tests, etc). Define basic terms used to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc). Summarize basic principles of statistical process control. Interpret data displayed in Pareto and Control Charts. Summarize basic survey techniques (including methods to maximize response, minimize bias, and use of ordinal response scales).
  • Intermediate: Describe sources of data for quality measurement. Identify potential pitfalls in administrative data. Explain variation in data. Use appropriate terms to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc).
  • Advanced: Acquire data from internal and external sources. Create visual representations of data (Bar, Pareto, and Control Charts). Use simple statistical methods to compare populations within a sample (chi-square, t tests, etc). Administer and interpret a survey.

Organizational knowledge and leadership skills
  • Basic: Describe the organizational structure of one's institution. Define leaders within the organization and describe their roles. Exemplify the importance of leading by example. Effectively communicate quality or safety issues identified during routine patient care to the appropriate parties.
  • Intermediate: Define interests of internal and external stakeholders. Collaborate as an effective team member of a quality improvement project. Explain principles of change management and how it can positively or negatively impact quality improvement project implementation.
  • Advanced: Effectively negotiate with stakeholders. Assemble a quality improvement project team and effectively lead meetings (setting agendas, holding members accountable, etc). Motivate change and create vision for ideal state. Communicate effectively in a variety of settings (lead a meeting, public speaking, etc). Serve as a resource and/or mentor for less-experienced team members.

Patient safety principles
  • Basic: Identify potential sources of error encountered during routine patient care. Compare and contrast medical error with adverse event. Describe how the systems approach to medical error is more productive than assigning individual blame. Differentiate among types of error (knowledge/judgment vs systems vs procedural/technical; latent vs active). Explain the role that incident reporting plays in quality improvement efforts and how reporting can foster a culture of safety. Describe principles of medical error disclosure.
  • Intermediate: Compare methods to measure errors and adverse events, including administrative data analysis, chart review, and incident reporting systems. Identify and explain how human factors can contribute to medical errors. Know the difference between a strong vs a weak action plan for improvement (ie, a brief educational intervention is weak; skills training with deliberate practice or physical changes are stronger).
  • Advanced: Lead efforts to appropriately measure medical error and/or adverse events. Lead efforts to redesign systems to reduce errors from occurring; this may include the facilitation of a hospital, departmental, or divisional Root Cause Analysis. Lead efforts to advance the culture of patient safety in the hospital.

Teamwork and communication
  • Basic: Explain how poor teamwork and communication failures contribute to adverse events. Identify the potential for errors during transitions within and between healthcare settings (handoffs, transfers, discharge).
  • Intermediate: Collaborate on administration and interpretation of teamwork and safety culture measures. Describe the principles of effective teamwork and identify behaviors consistent with effective teamwork. Identify deficiencies in transitions within and between healthcare settings (handoffs, transfers, discharge).
  • Advanced: Lead efforts to improve teamwork and safety culture. Lead efforts to improve teamwork in specific settings (intensive care, medical-surgical unit, etc). Successfully improve the safety of transitions within and between healthcare settings (handoffs, transfers, discharge).

Quality and safety improvement methods and tools
  • Basic: Define the quality improvement methods used and infrastructure in place at one's hospital. Summarize the basic principles and use of Root Cause Analysis as a tool to evaluate medical error.
  • Intermediate: Compare and contrast various quality improvement methods, including six sigma, lean, and PDSA. Collaborate on a quality improvement project using six sigma, lean, or PDSA. Describe and collaborate on Failure Mode and Effects Analysis. Actively participate in a Root Cause Analysis.
  • Advanced: Lead a quality improvement project using six sigma, lean, or PDSA methodology. Use high-level process mapping, fishbone diagrams, etc, to identify areas of opportunity in evaluating a process. Lead the development and implementation of clinical protocols to standardize care delivery when appropriate. Conduct Failure Mode and Effects Analysis. Conduct Root Cause Analysis.

Health information systems
  • Basic: Identify the potential for information systems to reduce as well as contribute to medical error. Describe how information systems fit into provider workflow and care delivery.
  • Intermediate: Define types of clinical decision support. Collaborate on the design of health information systems.
  • Advanced: Lead or co-lead efforts to leverage information systems in quality measurement. Lead or co-lead efforts to leverage information systems to reduce error and/or improve delivery of effective care. Anticipate and prevent unintended consequences of implementation or revision of information systems. Lead or co-lead efforts to leverage clinical decision support to improve quality and safety.

Patient centeredness
  • Basic: Explain the clinical benefits of a patient-centered approach. Identify system barriers to effective and safe care from the patient's perspective. Describe the value of patient satisfaction surveys and patient and family partnership in care.
  • Intermediate: Explain benefits and potential limitations of patient satisfaction surveys. Identify clinical areas with suboptimal efficiency and/or timeliness from the patient's perspective. Promote patient and caregiver education, including use of effective education tools.
  • Advanced: Interpret data from patient satisfaction surveys and lead efforts to improve patient satisfaction. Lead efforts to reduce inefficiency and/or improve timeliness from the patient's perspective. Lead efforts to eliminate system barriers to effective and safe care from the patient's perspective. Lead efforts to improve patient and caregiver education, including development or implementation of effective education tools. Lead efforts to actively involve patients and families in the redesign of healthcare delivery systems and processes.
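
The Data Acquisition and Interpretation standards name concrete statistical skills: interpreting and using simple comparisons such as chi-square tests, and applying statistical process control to data displayed in Control Charts. The minimal sketch below makes those methods concrete; it is illustrative only, not part of the competencies. All counts, rates, and group labels are synthetic, and the control limits use the common mean plus or minus 3 standard deviations approximation rather than any single prescribed SPC method.

```python
# Minimal sketch of the statistical skills named in the Data Acquisition and
# Interpretation standards. All data below are synthetic and illustrative.
import numpy as np
from scipy import stats

# Chi-square comparison of a quality measure between two hypothetical groups:
# counts of patients who did vs did not receive a recommended therapy.
group_a = [180, 20]  # hypothetical group A: [received, missed]
group_b = [160, 40]  # hypothetical group B: [received, missed]
chi2, p_value, dof, expected = stats.chi2_contingency([group_a, group_b])
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# Statistical process control: center line and approximate 3-sigma control
# limits for a series of monthly compliance rates (synthetic values).
monthly_rates = np.array([0.88, 0.91, 0.85, 0.90, 0.87, 0.93, 0.89, 0.86])
center = monthly_rates.mean()
sigma = monthly_rates.std(ddof=1)
upper, lower = center + 3 * sigma, center - 3 * sigma
print(f"center = {center:.3f}, control limits = ({lower:.3f}, {upper:.3f})")

# Points outside the control limits would suggest special-cause variation
# worth investigating, rather than routine month-to-month noise.
out_of_control = (monthly_rates > upper) | (monthly_rates < lower)
print("months flagged:", np.flatnonzero(out_of_control))
```

In practice, a hospitalist would draw such counts and rates from local quality-measure data; the point here is only to show what "interpret and use simple statistical methods" looks like at the keyboard.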

Recommended Use of The Competencies

The HQPS Competencies provide a framework for curricula and other professional development experiences in healthcare quality and patient safety. We recommend a step‐wise approach to curriculum development that includes conducting a targeted needs assessment, defining goals and specific learning objectives, and evaluating the curriculum.25 The HQPS Competencies can be used at each step and provide educational targets for learners across a range of interest and experience.

Professional Development

Since residency programs historically have not trained their graduates to achieve a basic level of competence, practicing hospitalists will need to seek out professional development opportunities. Existing educational opportunities include the Quality Track sessions during the SHM Annual Meeting and the SHM Quality Improvement Pre‐Course. Hospitalist leaders are currently using the HQPS Competencies to review and revise annual meeting and pre‐course objectives and content in an effort to meet the expected level of competence for SHM members. Similarly, local SHM Chapter and regional hospital medicine leaders should look to the competencies to help select topics and objectives for future presentations. Additionally, the SHM Web site offers tools to develop skills, including a resource room and quality improvement primer.26 Mentored‐implementation programs, supported by SHM, can help hospitalists acquire more advanced experiential training in quality improvement.

New educational opportunities are being developed, including a comprehensive set of Internet‐based modules designed to help practicing hospitalists achieve a basic level of competence. Hospitalists will be able to earn continuing medical education (CME) credit upon completion of individual modules. Plans are underway to provide Certification in Hospital Quality and Patient Safety, reflecting an advanced level of competence, upon completion of the entire set and demonstration of knowledge and skill application through an approved quality improvement project. The certification process will leverage the success of the SHM Leadership Academies and Mentored Implementation projects to help hospitalists apply their new skills in a real-world setting.

HQPS Competencies and Focused Practice in Hospital Medicine

Recently, the American Board of Internal Medicine (ABIM) has recognized the field of hospital medicine by developing a new program that provides hospitalists the opportunity to earn Maintenance of Certification (MOC) in Internal Medicine with a Focused Practice in Hospital Medicine.27 Appropriately, hospital quality and patient safety content is included among the knowledge questions on the secure exam, and completion of a practice improvement module (commonly known as PIM) is required for the certification. The SHM Education Committee has developed a Self‐Evaluation of Medical Knowledge module related to hospital quality and patient safety for use in the MOC process. ABIM recertification with Focused Practice in Hospital Medicine is an important and visible step for the Hospital Medicine movement; the content of both the secure exam and the MOC reaffirms the notion that the acquisition of knowledge, skills, and attitudes in hospital quality and patient safety is essential to the practice of hospital medicine.

Medical Education

Because teaching hospitalists frequently serve in important roles as educators and physician leaders in quality improvement, they are often responsible for medical student and resident training in healthcare quality and patient safety. Medical schools and residency programs have struggled to integrate healthcare quality and patient safety into their curricula.11, 12, 28 Hospitalists can play a major role in academic medical centers by helping to develop curricular materials and evaluations related to healthcare quality. Though intended primarily for future and current hospitalists, the HQPS Competencies and standards for the basic level may be adapted to provide educational targets for many learners in undergraduate and graduate medical education. Teaching hospitalists may use these standards to evaluate current educational efforts and design new curricula in collaboration with their medical school and residency program leaders.

Beyond the basic level of training in healthcare quality required for all, many residents will benefit from more advanced training experiences, including opportunities to apply knowledge and develop skills related to quality improvement. A recent report from the ACGME concluded that role models and mentors were essential for engaging residents in quality improvement efforts.29 Hospitalists are ideally suited to serve as role models during residents' experiential learning opportunities related to hospital quality. Several residency programs have begun to implement hospitalist tracks13 and quality improvement rotations.30–32 Additionally, some academic medical centers have begun to develop and offer fellowship training in Hospital Medicine.33 These hospitalist‐led educational programs are an ideal opportunity to teach the intermediate and advanced components of healthcare quality and patient safety to residents and fellows who wish to incorporate activity or leadership in quality improvement and patient safety science into their generalist or subspecialty careers. Teaching hospitalists should use the HQPS competency standards to define learning objectives for trainees at this stage of development.

To address the enormous educational needs in quality and safety for future physicians, a cadre of expert teachers in quality and safety will need to be developed. In collaboration with the Alliance for Academic Internal Medicine (AAIM), SHM is developing a Quality and Safety Educators Academy, which will target academic hospitalists and other medical educators interested in developing advanced skills in quality improvement and patient safety education.

Assessment of Competence

An essential component of a rigorous faculty development program or medical education initiative is the assessment of whether these endeavors are achieving their stated aims. Published literature provides examples of useful assessment methods applicable to the HQPS Competencies. Knowledge in several areas of HQPS competence may be assessed with the use of multiple choice tests.34, 35 Knowledge of quality improvement methods may be assessed using the Quality Improvement Knowledge Application Tool (QIKAT), an instrument in which the learner responds to each of 3 scenarios with an aim, outcome and process measures, and ideas for changes that may result in improved performance.36 Teamwork and communication skills may be assessed using 360‐degree evaluations37–39 and direct observation using behaviorally anchored rating scales.40–43 Objective structured clinical examinations have been used to assess knowledge and skills related to patient safety principles.44, 45 Notably, few studies have rigorously assessed the validity and reliability of tools designed to evaluate competence related to healthcare quality.46 Additionally, to our knowledge, no prior research has evaluated assessment specifically for hospitalists. Thus, the development and validation of new assessment tools based on the HQPS Competencies for learners at each level is a crucial next step in the educational process. Finally, evaluation of educational initiatives should include analyses of clinical benefit, as the ultimate goal of these efforts is to improve patient care.47, 48
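
To make the QIKAT structure described above concrete, the sketch below shows one hypothetical way a program might record a learner's response to a scenario for later rating. This is not the QIKAT instrument itself; the class name, fields, and 1-5 rating scale are assumptions introduced purely for illustration.

```python
# Hypothetical record of a QIKAT-style scenario response (not the official
# instrument): each response captures an aim, outcome and process measures,
# and proposed changes, mirroring the structure described in the text.
from dataclasses import dataclass, field

@dataclass
class QikatResponse:
    scenario_id: int
    aim: str                     # the learner's improvement aim
    outcome_measures: list[str]  # what would show care actually improved
    process_measures: list[str]  # what would show the process changed
    proposed_changes: list[str]  # change ideas likely to yield improvement
    rater_scores: list[int] = field(default_factory=list)  # assumed 1-5 scale

    def mean_score(self) -> float:
        """Average across raters; callers should ensure scores exist first."""
        return sum(self.rater_scores) / len(self.rater_scores)

# Illustrative usage with invented content and scores.
response = QikatResponse(
    scenario_id=1,
    aim="Increase VTE prophylaxis ordering on admission within 6 months",
    outcome_measures=["Monthly rate of hospital-acquired VTE"],
    process_measures=["Percent of admissions with prophylaxis ordered in 24 h"],
    proposed_changes=["Order-set default", "Audit and feedback to teams"],
    rater_scores=[4, 5],
)
print(f"mean rater score: {response.mean_score():.1f}")
```

Structuring responses this way would let a program aggregate scores across the 3 scenarios and across raters, in the spirit of the multi-rater instruments cited above.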

Conclusion

Hospitalists are poised to have a tremendous impact on improving the quality of care for hospitalized patients. The lack of training in quality improvement in traditional medical education programs, in which most current hospitalists were trained, can be overcome through appropriate use of the HQPS Competencies. Formal incorporation of the HQPS Competencies into professional development programs, innovative educational initiatives, and curricula will help provide current and future generations of hospitalists with the skills needed to succeed.

References
  1. Crossing the Quality Chasm: A New Health System for the Twenty-first Century. Washington, DC: Institute of Medicine; 2001.
  2. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–274.
  3. Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868–1874.
  4. Hospital Compare—A quality tool provided by Medicare. Available at: http://www.hospitalcompare.hhs.gov/. Accessed April 23, 2010.
  5. The Leapfrog Group: Hospital Quality Ratings. Available at: http://www.leapfroggroup.org/cp. Accessed April 30, 2010.
  6. Why Not the Best? A Healthcare Quality Improvement Resource. Available at: http://www.whynotthebest.org/. Accessed April 30, 2010.
  7. The Joint Commission: Facts about ORYX for hospitals (National Hospital Quality Measures). Available at: http://www.jointcommission.org/accreditationprograms/hospitals/oryx/oryx_facts.htm. Accessed August 19, 2010.
  8. The Joint Commission: National Patient Safety Goals. Available at: http://www.jointcommission.org/patientsafety/nationalpatientsafetygoals/. Accessed August 9, 2010.
  9. Hospital Acquired Conditions: Overview. Available at: http://www.cms.gov/HospitalAcqCond/01_Overview.asp. Accessed April 30, 2010.
  10. Report to Congress: Plan to Implement a Medicare Hospital Value-Based Purchasing Program. Washington, DC: US Department of Health and Human Services, Center for Medicare and Medicaid Services; 2007.
  11. Unmet Needs: Teaching Physicians to Provide Safe Patient Care. Boston, MA: Lucian Leape Institute at the National Patient Safety Foundation; 2010.
  12. Alper E, Rosenberg EI, O'Brien KE, Fischer M, Durning SJ. Patient safety education at U.S. and Canadian medical schools: results from the 2006 Clerkship Directors in Internal Medicine survey. Acad Med. 2009;84(12):1672–1676.
  13. Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med. 2008;23(7):1110–1115.
  14. Plauth WH, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247–254.
  15. Fitzgibbons JP, Bordley DR, Berkowitz LR, Miller BW, Henderson MC. Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine. Ann Intern Med. 2006;144(12):920–926.
  16. Weinberger SE, Smith LG, Collier VU. Redesigning training for internal medicine. Ann Intern Med. 2006;144(12):927–932.
  17. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(1):48–56.
  18. Intermountain Healthcare. 20-Day Course for Executives. 2001.
  19. Kern DE, Thomas PA, Bass EB, Howard DM. Curriculum Development for Medical Education: A Six-Step Approach. Baltimore, MD: Johns Hopkins Press; 1998.
  20. Society of Hospital Medicine Quality Improvement Basics. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/QualityImprovement/QIPrimer/QI_Primer_Landing_Pa.htm. Accessed June 4, 2010.
  21. American Board of Internal Medicine: Questions and Answers Regarding ABIM's Maintenance of Certification in Internal Medicine With a Focused Practice in Hospital Medicine Program. Available at: http://www.abim.org/news/news/focused-practice-hospital-medicine-qa.aspx. Accessed August 9, 2010.
  22. Heard JK, Allen RM, Clardy J. Assessing the needs of residency program directors to meet the ACGME general competencies. Acad Med. 2002;77(7):750.
  23. Philibert I. Accreditation Council for Graduate Medical Education and Institute for Healthcare Improvement 90-Day Project. Involving Residents in Quality Improvement: Contrasting "Top-Down" and "Bottom-Up" Approaches. Chicago, IL: ACGME; 2008.
  24. Oyler J, Vinci L, Arora V, Johnson J. Teaching internal medicine residents quality improvement techniques using the ABIM's practice improvement modules. J Gen Intern Med. 2008;23(7):927–930.
  25. Peters AS, Kimura J, Ladden MD, March E, Moore GT. A self-instructional model to teach systems-based practice and practice-based learning and improvement. J Gen Intern Med. 2008;23(7):931–936.
  26. Weingart SN, Tess A, Driver J, Aronson MD, Sands K. Creating a quality improvement elective for medical house officers. J Gen Intern Med. 2004;19(8):861–867.
  27. Ranji SR, Rosenman DJ, Amin AN, Kripalani S. Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1–e7.
  28. Kerfoot BP, Conlin PR, Travison T, McMahon GT. Web-based education in systems-based practice: a randomized trial. Arch Intern Med. 2007;167(4):361–366.
  29. Peters AS, Kimura J, Ladden MD, March E, Moore GT. A self-instructional model to teach systems-based practice and practice-based learning and improvement. J Gen Intern Med. 2008;23(7):931–936.
  30. Morrison L, Headrick L, Ogrinc G, Foster T. The quality improvement knowledge application tool: an instrument to assess knowledge application in practice-based learning and improvement. J Gen Intern Med. 2003;18(suppl 1):250.
  31. Brinkman WB, Geraghty SR, Lanphear BP, et al. Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial. Arch Pediatr Adolesc Med. 2007;161(1):44–49.
  32. Massagli TL, Carline JD. Reliability of a 360-degree evaluation to assess resident competence. Am J Phys Med Rehabil. 2007;86(10):845–852.
  33. Musick DW, McDowell SM, Clark N, Salcido R. Pilot study of a 360-degree assessment instrument for physical medicine 82(5):394–402.
  34. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists' non-technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90(5):580–588.
  35. Malec JF, Torsher LC, Dunn WF, et al. The Mayo high performance teamwork scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc. 2007;2(1):4–10.
  36. Sevdalis N, Davis R, Koutantji M, Undre S, Darzi A, Vincent CA. Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg. 2008;196(2):184–190.
  37. Sevdalis N, Lyons M, Healey AN, Undre S, Darzi A, Vincent CA. Observational teamwork assessment for surgery: construct validation with expert versus novice raters. Ann Surg. 2009;249(6):1047–1051.
  38. Singh R, Singh A, Fish R, McLean D, Anderson DR, Singh G. A patient safety objective structured clinical examination. J Patient Saf. 2009;5(2):55–60.
  39. Varkey P, Natt N. The Objective Structured Clinical Examination as an educational tool in patient safety. Jt Comm J Qual Patient Saf. 2007;33(1):48–53.
  40. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84(3):301–309.
  41. Boonyasai RT, Windish DM, Chakraborti C, Feldman LS, Rubin HR, Bass EB. Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA. 2007;298(9):1023–1037.
  42. Windish DM, Reed DA, Boonyasai RT, Chakraborti C, Bass EB. Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med. 2009;84(12):1677–1692.
Issue
Journal of Hospital Medicine - 6(9)
Page Number
530-536
Sections
Files
Files
Article PDF
Article PDF

Healthcare quality is defined as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.1 Delivering high quality care to patients in the hospital setting is especially challenging, given the rapid pace of clinical care, the severity and multitude of patient conditions, and the interdependence of complex processes within the hospital system. Research has shown that hospitalized patients do not consistently receive recommended care2 and are at risk for experiencing preventable harm.3 In an effort to stimulate improvement, stakeholders have called for increased accountability, including enhanced transparency and differential payment based on performance. A growing number of hospital process and outcome measures are readily available to the public via the Internet.46 The Joint Commission, which accredits US hospitals, requires the collection of core quality measure data7 and sets the expectation that National Patient Safety Goals be met to maintain accreditation.8 Moreover, the Center for Medicare and Medicaid Services (CMS) has developed a Value‐Based Purchasing (VBP) plan intended to adjust hospital payment based on quality measures and the occurrence of certain hospital‐acquired conditions.9, 10

Because of their clinical expertise, understanding of hospital clinical operations, leadership of multidisciplinary inpatient teams, and vested interest to improve the systems in which they work, hospitalists are perfectly positioned to collaborate with their institutions to improve the quality of care delivered to inpatients. However, many hospitalists are inadequately prepared to engage in efforts to improve quality, because medical schools and residency programs have not traditionally included or emphasized healthcare quality and patient safety in their curricula.1113 In a survey of 389 internal medicine‐trained hospitalists, significant educational deficiencies were identified in the area of systems‐based practice.14 Specifically, the topics of quality improvement, team management, practice guideline development, health information systems management, and coordination of care between healthcare settings were listed as essential skills for hospitalist practice but underemphasized in residency training. Recognizing the gap between the needs of practicing physicians and current medical education provided in healthcare quality, professional societies have recently published position papers calling for increased training in quality, safety, and systems, both in medical school11 and residency training.15, 16

The Society of Hospital Medicine (SHM) convened a Quality Summit in December 2008 to develop strategic plans related to healthcare quality. Summit attendees felt that most hospitalists lack the formal training necessary to evaluate, implement, and sustain system changes within the hospital. In response, the SHM Hospital Quality and Patient Safety (HQPS) Committee formed a Quality Improvement Education (QIE) subcommittee in 2009 to assess the needs of hospitalists with respect to hospital quality and patient safety, and to evaluate and expand upon existing educational programs in this area. Membership of the QIE subcommittee consisted of hospitalists with extensive experience in healthcare quality and medical education. The QIE subcommittee refined and expanded upon the healthcare quality and patient safety‐related competencies initially described in the Core Competencies in Hospital Medicine.17 The purpose of this report is to describe the development, provide definitions, and make recommendations on the use of the Hospital Quality and Patient Safety (HQPS) Competencies.

Development of The Hospital Quality and Patient Safety Competencies

The multistep process used by the SHM QIE subcommittee to develop the HQPS Competencies is summarized in Figure 1. We performed an in‐depth evaluation of current educational materials and offerings, including a review of the Core Competencies in Hospital Medicine, past annual SHM Quality Improvement Pre‐Course objectives, and the content of training courses offered by other organizations.1722 Throughout our analysis, we emphasized the identification of gaps in content relevant to hospitalists. We then used the Institute of Medicine's (IOM) 6 aims for healthcare quality as a foundation for developing the HQPS Competencies.1 Specifically, the IOM states that healthcare should be safe, effective, patient‐centered, timely, efficient, and equitable. Additionally, we reviewed and integrated elements of the Practice‐Based Learning and Improvement (PBLI) and Systems‐Based Practice (SBP) competencies as defined by the Accreditation Council for Graduate Medical Education (ACGME).23 We defined general areas of competence and specific standards for knowledge, skills, and attitudes within each area. Subcommittee members reflected on their own experience, as clinicians, educators, and leaders in healthcare quality and patient safety, to inform and refine the competency definitions and standards. Acknowledging that some hospitalists may serve as collaborators or clinical content experts, while others may serve as leaders of hospital quality initiatives, 3 levels of expertise were established: basic, intermediate, and advanced.

Figure 1
Hospital quality and patient safety competency process and timeline. Abbreviations: HQPS, hospital quality and patient safety; QI, quality improvement; SHM, Society of Hospital Medicine.

The QIE subcommittee presented a draft version of the HQPS Competencies to the HQPS Committee in the fall of 2009 and incorporated suggested revisions. The revised set of competencies was then reviewed by members of the Leadership and Education Committees during the winter of 2009‐2010, and additional recommendations were included in the final version now described.

Description of The Competencies

The 8 areas of competence include: Quality Measurement and Stakeholder Interests, Data Acquisition and Interpretation, Organizational Knowledge and Leadership Skills, Patient Safety Principles, Teamwork and Communication, Quality and Safety Improvement Methods, Health Information Systems, and Patient Centeredness. Three levels of competence and standards within each level and area are defined in Table 1. Standards use carefully selected action verbs to reflect educational goals for hospitalists at each level.24 The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist who is prepared to meaningfully engage and collaborate with his or her institution in quality improvement efforts. A hospitalist at this level may also lead uncomplicated improvement projects for his or her medical center and/or hospital medicine group. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or hospital medicine group. Many hospitalists at this level will have, or will be prepared to have, leadership positions in quality and patient safety at their institutions. Advanced level hospitalists will also have the expertise to teach and mentor other individuals in their quality improvement efforts.

Hospitalist Competencies in Healthcare Quality and Patient Safety
Competency Basic Intermediate Advanced
  • NOTE: The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist prepared to meaningfully collaborate with his or her institution in quality improvement efforts. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or group.

  • Abbreviation: PDSA, Plan Do Study Act.

Quality measurement and stakeholder interests Define structure, process, and outcome measures Compare and contrast relative benefits of using one type of measure vs another Anticipate and respond to stakeholders' needs and interests
Define stakeholders and understand their interests related to healthcare quality Explain measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) Anticipate and respond to changes in quality measures and incentive programs
Identify measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) Appreciate variation in quality and utilization performance Lead efforts to reduce variation in care delivery (see also quality improvement methods)
Describe potential unintended consequences of quality measurement and incentive programs Avoid unintended consequences of quality measurement and incentive programs
Data acquisition and interpretation Interpret simple statistical methods to compare populations within a sample (chi‐square, t tests, etc) Describe sources of data for quality measurement Acquire data from internal and external sources
Define basic terms used to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc) Identify potential pitfalls in administrative data Create visual representations of data (Bar, Pareto, and Control Charts)
Summarize basic principles of statistical process control Explain variation in data Use simple statistical methods to compare populations within a sample (chi‐square, t tests, etc)
Interpret data displayed in Pareto and Control Charts Administer and interpret a survey
Summarize basic survey techniques (including methods to maximize response, minimize bias, and use of ordinal response scales)
Use appropriate terms to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc)
Organizational knowledge and leadership skills Describe the organizational structure of one's institution Define interests of internal and external stakeholders Effectively negotiate with stakeholders
Define leaders within the organization and describe their roles Collaborate as an effective team member of a quality improvement project Assemble a quality improvement project team and effectively lead meetings (setting agendas, hold members accountable, etc)
Exemplify the importance of leading by example Explain principles of change management and how it can positively or negatively impact quality improvement project implementation Motivate change and create vision for ideal state
Effectively communicate quality or safety issues identified during routine patient care to the appropriate parties Communicate effectively in a variety of settings (lead a meeting, public speaking, etc)
Serve as a resource and/or mentor for less‐experienced team members
Patient safety principles Identify potential sources of error encountered during routine patient care Compare methods to measure errors and adverse events, including administrative data analysis, chart review, and incident reporting systems Lead efforts to appropriately measure medical error and/or adverse events
Compare and contrast medical error with adverse event Identify and explain how human factors can contribute to medical errors Lead efforts to redesign systems to reduce errors from occurring; this may include the facilitation of a hospital, departmental, or divisional Root Cause Analysis
Describe how the systems approach to medical error is more productive than assigning individual blame Know the difference between a strong vs a weak action plan for improvement (ie, brief education intervention is weak; skills training with deliberate practice or physical changes are stronger) Lead efforts to advance the culture of patient safety in the hospital
Differentiate among types of error (knowledge/judgment vs systems vs procedural/technical; latent vs active)
Explain the role that incident reporting plays in quality improvement efforts and how reporting can foster a culture of safety
Describe principles of medical error disclosure
Teamwork and communication Explain how poor teamwork and communication failures contribute to adverse events Collaborate on administration and interpretation of teamwork and safety culture measures Lead efforts to improve teamwork and safety culture
Identify the potential for errors during transitions within and between healthcare settings (handoffs, transfers, discharge) Describe the principles of effective teamwork and identify behaviors consistent with effective teamwork Lead efforts to improve teamwork in specific settings (intensive care, medical‐surgical unit, etc)
Identify deficiencies in transitions within and between healthcare settings (handoffs, transfers, discharge) Successfully improve the safety of transitions within and between healthcare settings (handoffs, transfers, discharge)
Quality and safety improvement methods and tools Define the quality improvement methods used and infrastructure in place at one's hospital Compare and contrast various quality improvement methods, including six sigma, lean, and PDSA Lead a quality improvement project using six sigma, lean, or PDSA methodology
Summarize the basic principles and use of Root Cause Analysis as a tool to evaluate medical error Collaborate on a quality improvement project using six sigma, lean, or PDSA Use high level process mapping, fishbone diagrams, etc, to identify areas for opportunity in evaluating a process
Describe and collaborate on Failure Mode and Effects Analysis Lead the development and implementation of clinical protocols to standardize care delivery when appropriate
Actively participate in a Root Cause Analysis Conduct Failure Mode and Effects Analysis
Conduct Root Cause Analysis
Health information systems Identify the potential for information systems to reduce as well as contribute to medical error Define types of clinical decision support Lead or co‐lead efforts to leverage information systems in quality measurement
Describe how information systems fit into provider workflow and care delivery Collaborate on the design of health information systems Lead or co‐lead efforts to leverage information systems to reduce error and/or improve delivery of effective care
Anticipate and prevent unintended consequences of implementation or revision of information systems
Lead or co‐lead efforts to leverage clinical decision support to improve quality and safety
Patient centeredness Explain the clinical benefits of a patient‐centered approach Explain benefits and potential limitations of patient satisfaction surveys Interpret data from patient satisfaction surveys and lead efforts to improve patient satisfaction
Identify system barriers to effective and safe care from the patient's perspective Identify clinical areas with suboptimal efficiency and/or timeliness from the patient's perspective Lead effort to reduce inefficiency and/or improve timeliness from the patient's perspective
Describe the value of patient satisfaction surveys and patient and family partnership in care Promote patient and caregiver education including use of effective education tools Lead efforts to eliminate system barriers to effective and safe care from the patient's perspective
Lead efforts to improve patent and caregiver education including development or implementation of effective education tools
Lead efforts to actively involve patients and families in the redesign of healthcare delivery systems and processes

Recommended Use of The Competencies

The HQPS Competencies provide a framework for curricula and other professional development experiences in healthcare quality and patient safety. We recommend a step‐wise approach to curriculum development which includes conducting a targeted needs assessment, defining goals and specific learning objectives, and evaluation of the curriculum.25 The HQPS Competencies can be used at each step and provide educational targets for learners across a range of interest and experience.

Professional Development

Since residency programs historically have not trained their graduates to achieve a basic level of competence, practicing hospitalists will need to seek out professional development opportunities. Some educational opportunities which already exist include the Quality Track sessions during the SHM Annual Meeting, and the SHM Quality Improvement Pre‐Course. Hospitalist leaders are currently using the HQPS Competencies to review and revise annual meeting and pre‐course objectives and content in an effort to meet the expected level of competence for SHM members. Similarly, local SHM Chapter and regional hospital medicine leaders should look to the competencies to help select topics and objectives for future presentations. Additionally, the SHM Web site offers tools to develop skills, including a resource room and quality improvement primer.26 Mentored‐implementation programs, supported by SHM, can help hospitalists' acquire more advanced experiential training in quality improvement.

New educational opportunities are being developed, including a comprehensive set of Internet‐based modules designed to help practicing hospitalists achieve a basic level of competence. Hospitalists will be able to achieve continuing medical education (CME) credit upon completion of individual modules. Plans are underway to provide Certification in Hospital Quality and Patient Safety, reflecting an advanced level of competence, upon completion of the entire set, and demonstration of knowledge and skill application through an approved quality improvement project. The certification process will leverage the success of the SHM Leadership Academies and Mentored Implementation projects to help hospitalists apply their new skills in a real world setting.

HQPS Competencies and Focused Practice in Hospital Medicine

Recently, the American Board of Internal Medicine (ABIM) has recognized the field of hospital medicine by developing a new program that provides hospitalists the opportunity to earn Maintenance of Certification (MOC) in Internal Medicine with a Focused Practice in Hospital Medicine.27 Appropriately, hospital quality and patient safety content is included among the knowledge questions on the secure exam, and completion of a practice improvement module (commonly known as PIM) is required for the certification. The SHM Education Committee has developed a Self‐Evaluation of Medical Knowledge module related to hospital quality and patient safety for use in the MOC process. ABIM recertification with Focused Practice in Hospital Medicine is an important and visible step for the Hospital Medicine movement; the content of both the secure exam and the MOC reaffirms the notion that the acquisition of knowledge, skills, and attitudes in hospital quality and patient safety is essential to the practice of hospital medicine.

Medical Education

Because teaching hospitalists frequently serve in important roles as educators and physician leaders in quality improvement, they are often responsible for medical student and resident training in healthcare quality and patient safety. Medical schools and residency programs have struggled to integrate healthcare quality and patient safety into their curricula.11, 12, 28 Hospitalists can play a major role in academic medical centers by helping to develop curricular materials and evaluations related to healthcare quality. Though intended primarily for future and current hospitalists, the HQPS Competencies and standards for the basic level may be adapted to provide educational targets for many learners in undergraduate and graduate medical education. Teaching hospitalists may use these standards to evaluate current educational efforts and design new curricula in collaboration with their medical school and residency program leaders.

Beyond the basic level of training in healthcare quality required for all, many residents will benefit from more advanced training experiences, including opportunities to apply knowledge and develop skills related to quality improvement. A recent report from the ACGME concluded that role models and mentors were essential for engaging residents in quality improvement efforts.29 Hospitalists are ideally suited to serve as role models during residents' experiential learning opportunities related to hospital quality. Several residency programs have begun to implement hospitalist tracks13 and quality improvement rotations.3032 Additionally, some academic medical centers have begun to develop and offer fellowship training in Hospital Medicine.33 These hospitalist‐led educational programs are an ideal opportunity to teach the intermediate and advanced training components, of healthcare quality and patient safety, to residents and fellows that wish to incorporate activity or leadership in quality improvement and patient safety science into their generalist or subspecialty careers. Teaching hospitalists should use the HQPS competency standards to define learning objectives for trainees at this stage of development.

To address the enormous educational needs in quality and safety for future physicians, a cadre of expert teachers in quality and safety will need to be developed. In collaboration with the Alliance for Academic Internal Medicine (AAIM), SHM is developing a Quality and Safety Educators Academy which will target academic hospitalists and other medical educators interested in developing advanced skills in quality improvement and patient safety education.

Assessment of Competence

An essential component of a rigorous faculty development program or medical education initiative is the assessment of whether these endeavors are achieving their stated aims. Published literature provides examples of useful assessment methods applicable to the HQPS Competencies. Knowledge in several areas of HQPS competence may be assessed with the use of multiple choice tests.34, 35 Knowledge of quality improvement methods may be assessed using the Quality Improvement Knowledge Application Tool (QIKAT), an instrument in which the learner responds to each of 3 scenarios with an aim, outcome and process measures, and ideas for changes which may result in improved performance.36 Teamwork and communication skills may be assessed using 360‐degree evaluations3739 and direct observation using behaviorally anchored rating scales.4043 Objective structured clinical examinations have been used to assess knowledge and skills related to patient safety principles.44, 45 Notably, few studies have rigorously assessed the validity and reliability of tools designed to evaluate competence related to healthcare quality.46 Additionally, to our knowledge, no prior research has evaluated assessment specifically for hospitalists. Thus, the development and validation of new assessment tools based on the HQPS Competencies for learners at each level is a crucial next step in the educational process. Additionally, evaluation of educational initiatives should include analyses of clinical benefit, as the ultimate goal of these efforts is to improve patient care.47, 48

Conclusion

Hospitalists are poised to have a tremendous impact on improving the quality of care for hospitalized patients. The lack of training in quality improvement in traditional medical education programs, in which most current hospitalists were trained, can be overcome through appropriate use of the HQPS Competencies. Formal incorporation of the HQPS Competencies into professional development programs, and innovative educational initiatives and curricula, will help provide current hospitalists and the next generations of hospitalists with the needed skills to be successful.

Healthcare quality is defined as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.1 Delivering high quality care to patients in the hospital setting is especially challenging, given the rapid pace of clinical care, the severity and multitude of patient conditions, and the interdependence of complex processes within the hospital system. Research has shown that hospitalized patients do not consistently receive recommended care2 and are at risk for experiencing preventable harm.3 In an effort to stimulate improvement, stakeholders have called for increased accountability, including enhanced transparency and differential payment based on performance. A growing number of hospital process and outcome measures are readily available to the public via the Internet.46 The Joint Commission, which accredits US hospitals, requires the collection of core quality measure data7 and sets the expectation that National Patient Safety Goals be met to maintain accreditation.8 Moreover, the Center for Medicare and Medicaid Services (CMS) has developed a Value‐Based Purchasing (VBP) plan intended to adjust hospital payment based on quality measures and the occurrence of certain hospital‐acquired conditions.9, 10

Because of their clinical expertise, understanding of hospital clinical operations, leadership of multidisciplinary inpatient teams, and vested interest to improve the systems in which they work, hospitalists are perfectly positioned to collaborate with their institutions to improve the quality of care delivered to inpatients. However, many hospitalists are inadequately prepared to engage in efforts to improve quality, because medical schools and residency programs have not traditionally included or emphasized healthcare quality and patient safety in their curricula.1113 In a survey of 389 internal medicine‐trained hospitalists, significant educational deficiencies were identified in the area of systems‐based practice.14 Specifically, the topics of quality improvement, team management, practice guideline development, health information systems management, and coordination of care between healthcare settings were listed as essential skills for hospitalist practice but underemphasized in residency training. Recognizing the gap between the needs of practicing physicians and current medical education provided in healthcare quality, professional societies have recently published position papers calling for increased training in quality, safety, and systems, both in medical school11 and residency training.15, 16

The Society of Hospital Medicine (SHM) convened a Quality Summit in December 2008 to develop strategic plans related to healthcare quality. Summit attendees felt that most hospitalists lack the formal training necessary to evaluate, implement, and sustain system changes within the hospital. In response, the SHM Hospital Quality and Patient Safety (HQPS) Committee formed a Quality Improvement Education (QIE) subcommittee in 2009 to assess the needs of hospitalists with respect to hospital quality and patient safety, and to evaluate and expand upon existing educational programs in this area. Membership of the QIE subcommittee consisted of hospitalists with extensive experience in healthcare quality and medical education. The QIE subcommittee refined and expanded upon the healthcare quality and patient safety‐related competencies initially described in the Core Competencies in Hospital Medicine.17 The purpose of this report is to describe the development, provide definitions, and make recommendations on the use of the Hospital Quality and Patient Safety (HQPS) Competencies.

Development of the Hospital Quality and Patient Safety Competencies

The multistep process used by the SHM QIE subcommittee to develop the HQPS Competencies is summarized in Figure 1. We performed an in-depth evaluation of current educational materials and offerings, including a review of the Core Competencies in Hospital Medicine, past annual SHM Quality Improvement Pre-Course objectives, and the content of training courses offered by other organizations.17-22 Throughout our analysis, we emphasized the identification of gaps in content relevant to hospitalists. We then used the Institute of Medicine's (IOM) 6 aims for healthcare quality as a foundation for developing the HQPS Competencies.1 Specifically, the IOM states that healthcare should be safe, effective, patient-centered, timely, efficient, and equitable. Additionally, we reviewed and integrated elements of the Practice-Based Learning and Improvement (PBLI) and Systems-Based Practice (SBP) competencies as defined by the Accreditation Council for Graduate Medical Education (ACGME).23 We defined general areas of competence and specific standards for knowledge, skills, and attitudes within each area. Subcommittee members reflected on their own experience, as clinicians, educators, and leaders in healthcare quality and patient safety, to inform and refine the competency definitions and standards. Because some hospitalists serve as collaborators or clinical content experts while others lead hospital quality initiatives, we established 3 levels of expertise: basic, intermediate, and advanced.

Figure 1. Hospital quality and patient safety competency process and timeline. Abbreviations: HQPS, hospital quality and patient safety; QI, quality improvement; SHM, Society of Hospital Medicine.

The QIE subcommittee presented a draft version of the HQPS Competencies to the HQPS Committee in the fall of 2009 and incorporated suggested revisions. The revised set of competencies was then reviewed by members of the Leadership and Education Committees during the winter of 2009‐2010, and additional recommendations were included in the final version now described.

Description of the Competencies

The 8 areas of competence are: Quality Measurement and Stakeholder Interests, Data Acquisition and Interpretation, Organizational Knowledge and Leadership Skills, Patient Safety Principles, Teamwork and Communication, Quality and Safety Improvement Methods, Health Information Systems, and Patient Centeredness. Three levels of competence, and standards within each level and area, are defined in Table 1. Standards use carefully selected action verbs to reflect educational goals for hospitalists at each level.24 The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist who is prepared to meaningfully engage and collaborate with his or her institution in quality improvement efforts. A hospitalist at this level may also lead uncomplicated improvement projects for his or her medical center and/or hospital medicine group. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or hospital medicine group. Many hospitalists at this level will have, or will be prepared to have, leadership positions in quality and patient safety at their institutions. Advanced-level hospitalists will also have the expertise to teach and mentor other individuals in their quality improvement efforts.

Table 1. Hospitalist Competencies in Healthcare Quality and Patient Safety

NOTE: The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist prepared to meaningfully collaborate with his or her institution in quality improvement efforts. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or group. Abbreviations: CMS, Centers for Medicare and Medicaid Services; PDSA, Plan-Do-Study-Act.

Quality Measurement and Stakeholder Interests
Basic:
- Define structure, process, and outcome measures
- Define stakeholders and understand their interests related to healthcare quality
- Identify measures as defined by stakeholders (CMS, Leapfrog, etc)
- Describe potential unintended consequences of quality measurement and incentive programs
Intermediate:
- Compare and contrast relative benefits of using one type of measure vs another
- Explain measures as defined by stakeholders (CMS, Leapfrog, etc)
- Appreciate variation in quality and utilization performance
- Avoid unintended consequences of quality measurement and incentive programs
Advanced:
- Anticipate and respond to stakeholders' needs and interests
- Anticipate and respond to changes in quality measures and incentive programs
- Lead efforts to reduce variation in care delivery (see also Quality and Safety Improvement Methods and Tools)

Data Acquisition and Interpretation (illustrated in the sketch following this table)
Basic:
- Interpret simple statistical methods to compare populations within a sample (chi-square, t tests, etc)
- Define basic terms used to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc)
- Summarize basic principles of statistical process control
- Interpret data displayed in Pareto and Control Charts
Intermediate:
- Describe sources of data for quality measurement
- Identify potential pitfalls in administrative data
- Explain variation in data
- Summarize basic survey techniques (including methods to maximize response, minimize bias, and use of ordinal response scales)
- Use appropriate terms to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc)
Advanced:
- Acquire data from internal and external sources
- Create visual representations of data (Bar, Pareto, and Control Charts)
- Use simple statistical methods to compare populations within a sample (chi-square, t tests, etc)
- Administer and interpret a survey

Organizational Knowledge and Leadership Skills
Basic:
- Describe the organizational structure of one's institution
- Define leaders within the organization and describe their roles
- Exemplify the importance of leading by example
- Effectively communicate quality or safety issues identified during routine patient care to the appropriate parties
Intermediate:
- Define interests of internal and external stakeholders
- Collaborate as an effective team member of a quality improvement project
- Explain principles of change management and how it can positively or negatively impact quality improvement project implementation
Advanced:
- Effectively negotiate with stakeholders
- Assemble a quality improvement project team and effectively lead meetings (setting agendas, holding members accountable, etc)
- Motivate change and create vision for the ideal state
- Communicate effectively in a variety of settings (leading a meeting, public speaking, etc)
- Serve as a resource and/or mentor for less-experienced team members

Patient Safety Principles
Basic:
- Identify potential sources of error encountered during routine patient care
- Compare and contrast medical error with adverse event
- Describe how the systems approach to medical error is more productive than assigning individual blame
- Differentiate among types of error (knowledge/judgment vs systems vs procedural/technical; latent vs active)
- Explain the role that incident reporting plays in quality improvement efforts and how reporting can foster a culture of safety
- Describe principles of medical error disclosure
Intermediate:
- Compare methods to measure errors and adverse events, including administrative data analysis, chart review, and incident reporting systems
- Identify and explain how human factors can contribute to medical errors
- Know the difference between a strong vs a weak action plan for improvement (ie, a brief educational intervention is weak; skills training with deliberate practice or physical changes are stronger)
Advanced:
- Lead efforts to appropriately measure medical error and/or adverse events
- Lead efforts to redesign systems to reduce errors from occurring; this may include facilitation of a hospital, departmental, or divisional Root Cause Analysis
- Lead efforts to advance the culture of patient safety in the hospital

Teamwork and Communication
Basic:
- Explain how poor teamwork and communication failures contribute to adverse events
- Identify the potential for errors during transitions within and between healthcare settings (handoffs, transfers, discharge)
Intermediate:
- Collaborate on administration and interpretation of teamwork and safety culture measures
- Describe the principles of effective teamwork and identify behaviors consistent with effective teamwork
- Identify deficiencies in transitions within and between healthcare settings (handoffs, transfers, discharge)
Advanced:
- Lead efforts to improve teamwork and safety culture
- Lead efforts to improve teamwork in specific settings (intensive care, medical-surgical unit, etc)
- Successfully improve the safety of transitions within and between healthcare settings (handoffs, transfers, discharge)

Quality and Safety Improvement Methods and Tools
Basic:
- Define the quality improvement methods used and the infrastructure in place at one's hospital
- Summarize the basic principles and use of Root Cause Analysis as a tool to evaluate medical error
Intermediate:
- Compare and contrast various quality improvement methods, including six sigma, lean, and PDSA
- Collaborate on a quality improvement project using six sigma, lean, or PDSA
- Describe and collaborate on Failure Mode and Effects Analysis
- Actively participate in a Root Cause Analysis
Advanced:
- Lead a quality improvement project using six sigma, lean, or PDSA methodology
- Use high-level process mapping, fishbone diagrams, etc, to identify areas of opportunity when evaluating a process
- Lead the development and implementation of clinical protocols to standardize care delivery when appropriate
- Conduct Failure Mode and Effects Analysis
- Conduct Root Cause Analysis

Health Information Systems
Basic:
- Identify the potential for information systems to reduce as well as contribute to medical error
- Describe how information systems fit into provider workflow and care delivery
Intermediate:
- Define types of clinical decision support
- Collaborate on the design of health information systems
Advanced:
- Lead or co-lead efforts to leverage information systems in quality measurement
- Lead or co-lead efforts to leverage information systems to reduce error and/or improve delivery of effective care
- Anticipate and prevent unintended consequences of implementation or revision of information systems
- Lead or co-lead efforts to leverage clinical decision support to improve quality and safety

Patient Centeredness
Basic:
- Explain the clinical benefits of a patient-centered approach
- Identify system barriers to effective and safe care from the patient's perspective
- Describe the value of patient satisfaction surveys and of patient and family partnership in care
Intermediate:
- Explain benefits and potential limitations of patient satisfaction surveys
- Identify clinical areas with suboptimal efficiency and/or timeliness from the patient's perspective
- Promote patient and caregiver education, including use of effective education tools
Advanced:
- Interpret data from patient satisfaction surveys and lead efforts to improve patient satisfaction
- Lead efforts to reduce inefficiency and/or improve timeliness from the patient's perspective
- Lead efforts to eliminate system barriers to effective and safe care from the patient's perspective
- Lead efforts to improve patient and caregiver education, including development or implementation of effective education tools
- Lead efforts to actively involve patients and families in the redesign of healthcare delivery systems and processes
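
To ground the Data Acquisition and Interpretation standards above, the short sketch below shows two of those skills in practice: comparing measure adherence between two populations with a chi-square test, and deriving the control limits of a p-chart used in statistical process control. This is a minimal illustration, not part of the SHM framework; the counts and monthly rates are invented, and the numpy and scipy libraries are assumed to be available.

```python
# Illustrative sketch with invented data: compare adherence between two units
# (chi-square test) and compute 3-sigma p-chart control limits.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = units, columns = [measure met, measure missed].
observed = np.array([[88, 12],   # Unit A: 88 of 100 discharges met the measure
                     [72, 28]])  # Unit B: 72 of 100 discharges met the measure

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# p-chart for monthly adherence, assuming roughly 100 eligible cases per month.
monthly_rates = np.array([0.84, 0.79, 0.88, 0.81, 0.86, 0.90])
n_per_month = 100
p_bar = monthly_rates.mean()                        # center line
sigma = np.sqrt(p_bar * (1 - p_bar) / n_per_month)  # binomial standard error
ucl = p_bar + 3 * sigma
lcl = max(p_bar - 3 * sigma, 0.0)
print(f"center = {p_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```

A monthly point falling outside these limits signals special-cause variation, the kind of finding the advanced standards ask hospitalists to investigate and act on.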

Recommended Use of the Competencies

The HQPS Competencies provide a framework for curricula and other professional development experiences in healthcare quality and patient safety. We recommend a step-wise approach to curriculum development that includes conducting a targeted needs assessment, defining goals and specific learning objectives, and evaluating the curriculum.25 The HQPS Competencies can be used at each step and provide educational targets for learners across a range of interest and experience.
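
For groups that track curricula electronically, one optional way to operationalize this step-wise approach is to encode the framework as structured data and pull level-appropriate objectives during planning. The sketch below is a minimal illustration under that assumption: the standards quoted come from Table 1, while the data structure and helper function are hypothetical conveniences, not part of the SHM framework.

```python
# Minimal sketch: represent part of the HQPS framework as structured data and
# select learning objectives for a target level during curriculum planning.
from typing import Dict, List

HQPS: Dict[str, Dict[str, List[str]]] = {
    "Patient Safety Principles": {
        "basic": ["Compare and contrast medical error with adverse event"],
        "intermediate": ["Identify and explain how human factors can "
                         "contribute to medical errors"],
        "advanced": ["Lead efforts to advance the culture of patient safety "
                     "in the hospital"],
    },
    "Quality and Safety Improvement Methods and Tools": {
        "basic": ["Summarize the basic principles and use of Root Cause "
                  "Analysis as a tool to evaluate medical error"],
        "intermediate": ["Collaborate on a quality improvement project using "
                         "six sigma, lean, or PDSA"],
        "advanced": ["Lead a quality improvement project using six sigma, "
                     "lean, or PDSA methodology"],
    },
}

def objectives_for(level: str) -> List[str]:
    """Collect every standard at the requested level across competency areas."""
    return [s for area in HQPS.values() for s in area.get(level, [])]

for objective in objectives_for("intermediate"):
    print("-", objective)
```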

Professional Development

Since residency programs historically have not trained their graduates to achieve a basic level of competence, practicing hospitalists will need to seek out professional development opportunities. Existing educational opportunities include the Quality Track sessions during the SHM Annual Meeting and the SHM Quality Improvement Pre-Course. Hospitalist leaders are currently using the HQPS Competencies to review and revise annual meeting and pre-course objectives and content in an effort to meet the expected level of competence for SHM members. Similarly, local SHM chapter and regional hospital medicine leaders should look to the competencies to help select topics and objectives for future presentations. Additionally, the SHM Web site offers tools to develop skills, including a resource room and quality improvement primer.26 Mentored-implementation programs, supported by SHM, can help hospitalists acquire more advanced experiential training in quality improvement.

New educational opportunities are being developed, including a comprehensive set of Internet-based modules designed to help practicing hospitalists achieve a basic level of competence. Hospitalists will be able to earn continuing medical education (CME) credit upon completion of individual modules. Plans are underway to provide Certification in Hospital Quality and Patient Safety, reflecting an advanced level of competence, upon completion of the entire set and demonstration of knowledge and skill application through an approved quality improvement project. The certification process will leverage the success of the SHM Leadership Academies and Mentored Implementation projects to help hospitalists apply their new skills in real-world settings.

HQPS Competencies and Focused Practice in Hospital Medicine

Recently, the American Board of Internal Medicine (ABIM) has recognized the field of hospital medicine by developing a new program that provides hospitalists the opportunity to earn Maintenance of Certification (MOC) in Internal Medicine with a Focused Practice in Hospital Medicine.27 Appropriately, hospital quality and patient safety content is included among the knowledge questions on the secure exam, and completion of a practice improvement module (commonly known as a PIM) is required for certification. The SHM Education Committee has developed a Self-Evaluation of Medical Knowledge module related to hospital quality and patient safety for use in the MOC process. ABIM recertification with Focused Practice in Hospital Medicine is an important and visible step for the Hospital Medicine movement; the content of both the secure exam and the MOC process reaffirms the notion that the acquisition of knowledge, skills, and attitudes in hospital quality and patient safety is essential to the practice of hospital medicine.

Medical Education

Because teaching hospitalists frequently serve in important roles as educators and physician leaders in quality improvement, they are often responsible for medical student and resident training in healthcare quality and patient safety. Medical schools and residency programs have struggled to integrate healthcare quality and patient safety into their curricula.11, 12, 28 Hospitalists can play a major role in academic medical centers by helping to develop curricular materials and evaluations related to healthcare quality. Though intended primarily for future and current hospitalists, the HQPS Competencies and standards for the basic level may be adapted to provide educational targets for many learners in undergraduate and graduate medical education. Teaching hospitalists may use these standards to evaluate current educational efforts and design new curricula in collaboration with their medical school and residency program leaders.

Beyond the basic level of training in healthcare quality required for all, many residents will benefit from more advanced training experiences, including opportunities to apply knowledge and develop skills related to quality improvement. A recent report from the ACGME concluded that role models and mentors were essential for engaging residents in quality improvement efforts.29 Hospitalists are ideally suited to serve as role models during residents' experiential learning opportunities related to hospital quality. Several residency programs have begun to implement hospitalist tracks13 and quality improvement rotations.30-32 Additionally, some academic medical centers have begun to develop and offer fellowship training in Hospital Medicine.33 These hospitalist-led educational programs are an ideal opportunity to teach the intermediate and advanced components of healthcare quality and patient safety to residents and fellows who wish to incorporate activity or leadership in quality improvement and patient safety science into their generalist or subspecialty careers. Teaching hospitalists should use the HQPS competency standards to define learning objectives for trainees at this stage of development.

To address the enormous educational needs in quality and safety for future physicians, a cadre of expert teachers in quality and safety will need to be developed. In collaboration with the Alliance for Academic Internal Medicine (AAIM), SHM is developing a Quality and Safety Educators Academy, which will target academic hospitalists and other medical educators interested in developing advanced skills in quality improvement and patient safety education.

Assessment of Competence

An essential component of a rigorous faculty development program or medical education initiative is the assessment of whether these endeavors are achieving their stated aims. Published literature provides examples of useful assessment methods applicable to the HQPS Competencies. Knowledge in several areas of HQPS competence may be assessed with the use of multiple choice tests.34, 35 Knowledge of quality improvement methods may be assessed using the Quality Improvement Knowledge Application Tool (QIKAT), an instrument in which the learner responds to each of 3 scenarios with an aim, outcome and process measures, and ideas for changes that may result in improved performance.36 Teamwork and communication skills may be assessed using 360-degree evaluations37-39 and direct observation using behaviorally anchored rating scales.40-43 Objective structured clinical examinations have been used to assess knowledge and skills related to patient safety principles.44, 45 Notably, few studies have rigorously assessed the validity and reliability of tools designed to evaluate competence related to healthcare quality.46 Additionally, to our knowledge, no prior research has evaluated assessment specifically for hospitalists. Thus, the development and validation of new assessment tools based on the HQPS Competencies, for learners at each level, is a crucial next step in the educational process. Additionally, evaluation of educational initiatives should include analyses of clinical benefit, as the ultimate goal of these efforts is to improve patient care.47, 48
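
For locally administered instruments such as teamwork and safety-culture surveys, a common first step before interpreting scores is an internal-consistency check. The sketch below is a minimal illustration, not a method drawn from the cited studies: it computes Cronbach's alpha for an invented 4-item Likert-scale response matrix, assuming numpy is available.

```python
# Minimal sketch: Cronbach's alpha for a hypothetical 4-item safety-culture
# survey scored 1-5; the response matrix is invented for illustration.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents, columns = survey items."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 5, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
# Alpha near 0.7 or above is conventionally read as acceptable internal
# consistency for group-level comparisons.
```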

Conclusion

Hospitalists are poised to have a tremendous impact on improving the quality of care for hospitalized patients. The lack of training in quality improvement in the traditional medical education programs in which most current hospitalists were trained can be overcome through appropriate use of the HQPS Competencies. Formal incorporation of the HQPS Competencies into professional development programs, as well as into innovative educational initiatives and curricula, will help equip current and future generations of hospitalists with the skills they need to succeed.

References
  1. Crossing the Quality Chasm: A New Health System for the Twenty-first Century. Washington, DC: Institute of Medicine; 2001.
  2. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  3. Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868-1874.
  4. Hospital Compare—A quality tool provided by Medicare. Available at: http://www.hospitalcompare.hhs.gov/. Accessed April 23, 2010.
  5. The Leapfrog Group: Hospital Quality Ratings. Available at: http://www.leapfroggroup.org/cp. Accessed April 30, 2010.
  6. Why Not the Best? A Healthcare Quality Improvement Resource. Available at: http://www.whynotthebest.org/. Accessed April 30, 2010.
  7. The Joint Commission: Facts about ORYX for hospitals (National Hospital Quality Measures). Available at: http://www.jointcommission.org/accreditationprograms/hospitals/oryx/oryx_facts.htm. Accessed August 19, 2010.
  8. The Joint Commission: National Patient Safety Goals. Available at: http://www.jointcommission.org/patientsafety/nationalpatientsafetygoals/. Accessed August 9, 2010.
  9. Hospital Acquired Conditions: Overview. Available at: http://www.cms.gov/HospitalAcqCond/01_Overview.asp. Accessed April 30, 2010.
  10. Report to Congress: Plan to Implement a Medicare Hospital Value-based Purchasing Program. Washington, DC: US Department of Health and Human Services, Center for Medicare and Medicaid Services; 2007.
  11. Unmet Needs: Teaching Physicians to Provide Safe Patient Care. Boston, MA: Lucian Leape Institute at the National Patient Safety Foundation; 2010.
  12. Alper E, Rosenberg EI, O'Brien KE, Fischer M, Durning SJ. Patient safety education at U.S. and Canadian medical schools: results from the 2006 Clerkship Directors in Internal Medicine survey. Acad Med. 2009;84(12):1672-1676.
  13. Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med. 2008;23(7):1110-1115.
  14. Plauth WH, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254.
  15. Fitzgibbons JP, Bordley DR, Berkowitz LR, Miller BW, Henderson MC. Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine. Ann Intern Med. 2006;144(12):920-926.
  16. Weinberger SE, Smith LG, Collier VU. Redesigning training for internal medicine. Ann Intern Med. 2006;144(12):927-932.
  17. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(1):48-56.
  18. Intermountain Healthcare. 20-Day Course for Executives 2001.
  19. Kern DE, Thomas PA, Bass EB, Howard DM. Curriculum Development for Medical Education: A Six-step Approach. Baltimore, MD: Johns Hopkins Press; 1998.
  20. Society of Hospital Medicine Quality Improvement Basics. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/QualityImprovement/QIPrimer/QI_Primer_Landing_Pa.htm. Accessed June 4, 2010.
  21. American Board of Internal Medicine: Questions and Answers Regarding ABIM's Maintenance of Certification in Internal Medicine With a Focused Practice in Hospital Medicine Program. Available at: http://www.abim.org/news/news/focused-practice-hospital-medicine-qa.aspx. Accessed August 9, 2010.
  22. Heard JK, Allen RM, Clardy J. Assessing the needs of residency program directors to meet the ACGME general competencies. Acad Med. 2002;77(7):750.
  23. Philibert I. Accreditation Council for Graduate Medical Education and Institute for Healthcare Improvement 90-Day Project. Involving Residents in Quality Improvement: Contrasting "Top-Down" and "Bottom-Up" Approaches. Chicago, IL: ACGME; 2008.
  24. Oyler J, Vinci L, Arora V, Johnson J. Teaching internal medicine residents quality improvement techniques using the ABIM's practice improvement modules. J Gen Intern Med. 2008;23(7):927-930.
  25. Peters AS, Kimura J, Ladden MD, March E, Moore GT. A self-instructional model to teach systems-based practice and practice-based learning and improvement. J Gen Intern Med. 2008;23(7):931-936.
  26. Weingart SN, Tess A, Driver J, Aronson MD, Sands K. Creating a quality improvement elective for medical house officers. J Gen Intern Med. 2004;19(8):861-867.
  27. Ranji SR, Rosenman DJ, Amin AN, Kripalani S. Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1-e7.
  28. Kerfoot BP, Conlin PR, Travison T, McMahon GT. Web-based education in systems-based practice: a randomized trial. Arch Intern Med. 2007;167(4):361-366.
  29. Peters AS, Kimura J, Ladden MD, March E, Moore GT. A self-instructional model to teach systems-based practice and practice-based learning and improvement. J Gen Intern Med. 2008;23(7):931-936.
  30. Morrison L, Headrick L, Ogrinc G, Foster T. The quality improvement knowledge application tool: an instrument to assess knowledge application in practice-based learning and improvement. J Gen Intern Med. 2003;18(suppl 1):250.
  31. Brinkman WB, Geraghty SR, Lanphear BP, et al. Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial. Arch Pediatr Adolesc Med. 2007;161(1):44-49.
  32. Massagli TL, Carline JD. Reliability of a 360-degree evaluation to assess resident competence. Am J Phys Med Rehabil. 2007;86(10):845-852.
  33. Musick DW, McDowell SM, Clark N, Salcido R. Pilot study of a 360-degree assessment instrument for physical medicine and rehabilitation residency programs. Am J Phys Med Rehabil. 2003;82(5):394-402.
  34. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists' non-technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90(5):580-588.
  35. Malec JF, Torsher LC, Dunn WF, et al. The Mayo high performance teamwork scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc. 2007;2(1):4-10.
  36. Sevdalis N, Davis R, Koutantji M, Undre S, Darzi A, Vincent CA. Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg. 2008;196(2):184-190.
  37. Sevdalis N, Lyons M, Healey AN, Undre S, Darzi A, Vincent CA. Observational teamwork assessment for surgery: construct validation with expert versus novice raters. Ann Surg. 2009;249(6):1047-1051.
  38. Singh R, Singh A, Fish R, McLean D, Anderson DR, Singh G. A patient safety objective structured clinical examination. J Patient Saf. 2009;5(2):55-60.
  39. Varkey P, Natt N. The Objective Structured Clinical Examination as an educational tool in patient safety. Jt Comm J Qual Patient Saf. 2007;33(1):48-53.
  40. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84(3):301-309.
  41. Boonyasai RT, Windish DM, Chakraborti C, Feldman LS, Rubin HR, Bass EB. Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA. 2007;298(9):1023-1037.
  42. Windish DM, Reed DA, Boonyasai RT, Chakraborti C, Bass EB. Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med. 2009;84(12):1677-1692.
Issue
Journal of Hospital Medicine - 6(9)
Page Number
530-536
Display Headline
Hospital quality and patient safety competencies: Development, description, and recommendations for use
Article Source
Copyright © 2011 Society of Hospital Medicine
Correspondence Location
Division of Hospital Medicine, 259 E Erie St, Suite 475, Chicago, IL 60611