Impact of Clinical Specialty on Attitudes Regarding Overuse of Inpatient Laboratory Testing
Routine laboratory testing in hospitalized patients is common, with a high prevalence of unnecessary tests that do not contribute to patient management.1 Excessive laboratory testing of hospitalized patients can contribute to anemia2 and may cause patient discomfort, additional unnecessary testing resulting from false positive results, and higher out-of-pocket patient costs. Excessive testing can impact hospital budgets both directly (though direct costs are often low) and indirectly through costly downstream services and prolonged hospital stay.3 As part of the American Board of Internal Medicine (ABIM) Foundation’s Choosing Wisely initiative, several professional societies have recommended against routine laboratory testing of hospitalized adult patients.4
Excessive inpatient laboratory testing has been documented mostly among adult internal medicine (IM) patients, with studies of drivers of unnecessary testing and efforts to reduce it conducted in IM settings.5,6 Attitudes toward other issues related to testing overuse differ by specialty7 and are likely to similarly vary with regard to unnecessary laboratory testing. Understanding differences in attitudes by clinical specialty is critical for framing tailored approaches to reducing inappropriate care.
We performed a cross-sectional survey of a diverse group of hospital clinicians to describe attitudes and beliefs regarding laboratory testing and its overuse across clinical specialties (eg, medical, surgical, and pediatric). We hypothesized that attitudes toward the need for testing would differ across specialties.
METHODS
Survey Development and Administration
The study was conducted at Memorial Sloan Kettering Cancer Center, a tertiary academic cancer hospital in New York City. The 12-item survey was adapted from a previously administered but not formally validated survey (Online-only Appendix).5,8 The survey was pilot tested with 4 physicians, 3 NPs, 2 PAs, and 3 RNs and edited for content and clarity. All staff providers, including NPs, PAs, RNs, and resident, fellow, and attending MDs working in the hospital during the 2-week survey period (November 2-15, 2015), were eligible to participate and were emailed a link to the survey. The email invitation was resent 3 times during the survey period. Participants who completed the survey received a coupon for a free coffee. The study was reviewed by the Institutional Review Board and exempted from ongoing oversight.
Measures
Demographic items included clinical specialty, provider type, and gender (Online-only Appendix). The remaining survey questions included the following categories:
1. Attitudes toward laboratory testing were evaluated by 3 items about accepted norms for lab testing and 2 items about fears (Table 2). Responses to these items used a 4-point Likert scale (strongly agree to strongly disagree).
2. Drivers contributing to unnecessary testing were evaluated by presenting a list of possible contributing factors (Table 2). Responses to these items used a 3-point Likert scale (contributes a lot, contributes a little, or does not contribute).
Analysis
We used univariate statistics to describe demographics and survey responses. We used the chi-square statistic to evaluate differences in attitudes and drivers by clinical specialty. We dichotomized responses regarding attitudes toward lab testing (“strongly agree” and “somewhat agree” vs “somewhat disagree” and “strongly disagree”) and beliefs regarding contributing drivers (“contributes a lot” vs all others). We grouped clinical specialty into medical/med-oncology, surgical, pediatric, and other (gynecological, critical care, and other).
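The dichotomization and chi-square comparison described above can be sketched as follows. This is an illustrative sketch only: the table counts are invented, and the 2x2 statistic is computed directly rather than with a statistics package.

```python
# Hedged sketch of the chi-square comparison on dichotomized responses.
# The counts below are fabricated for illustration only.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (no continuity correction)."""
    a, b = table[0]
    c, d = table[1]
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    observed = [[a, b], [c, d]]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence: row total * col total / n
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = two specialty groups, columns =
# agree ("strongly"/"somewhat") vs. disagree with daily labs.
table = [[120, 60], [40, 60]]
chi2 = chi_square_2x2(table)
# Compare against the 1-df critical value at alpha = .05 (3.841).
significant = chi2 > 3.841
```

In practice this would be run once per attitude/driver item against the specialty grouping; a packaged routine such as `scipy.stats.chi2_contingency` would also report the P-value directly.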
We used logistic regression to explore the associations between attitudes/drivers and clinical specialty after adjusting for provider type, and report the overall P-value. We used pediatrics as the reference group to assess direct comparisons with each of the other specialties. We performed analyses with SAS statistical software, version 9.4 (SAS Institute, Cary, North Carolina) and considered P < .05 to be significant.
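A minimal sketch of the adjusted analysis follows, assuming dummy coding of specialty with pediatrics as the reference category and a single provider-type indicator as the covariate; the records are fabricated, and the model is fit with plain gradient ascent rather than SAS.

```python
import math

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Gradient-ascent fit of a logistic regression; returns
    coefficients [intercept, b1, ..., bk]."""
    k = len(X[0])
    beta = [0.0] * (k + 1)
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            grad[0] += yi - p
            for j, x in enumerate(xi):
                grad[j + 1] += (yi - p) * x
        beta = [b + lr * g / len(y) for b, g in zip(beta, grad)]
    return beta

# Columns: [is_medical, is_surgical, is_attending]; pediatrics is the
# omitted (reference) specialty. Outcome: endorses daily testing (1/0).
# All records are fabricated purely to make the example run.
X = [[1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 0, 0], [1, 0, 0],
     [0, 1, 1], [0, 1, 1], [0, 1, 0], [0, 1, 0],
     [0, 0, 1], [0, 0, 1], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
y = [1, 1, 0, 1, 0,
     1, 0, 0, 1,
     0, 1, 0, 0, 1]
beta = fit_logistic(X, y)
# Exponentiated specialty coefficients are odds ratios vs. pediatrics,
# adjusted for provider type.
or_medical = math.exp(beta[1])
```

The same structure extends to the full specialty grouping (medical/med-oncology, surgical, pediatric, other) by adding one dummy column per non-reference category.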
RESULTS
Among 1580 eligible participants, 837 (53%) completed surveys. Attending MD response rates ranged from 61% (surgical) to 86% (pediatric); rates were 59% for all trainees, 72% for PAs, and 46% for RNs and NPs combined. Given privacy concerns, we were unable to collect detailed response rate information or any information about nonrespondents. The demographics are shown in Table 1.
Attitudes toward Laboratory Testing
The majority of respondents agreed that hospitalized patients should get daily labs (59%) and testing on the day of discharge (52%), and that daily testing generally enhances safety (55%; Table 2). Fewer pediatric and surgical clinicians endorsed that laboratory testing should be done daily (56% and 47%, respectively) or that it enhances patient safety (46% and 47%). These differences were significant after adjusting for provider type. In addition, fewer pediatric providers endorsed the statement that daily laboratory testing helps avoid malpractice litigation. Overall, 68% of respondents agreed they would be comfortable with less testing.
Drivers Contributing to Unnecessary Laboratory Testing
Habit (94% responding “contributes a lot”) and institutional culture (89% responding “contributes a lot”) were seen as the strongest drivers of unnecessary testing (Table 2). After adjusting for provider type, significant differences were observed based on clinical specialty. In particular, pediatric specialists were less likely to endorse fear of litigation (P < .001) and more likely to endorse pressure from patients/families (P = .0003) compared with all other specialties (Table 2; odds ratios not shown).
DISCUSSION
Overuse of laboratory testing in hospitalized patients is widely recognized in IM and likely to be prevalent in other clinical specialties. Our study elucidated differences in attitudes toward unnecessary testing and self-identified drivers across specialties in a diverse group of clinical providers at an academic cancer center. We found differences based on clinical specialty, with those caring for pediatric and surgical patients less likely than others to believe that testing should be done daily and that daily testing enhances patient safety. Furthermore, comfort with less testing was highest among pediatric specialists. Habit and institutional culture were recognized broadly as the strongest drivers of laboratory testing overuse.
Our findings regarding differences based on clinical specialty are novel. Respondents caring for pediatric patients generally placed lower value on testing, and IM clinicians were the most likely to endorse daily testing and to believe that it enhances patient safety and helps avoid malpractice litigation. The difference between adult and pediatric clinicians is surprising given the fundamental similarities between these specialties.9 Although some resource use studies have described differences across specialties, none has examined differences in laboratory testing or examined the practice patterns of nonphysician clinicians across specialties.10 Prior studies have documented the impact of training location on practice,11,12 suggesting the importance of the local training culture.13 As physician personalities vary across clinical specialties,14 it is likely that culture varies as well. Specialty-specific cultures are likely to strongly influence attitudes and practice patterns and warrant further exploration.
Clinicians in our sample identified drivers of unnecessary laboratory testing that were consistent with other studies, most frequently endorsing habit, followed by culture, discomfort with not knowing, and concern that someone will ask for the results.5,15 Previous studies have focused on IM and have not included nonphysicians or compared attitudes across specialties. We found that the largest differences in drivers by specialty were related to malpractice concerns and the perception of pressure from patients or families. The low endorsement of defensive medicine among clinicians serving pediatric populations may imply that interventions to reduce unnecessary care in hospitalized children may not need to address malpractice fear. In contrast, clinicians from pediatrics identified family pressure as a greater driver of unnecessary testing. Efforts to reduce unnecessary laboratory testing in pediatrics will need to address parent expectations.
Our findings have implications for efforts to reduce unnecessary testing. Culture, identified as a key driver of testing, reflects leadership priorities, institutional history, and other factors and is difficult to specifically target. Habit, the other most-endorsed driver, is a more promising target for quality improvement interventions, particularly those addressing care processes (eg, electronic ordering). Discomfort with not knowing and fear of being asked are drivers that might be influenced by better communication about information expectations by supervising physicians and hospital administration. Lastly, education about the potential harms of excessive testing may facilitate more targeted efforts to reduce testing overuse.
Our study has important limitations. First, the cancer focus of the center may have influenced provider attitudes and practices; attitudes may differ at community centers, though important differences regarding routine laboratory testing are unlikely. Second, although our sample was large, our response rate was modest at 53%, and as low as 46% among RNs and NPs, and we have no information regarding nonrespondents. This response rate, though, was comparable to response rates seen in other large surveys.5,15 Third, our results reflect clinician self-report; perceptions of necessity and the true need for testing may vary across specialties, and the true subconscious drivers of behavior may differ. However, differences across specialties are likely to be valid even if other factors are at play. Self-assessment of unnecessary testing may also underestimate the prevalence of the problem. Finally, our findings related to drivers of unnecessary testing are descriptive rather than quantitative given the lack of validated scales.
In conclusion, we evaluated attitudes toward routine laboratory testing in hospitalized patients in clinicians across specialties and found important differences. These findings speak to the diversity of cultures of medical care even within a single institution and point to the importance of studying attitudes about overused services across clinical specialties. In particular, as medical fields beyond IM increasingly recognize the importance of reducing medical overuse both in and out of the hospital, our findings highlight the importance of elucidating specialty-specific attitudes to optimize interventions to address unnecessary testing.
Disclosures
Mr. Husain, Ms. Gennarelli, Ms. White, Mr. Masciale, and Dr. Roman have nothing to disclose. The work of Dr. Roman and Dr. Korenstein on this project was supported, in part, by a Cancer Center Support Grant from the National Cancer Institute to Memorial Sloan Kettering Cancer Center (P30 CA008748).
1. Zhi M, Ding EL, Theisen-Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: a 15-year meta-analysis. PloS One. 2013;8(11):e78962. DOI: 10.1371/journal.pone.0078962. PubMed
2. Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med. 2005;20(6):520-524. DOI: 10.1111/j.1525-1497.2005.0094.x. PubMed
3. Eaton KP, Levy K, Soong C, et al. Evidence-based guidelines to eliminate repetitive laboratory testing. JAMA Intern Med. 2017;177(12):1833-1839. DOI: 10.1001/jamainternmed.2017.5152. PubMed
4. Choosing wisely. http://www.choosingwisely.org/resources/. Accessed November 21, 2017.
5. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872. DOI: 10.1002/jhm.2645. PubMed
6. Thakkar RN, Kim D, Knight AM, Riedel S, Vaidya D, Wright SM. Impact of an educational intervention on the frequency of daily blood test orders for hospitalized patients. Am J Clin Pathol. 2015;143(3):393-397. DOI: 10.1309/AJCPJS4EEM7UAUBV. PubMed
7. Sheeler RD, Mundell T, Hurst SA, et al. Self-reported rationing behavior among US physicians: a national survey. J Gen Intern Med. 2016;31(12):1444-1451. DOI: 10.1007/s11606-016-3756-5. PubMed
8. Roman BR, Yang A, Masciale J, Korenstein D. Association of attitudes regarding overuse of inpatient laboratory testing with health care provider type. JAMA Intern Med. 2017;177(8):1205-1207. DOI: 10.1001/jamainternmed.2017.1634. PubMed
9. Schatz IJ, Realini JP, Charney E. Family practice, internal medicine, and pediatrics as partners in the education of generalists. Acad Med. 1996;71(1):35-39. PubMed
10. Johnson RE, Freeborn DK, Mullooly JP. Physicians’ use of laboratory, radiology, and drugs in a prepaid group practice HMO. Health Serv Res. 1985;20(5):525-547. PubMed
11. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393. DOI: 10.1001/jama.2014.15973. PubMed
12. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648. DOI: 10.1001/jamainternmed.2014.3337. PubMed
13. Smith CD, Korenstein D. Harnessing the power of peer pressure to reduce health care waste and improve clinical outcomes. Mayo Clin Proc. 2015;90(3):311-312. DOI: 10.1017/ice.2015.136. PubMed
14. Vaidya NA, Sierles FS, Raida MD, Fakhoury FJ, Przybeck TR, Cloninger CR. Relationship between specialty choice and medical student temperament and character assessed with Cloninger Inventory. Teach Learn Med. 2004;16(2):150-156. DOI: 10.1207/s15328015tlm1602_6. PubMed
15. Studdert DM, Mello MM, Sage WM, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA. 2005;293(21):2609-2617. DOI: 10.1001/jama.293.21.2609. PubMed
Routine laboratory testing in hospitalized patients is common, with a high prevalence of unnecessary tests that do not contribute to patient management.1 Excessive laboratory testing of hospitalized patients can contribute to anemia2 and may cause patient discomfort, additional unnecessary testing resulting from false positive results, and higher out-of-pocket patient costs. Excessive testing can impact hospital budgets both directly (though direct costs are often low) and indirectly through costly downstream services and prolonged hospital stay.3 As part of the American Board of Internal Medicine (ABIM) Foundation’s Choosing Wisely initiative, several professional societies have recommended against routine laboratory testing of hospitalized adult patients.4
Excessive inpatient laboratory testing has been documented mostly among adult internal medicine (IM) patients with studies of drivers of unnecessary testing and efforts to reduce it conducted in IM settings.5, 6 Attitudes toward other issues related to testing overuse differ by specialty7 and are likely to similarly vary with regard to unnecessary laboratory testing. Understanding differences in attitudes by clinical specialty is critical for framing tailored approaches to reducing inappropriate care.
We performed a cross-sectional survey of a diverse group of hospital clinicians to describe attitudes and beliefs regarding laboratory testing and its overuse across clinical specialties (eg, medical, surgical, and pediatric). We hypothesized that attitudes toward the need for testing would differ across specialties.
METHODS
Survey Development and Administration
The study was conducted at Memorial Sloan Kettering Cancer Center, a tertiary academic cancer hospital in New York City. The 12-item survey was adopted from a previously administered but not formally validated survey (Online-only Appendix).5,8 The survey was pilot tested with 4 physicians, 3 NPs, 2 PAs, and 3 RNs and edited for content and clarity. All staff providers including NPs, PAs, RNs, and resident, fellow, and attending MDs working in the hospital during the 2-week survey period (November 2-15, 2015) were eligible to participate and were emailed a link to the survey. The email invitation was resent 3 times during the survey period. Participants who completed the survey received a coupon for a free coffee. The study was reviewed by the Institutional Review Board and exempted from ongoing oversight.
Measures
Demographic items included clinical specialty, provider type, and gender (Online-only Appendix). The remaining survey questions included the following categories:
1. Attitudes toward laboratory testing were evaluated by 3 items about accepted norms for lab testing and 2 items about fears (Table 2). Responses to these items used a 4-point Likert scale (strongly agree to strongly disagree).
2. Drivers contributing to unnecessary testing were evaluated by presenting a list of possible contributing factors (Table 2). Responses to these items used a 3-point Likert scale (contributes a lot, contributes a little, or does not contribute).
Analysis
We used univariate statistics to describe demographics and survey responses. We used the chi-square statistic to evaluate differences in attitudes and drivers by clinical specialty. We dichotomized responses regarding attitudes toward lab testing (“strongly agree” and “somewhat agree” vs. “somewhat disagree” and “strongly disagree.”) and beliefs regarding contributing drivers (“contributes a lot” vs all others). We grouped clinical specialty into medical/med-oncology, surgical, pediatric, and other (gynecological, critical care, and other).
We used logistic regression to explore the associations between attitudes/drivers and clinical specialty after adjusting for provider type, and report the overall P-value. We used pediatrics as the reference group to assess direct comparisons with each of the other specialties. We performed analyses with SAS statistical software, version 9.4 (SAS Institute, Cary, North Carolina) and considered P < .05 to be significant.
RESULTS
Among 1580 eligible participants, 837 (53%) completed surveys. Attending MD response rates ranged between 61% (surgical) to 86% (pediatric); rates were 59% for all trainees, 72% for PAs and 46% for RNs and NPs combined. Given privacy concerns, we were unable to collect detailed response rate information or any information about nonrespondents. The demographics are shown in Table 1.
Attitudes toward Laboratory Testing
The majority of respondents agreed that hospitalized patients should get daily labs (59%), testing on the discharge day (52%), and that daily testing generally enhances safety (55%; Table 2). Fewer pediatric and surgical clinicians endorsed that laboratory testing should be done daily (56% and 47% respectively) and enhances patient safety (46% and 47%). These differences were significant after adjusting for provider type. In addition, fewer pediatric providers endorsed the statement that daily laboratory testing helps avoid malpractice litigation. Overall, 68% of respondents agreed they would be comfortable with less testing.
Drivers Contributing to Unnecessary Laboratory Testing
The strongest drivers of unnecessary testing were seen as habit (94% responding “contributes a lot”) and institutional culture (89% responding “contributes a lot”; Table 2). After adjusting for provider type, significant differences were observed based on clinical specialty. In particular, pediatric specialists were less likely to endorse fear of litigation (P < .001) and more likely to endorse pressure from patient/family (P = .0003) compared to all other specialties (Table 2, odd ratios not shown).
DISCUSSION
Overuse of laboratory testing in hospitalized patients is widely recognized in IM and likely to be prevalent in other clinical specialties. Our study elucidated differences in attitudes toward unnecessary testing and self-identified drivers across specialties in a diverse group of clinical providers at an academic cancer center. We found differences based on clinical specialty, with those caring for pediatric and surgical patients less likely than others to believe that testing should be done daily and that daily testing enhances patient safety. Furthermore, comfort with less testing was highest among pediatric specialists. Habit and institutional culture were recognized broadly as the strongest drivers of laboratory testing overuse.
Our findings regarding differences based on clinical specialty are novel. Respondents caring for pediatric patients generally placed lower value on testing, and IM clinicians were the most likely to endorse daily testing and to believe that it enhances patient safety and helps avoid malpractice litigation. The difference between adult and pediatric clinicians is surprising given the fundamental similarities between these specialties.9 Although some resource use studies have described differences across specialties, none has examined differences in laboratory testing or examined the practice patterns of clinicians who are not physicians across specialties.10 Prior studies have documented the impact of training location on practice11,12, suggesting the importance of the local training culture.13 As physician personalities vary across clinical specialties14 it is likely that culture varies as well. Specialty-specific cultures are likely to strongly influence attitudes and practice patterns and warrant further exploration.
Clinicians in our sample identified drivers of unnecessary laboratory testing that were consistent with other studies, most frequently endorsing habit, followed by culture, discomfort with not knowing, and concern that someone will ask for the results.5,15 Previous studies have focused on IM and have not included nonphysicians or compared attitudes across specialties. We found that the largest differences in drivers by specialty were related to malpractice concerns and the perception of pressure from patients or families. The low endorsement of defensive medicine among clinicians serving pediatric populations may imply that interventions to reduce unnecessary care in hospitalized children may not need to address malpractice fear. In contrast, clinicians from pediatrics identified family pressure as a greater driver of unnecessary testing. Efforts to reduce unnecessary laboratory testing in pediatrics will need to address parent expectations.
Our findings have implications for efforts to reduce unnecessary testing. Culture, identified as a key driver of testing, reflects leadership priorities, institutional history, and other factors and is difficult to specifically target. Habit, the other most-endorsed driver, is a more promising target for quality improvement interventions, particularly those addressing care processes (eg, electronic ordering). Discomfort with not knowing and fear of being asked are drivers that might be influenced by better communication about information expectations by supervising physicians and hospital administration. Lastly, education about the potential harms of excessive testing may facilitate more targeted efforts to reduce testing overuse.
Our study has important limitations. The cancer focus of the center may have influenced provider attitudes and practices. Attitudes may differ at community centers, though important differences regarding routine laboratory testing are unlikely. Second, although our sample was large, our response rate was modest at 53% and as low as 46% among RNs and NPs and we have no information regarding nonresponders. This response rate, though, was comparable to response rates seen in other large surveys.5,15 In addition, our results reflect clinician self-report; perceptions of necessity and the true need for testing may vary across specialties and the true subconscious drivers of behavior may differ. However, differences across specialties are likely to be valid even if there are other factors at play. Self assessment of unnecessary testing may also underestimate prevalence of the problem. Finally, our findings related to drivers of unnecessary testing are descriptive rather than quantitative given the lack of validated scales.
In conclusion, we evaluated attitudes toward routine laboratory testing in hospitalized patients in clinicians across specialties and found important differences. These findings speak to the diversity of cultures of medical care even within a single institution and point to the importance of studying attitudes about overused services across clinical specialties. In particular, as medical fields beyond IM increasingly recognize the importance of reducing medical overuse both in and out of the hospital, our findings highlight the importance of elucidating specialty-specific attitudes to optimize interventions to address unnecessary testing.
Disclosures
Mr. Husain, Ms. Gennarelli, Ms. White4, Mr. Masciale, MA5, and Dr. Roman, MD, have nothing to disclose. The work of Dr. Roman and Dr. Korenstein on this project was supported, in part, by a Cancer Center Support Grant from the National Cancer Institute to Memorial Sloan Kettering Cancer Center (P30 CA008748)
Routine laboratory testing in hospitalized patients is common, with a high prevalence of unnecessary tests that do not contribute to patient management.1 Excessive laboratory testing of hospitalized patients can contribute to anemia2 and may cause patient discomfort, additional unnecessary testing resulting from false positive results, and higher out-of-pocket patient costs. Excessive testing can impact hospital budgets both directly (though direct costs are often low) and indirectly through costly downstream services and prolonged hospital stay.3 As part of the American Board of Internal Medicine (ABIM) Foundation’s Choosing Wisely initiative, several professional societies have recommended against routine laboratory testing of hospitalized adult patients.4
Excessive inpatient laboratory testing has been documented mostly among adult internal medicine (IM) patients with studies of drivers of unnecessary testing and efforts to reduce it conducted in IM settings.5, 6 Attitudes toward other issues related to testing overuse differ by specialty7 and are likely to similarly vary with regard to unnecessary laboratory testing. Understanding differences in attitudes by clinical specialty is critical for framing tailored approaches to reducing inappropriate care.
We performed a cross-sectional survey of a diverse group of hospital clinicians to describe attitudes and beliefs regarding laboratory testing and its overuse across clinical specialties (eg, medical, surgical, and pediatric). We hypothesized that attitudes toward the need for testing would differ across specialties.
METHODS
Survey Development and Administration
The study was conducted at Memorial Sloan Kettering Cancer Center, a tertiary academic cancer hospital in New York City. The 12-item survey was adopted from a previously administered but not formally validated survey (Online-only Appendix).5,8 The survey was pilot tested with 4 physicians, 3 NPs, 2 PAs, and 3 RNs and edited for content and clarity. All staff providers including NPs, PAs, RNs, and resident, fellow, and attending MDs working in the hospital during the 2-week survey period (November 2-15, 2015) were eligible to participate and were emailed a link to the survey. The email invitation was resent 3 times during the survey period. Participants who completed the survey received a coupon for a free coffee. The study was reviewed by the Institutional Review Board and exempted from ongoing oversight.
Measures
Demographic items included clinical specialty, provider type, and gender (Online-only Appendix). The remaining survey questions included the following categories:
1. Attitudes toward laboratory testing were evaluated by 3 items about accepted norms for lab testing and 2 items about fears (Table 2). Responses to these items used a 4-point Likert scale (strongly agree to strongly disagree).
2. Drivers contributing to unnecessary testing were evaluated by presenting a list of possible contributing factors (Table 2). Responses to these items used a 3-point Likert scale (contributes a lot, contributes a little, or does not contribute).
Analysis
We used univariate statistics to describe demographics and survey responses. We used the chi-square statistic to evaluate differences in attitudes and drivers by clinical specialty. We dichotomized responses regarding attitudes toward lab testing (“strongly agree” and “somewhat agree” vs. “somewhat disagree” and “strongly disagree.”) and beliefs regarding contributing drivers (“contributes a lot” vs all others). We grouped clinical specialty into medical/med-oncology, surgical, pediatric, and other (gynecological, critical care, and other).
We used logistic regression to explore the associations between attitudes/drivers and clinical specialty after adjusting for provider type, and report the overall P-value. We used pediatrics as the reference group to assess direct comparisons with each of the other specialties. We performed analyses with SAS statistical software, version 9.4 (SAS Institute, Cary, North Carolina) and considered P < .05 to be significant.
RESULTS
Among 1580 eligible participants, 837 (53%) completed surveys. Attending MD response rates ranged between 61% (surgical) to 86% (pediatric); rates were 59% for all trainees, 72% for PAs and 46% for RNs and NPs combined. Given privacy concerns, we were unable to collect detailed response rate information or any information about nonrespondents. The demographics are shown in Table 1.
Attitudes toward Laboratory Testing
The majority of respondents agreed that hospitalized patients should get daily labs (59%), testing on the discharge day (52%), and that daily testing generally enhances safety (55%; Table 2). Fewer pediatric and surgical clinicians endorsed that laboratory testing should be done daily (56% and 47% respectively) and enhances patient safety (46% and 47%). These differences were significant after adjusting for provider type. In addition, fewer pediatric providers endorsed the statement that daily laboratory testing helps avoid malpractice litigation. Overall, 68% of respondents agreed they would be comfortable with less testing.
Drivers Contributing to Unnecessary Laboratory Testing
The strongest drivers of unnecessary testing were seen as habit (94% responding “contributes a lot”) and institutional culture (89% responding “contributes a lot”; Table 2). After adjusting for provider type, significant differences were observed based on clinical specialty. In particular, pediatric specialists were less likely to endorse fear of litigation (P < .001) and more likely to endorse pressure from patient/family (P = .0003) compared to all other specialties (Table 2, odd ratios not shown).
DISCUSSION
Overuse of laboratory testing in hospitalized patients is widely recognized in IM and likely to be prevalent in other clinical specialties. Our study elucidated differences in attitudes toward unnecessary testing and self-identified drivers across specialties in a diverse group of clinical providers at an academic cancer center. We found differences based on clinical specialty, with those caring for pediatric and surgical patients less likely than others to believe that testing should be done daily and that daily testing enhances patient safety. Furthermore, comfort with less testing was highest among pediatric specialists. Habit and institutional culture were recognized broadly as the strongest drivers of laboratory testing overuse.
Our findings regarding differences based on clinical specialty are novel. Respondents caring for pediatric patients generally placed lower value on testing, and IM clinicians were the most likely to endorse daily testing and to believe that it enhances patient safety and helps avoid malpractice litigation. The difference between adult and pediatric clinicians is surprising given the fundamental similarities between these specialties.9 Although some resource use studies have described differences across specialties, none has examined differences in laboratory testing or the practice patterns of nonphysician clinicians across specialties.10 Prior studies have documented the impact of training location on practice,11,12 suggesting the importance of the local training culture.13 As physician personalities vary across clinical specialties,14 it is likely that culture varies as well. Specialty-specific cultures are likely to strongly influence attitudes and practice patterns and warrant further exploration.
Clinicians in our sample identified drivers of unnecessary laboratory testing that were consistent with other studies, most frequently endorsing habit, followed by culture, discomfort with not knowing, and concern that someone will ask for the results.5,15 Previous studies have focused on IM and have not included nonphysicians or compared attitudes across specialties. We found that the largest differences in drivers by specialty were related to malpractice concerns and the perception of pressure from patients or families. The low endorsement of defensive medicine among clinicians serving pediatric populations suggests that interventions to reduce unnecessary care in hospitalized children may not need to address malpractice fear. In contrast, pediatric clinicians identified family pressure as a greater driver of unnecessary testing; efforts to reduce unnecessary laboratory testing in pediatrics will therefore need to address parent expectations.
Our findings have implications for efforts to reduce unnecessary testing. Culture, identified as a key driver of testing, reflects leadership priorities, institutional history, and other factors and is difficult to specifically target. Habit, the other most-endorsed driver, is a more promising target for quality improvement interventions, particularly those addressing care processes (eg, electronic ordering). Discomfort with not knowing and fear of being asked are drivers that might be influenced by better communication about information expectations by supervising physicians and hospital administration. Lastly, education about the potential harms of excessive testing may facilitate more targeted efforts to reduce testing overuse.
Our study has important limitations. First, the cancer focus of the center may have influenced provider attitudes and practices; attitudes may differ at community centers, though important differences in attitudes toward routine laboratory testing are unlikely. Second, although our sample was large, our response rate was modest (53% overall, and as low as 46% among RNs and NPs), and we have no information regarding nonrespondents. This response rate, though, was comparable to rates seen in other large surveys.5,15 In addition, our results reflect clinician self-report; both perceptions of the need for testing and the true need for testing may vary across specialties, and the true subconscious drivers of behavior may differ. However, differences across specialties are likely to be valid even if other factors are at play. Self-assessment of unnecessary testing may also underestimate the prevalence of the problem. Finally, our findings related to drivers of unnecessary testing are descriptive rather than quantitative, given the lack of validated scales.
In conclusion, we evaluated attitudes toward routine laboratory testing in hospitalized patients among clinicians across specialties and found important differences. These findings speak to the diversity of cultures of medical care even within a single institution and point to the importance of studying attitudes about overused services across clinical specialties. In particular, as medical fields beyond IM increasingly recognize the importance of reducing medical overuse both in and out of the hospital, our findings highlight the importance of elucidating specialty-specific attitudes to optimize interventions to address unnecessary testing.
Disclosures
Mr. Husain, Ms. Gennarelli, Ms. White, Mr. Masciale, and Dr. Roman have nothing to disclose. The work of Dr. Roman and Dr. Korenstein on this project was supported, in part, by a Cancer Center Support Grant from the National Cancer Institute to Memorial Sloan Kettering Cancer Center (P30 CA008748).
1. Zhi M, Ding EL, Theisen-Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: a 15-year meta-analysis. PloS One. 2013;8(11):e78962. DOI: 10.1371/journal.pone.0078962. PubMed
2. Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med. 2005;20(6):520-524. DOI: 10.1111/j.1525-1497.2005.0094.x. PubMed
3. Eaton KP, Levy K, Soong C, et al. Evidence-based guidelines to eliminate repetitive laboratory testing. JAMA Intern Med. 2017;177(12):1833-1839. DOI: 10.1001/jamainternmed.2017.5152 PubMed
4. Choosing Wisely. http://www.choosingwisely.org/resources/. Accessed November 21, 2017.
5. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872. DOI: 10.1002/jhm.2645. PubMed
6. Thakkar RN, Kim D, Knight AM, Riedel S, Vaidya D, Wright SM. Impact of an educational intervention on the frequency of daily blood test orders for hospitalized patients. Am J Clin Pathol. 2015;143(3):393-397. DOI: 10.1309/AJCPJS4EEM7UAUBV. PubMed
7. Sheeler RD, Mundell T, Hurst SA, et al. Self-reported rationing behavior among US physicians: a national survey. J Gen Intern Med. 2016;31(12):1444-1451. DOI: 10.1007/s11606-016-3756-5. PubMed
8. Roman BR, Yang A, Masciale J, Korenstein D. Association of attitudes regarding overuse of inpatient laboratory testing with health care provider type. JAMA Intern Med. 2017;177(8):1205-1207. DOI: 10.1001/jamainternmed.2017.1634. PubMed
9. Schatz IJ, Realini JP, Charney E. Family practice, internal medicine, and pediatrics as partners in the education of generalists. Acad Med. 1996;71(1):35-39. PubMed
10. Johnson RE, Freeborn DK, Mullooly JP. Physicians’ use of laboratory, radiology, and drugs in a prepaid group practice HMO. Health Serv Res. 1985;20(5):525-547. PubMed
11. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393. DOI: 10.1001/jama.2014.15973. PubMed
12. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648. DOI: 10.1001/jamainternmed.2014.3337. PubMed
13. Smith CD, Korenstein D. Harnessing the power of peer pressure to reduce health care waste and improve clinical outcomes. Mayo Clin Proc. 2015;90(3):311-312. DOI: https://doi.org/10.1017/ice.2015.136 PubMed
14. Vaidya NA, Sierles FS, Raida MD, Fakhoury FJ, Przybeck TR, Cloninger CR. Relationship between specialty choice and medical student temperament and character assessed with Cloninger Inventory. Teach Learn Med. 2004;16(2):150-156. DOI: 10.1207/s15328015tlm1602_6 PubMed
15. Studdert DM, Mello MM, Sage WM, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA. 2005;293(21):2609-2617. DOI: 10.1001/jama.293.21.2609 PubMed
© 2018 Society of Hospital Medicine
A practical framework for understanding and reducing medical overuse: Conceptualizing overuse through the patient-clinician interaction
Overuse of medical services is the provision of healthcare services for which there is no medical basis or for which harms equal or exceed benefits.1 Overuse drives poor-quality care and unnecessary cost.2,3 Its high prevalence is recognized by patients,4 clinicians,5 and policymakers.6 Initiatives to reduce overuse have targeted physicians,7 the public,8 and medical educators9,10 but have had limited impact.11,12 Few studies have addressed methods for reducing overuse, and de-implementation of nonbeneficial practices has proved challenging.1,13,14 Existing models for reducing overuse are only theoretical15 or are focused on administrative decisions.16,17 We believe a practical framework is needed, and we used an iterative process, informed by expert opinion and discussion, to design one.
METHODS
The authors, who have expertise in overuse, value, medical education, evidence-based medicine, and implementation science, reviewed related conceptual frameworks18 and evidence regarding drivers of overuse. We organized these drivers into domains to create a draft framework, which we presented at Preventing Overdiagnosis 2015, a meeting of clinicians, patients, and policymakers interested in overuse. We incorporated feedback from meeting attendees to modify framework domains, and we performed structured searches (using key words in PubMed) to explore, and estimate the strength of, evidence supporting items within each domain. We rated supporting evidence as strong (studies found a clear correlation between a factor and overuse), moderate (evidence suggests such a correlation or demonstrates a correlation between a particular factor and utilization but not overuse per se), weak (only indirect evidence exists), or absent (no studies were identified evaluating a particular factor). All authors reached consensus on ratings.
Framework Principles and Evidence
Patient-centered definition of overuse. During framework development, defining clinical appropriateness emerged as the primary challenge to identifying and reducing overuse. Although some care generally is appropriate based on strong evidence of benefit, and some is inappropriate given a clear lack of benefit or harm, much care is of unclear or variable benefit. Practice guidelines can help identify overuse, but their utility may be limited by lack of evidence in specific clinical situations,19 and their recommendations may apply poorly to an individual patient. This presents challenges to using guidelines to identify and reduce overuse.
Despite limitations, the scope of overuse has been estimated by applying broad, often guideline-based, criteria for care appropriateness to administrative data.20 Unfortunately, these estimates provide little direction to clinicians and patients partnering to make usage decisions. During framework development, we identified the importance of a patient-level, patient-specific definition of overuse. This approach reinforces the importance of meeting patient needs while standardizing treatments to reduce overuse. A patient-centered approach may also assist professional societies and advocacy groups in developing actionable campaigns and may uncover evidence gaps.
Centrality of patient-clinician interaction. During framework development, the patient–clinician interaction emerged as the nexus through which drivers of overuse exert influence. The centrality of this interaction has been demonstrated in studies of the relationship between care continuity and overuse21 or utilization,22,23 by evidence that communication and patient–clinician relationships affect utilization,24 and by the observation that clinician training in shared decision-making reduces overuse.25 Our framework assumes that, at least when weighing clinically reasonable options, a patient-centered approach optimizes outcomes for the individual patient.
Incorporating drivers of overuse. We incorporated drivers of overuse into domains and related them to the patient–clinician interaction.26 Domains included the culture of healthcare consumption, patient factors and experiences, the practice environment, the culture of professional medicine, and clinician attitudes and beliefs.
We characterized the evidence illustrating how drivers within each domain influence healthcare use. The evidence for each domain is listed in Table 1.
RESULTS
The final framework is shown in the Figure. Within the healthcare system, patients are influenced by the culture of healthcare consumption, which varies within and among countries.27 Clinicians are influenced by the culture of medical care, which varies by practice setting,28 and by their training environment.29 Both clinicians and patients are influenced by the practice environment and by personal experiences. Ultimately, clinical decisions occur within the specific patient–clinician interaction.24 Table 1 lists each domain’s components, likely impact on overuse, and estimated strength of supporting evidence. Interventions can be conceptualized within appropriate domains or through the interaction between patient and clinician.
DISCUSSION
We developed a novel and practical conceptual framework for characterizing drivers of overuse and potential intervention points. To our knowledge, this is the first framework incorporating a patient-specific approach to overuse and emphasizing the patient–clinician interaction. Key strengths of framework development are inclusion of a range of perspectives and characterization of the evidence within each domain. Limitations include lack of a formal systematic review and broad, qualitative assessments of evidence strength. However, we believe this framework provides an important conceptual foundation for the study of overuse and interventions to reduce overuse.
Framework Applications
This framework, which highlights the many drivers of overuse, can facilitate understanding of overuse and help conceptualize change, prioritize research goals, and inform specific interventions. For policymakers, the framework can inform efforts to reduce overuse by emphasizing the need for complex interventions and by clarifying the likely impact of interventions targeting specific domains. Similarly, for clinicians and quality improvement professionals, the framework can ground root cause analyses of overuse-related problems and inform allocation of limited resources. Finally, the relatively weak evidence on the role of most acknowledged drivers of overuse suggests an important research agenda. Specifically, several pressing needs have been identified: defining relevant physician and patient cultural factors, investigating interventions to impact culture, defining practice environment features that optimize care appropriateness, and describing specific patient–clinician interaction practices that minimize overuse while providing needed care.
Targeting Interventions
Domains within the framework are influenced by different types of interventions, and different stakeholders may target different domains. For example:
- The culture of healthcare consumption may be influenced through public education (eg, Choosing Wisely® patient resources)30-32 and public health campaigns.
- The practice environment may be influenced by initiatives to align clinician incentives,33 team care,34 electronic health record interventions,35 and improved access.36
- Clinician attitudes and beliefs may be influenced by audit and feedback,37-40 reflection,41 role modeling,42 and education.43-45
- Patient attitudes and beliefs may be influenced by education, access to price and quality information, and increased engagement in care.46,47
- For clinicians, the patient–clinician interaction can be improved through training in communication and shared decision-making,25 through access to information (eg, costs) that can be easily shared with patients,48,49 and through novel visit structures (eg, scribes).50
- On the patient side, this interaction can be optimized with improved access (eg, through telemedicine)51,52 or with patient empowerment during hospitalization.
- The culture of medicine is difficult to influence. Change likely will occur through:
○ Regulatory interventions (eg, the Transforming Clinical Practice Initiative of the Center for Medicare & Medicaid Innovation).
○ Educational initiatives (eg, high-value care curricula of Alliance for Academic Internal Medicine/American College of Physicians53).
○ Medical journal features (eg, “Less Is More” in JAMA Internal Medicine54 and “Things We Do for No Reason” in Journal of Hospital Medicine).
○ Professional organizations (eg, Choosing Wisely®).
As organizations implement quality improvement initiatives to reduce overuse of services, the framework can be used to target interventions to relevant domains. For example, a hospital leader who wants to reduce opioid prescribing may use the framework to identify the factors that encourage prescribing in each domain—poor understanding of pain treatment (a clinician factor), desire for early discharge encouraging overly aggressive pain management (an environmental factor), patient demand for opioids combined with poor understanding of harms (patient factors), and poor communication regarding pain (a patient–clinician interaction factor). Although not all relevant factors can be addressed, their classification by domain facilitates intervention, in this case perhaps leading to a focus on clinician and patient education on opioids and development of a practical communication tool that targets 3 domains. Table 2 lists ways in which the framework informs approaches to this and other overused services in the hospital setting. Note that some drivers can be acknowledged without identifying targeted interventions.
Moving Forward
Through a multi-stakeholder iterative process, we developed a practical framework for understanding medical overuse and interventions to reduce it. Centered on the patient–clinician interaction, this framework explains overuse as the product of medical and patient culture, the practice environment and incentives, and other clinician and patient factors. Ultimately, care is implemented during the patient–clinician interaction, though few interventions to reduce overuse have focused on that domain.
Conceptualizing overuse through the patient–clinician interaction maintains focus on patients while promoting population health that is both better and lower in cost. This framework can guide interventions to reduce overuse in important parts of the healthcare system while ensuring the final goal of high-quality individualized patient care.
Acknowledgments
The authors thank Valerie Pocus for her help with the artistic design of the framework. An early version of the framework was presented at the 2015 Preventing Overdiagnosis meeting in Bethesda, Maryland.
Disclosures
Dr. Morgan received research support from the VA Health Services Research (CRE 12-307), Agency for Healthcare Research and Quality (AHRQ) (K08- HS18111). Dr. Leppin’s work was supported by CTSA Grant Number UL1 TR000135 from the National Center for Advancing Translational Sciences, a component of the National Institutes of Health (NIH). Dr. Korenstein’s work on this paper was supported by a Cancer Center Support Grant from the National Cancer Institute to Memorial Sloan Kettering Cancer Center (award number P30 CA008748). Dr. Morgan provided a self-developed lecture in a 3M-sponsored series on hospital epidemiology and has received honoraria for serving as a book and journal editor for Springer Publishing. Dr. Smith is employed by the American College of Physicians and owns stock in Merck, where her husband is employed. The other authors report no potential conflicts of interest.
2. Hood VL, Weinberger SE. High value, cost-conscious care: an international imperative. Eur J Intern Med. 2012;23(6):495-498. PubMed
3. Korenstein D, Falk R, Howell EA, Bishop T, Keyhani S. Overuse of health care services in the United States: an understudied problem. Arch Intern Med. 2012;172(2):171-178. PubMed
4. How SKH, Shih A, Lau J, Schoen C. Public Views on U.S. Health System Organization: A Call for New Directions. http://www.commonwealthfund.org/publications/data-briefs/2008/aug/public-views-on-u-s--health-system-organization--a-call-for-new-directions. Published August 1, 2008. Accessed December 11, 2015.
5. Sirovich BE, Woloshin S, Schwartz LM. Too little? Too much? Primary care physicians’ views on US health care: a brief report. Arch Intern Med. 2011;171(17):1582-1585. PubMed
6. Joint Commission, American Medical Association–Convened Physician Consortium for Performance Improvement. Proceedings From the National Summit on Overuse. https://www.jointcommission.org/assets/1/6/National_Summit_Overuse.pdf. Published September 24, 2012. Accessed July 8, 2016.
7. Cassel CK, Guest JA. Choosing Wisely: helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802. PubMed
8. Wolfson D, Santa J, Slass L. Engaging physicians and consumers in conversations about treatment overuse and waste: a short history of the Choosing Wisely campaign. Acad Med. 2014;89(7):990-995. PubMed
9. Smith CD, Levinson WS. A commitment to high-value care education from the internal medicine community. Ann Int Med. 2015;162(9):639-640. PubMed
10. Korenstein D, Kale M, Levinson W. Teaching value in academic environments: shifting the ivory tower. JAMA. 2013;310(16):1671-1672. PubMed
11. Kale MS, Bishop TF, Federman AD, Keyhani S. Trends in the overuse of ambulatory health care services in the United States. JAMA Intern Med. 2013;173(2):142-148. PubMed
12. Rosenberg A, Agiro A, Gottlieb M, et al. Early trends among seven recommendations from the Choosing Wisely campaign. JAMA Intern Med. 2015;175(12):1913-1920. PubMed
13. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1. PubMed
14. Ubel PA, Asch DA. Creating value in health by understanding and overcoming resistance to de-innovation. Health Aff (Millwood). 2015;34(2):239-244. PubMed
15. Powell AA, Bloomfield HE, Burgess DJ, Wilt TJ, Partin MR. A conceptual framework for understanding and reducing overuse by primary care providers. Med Care Res Rev. 2013;70(5):451-472. PubMed
16. Nassery N, Segal JB, Chang E, Bridges JF. Systematic overuse of healthcare services: a conceptual model. Appl Health Econ Health Policy. 2015;13(1):1-6. PubMed
17. Segal JB, Nassery N, Chang HY, Chang E, Chan K, Bridges JF. An index for measuring overuse of health care resources with Medicare claims. Med Care. 2015;53(3):230-236. PubMed
18. Reschovsky JD, Rich EC, Lake TK. Factors contributing to variations in physicians’ use of evidence at the point of care: a conceptual model. J Gen Intern Med. 2015;30(suppl 3):S555-S561. PubMed
19. Feinstein AR, Horwitz RI. Problems in the “evidence” of “evidence-based medicine.” Am J Med. 1997;103(6):529-535. PubMed
20. Makarov DV, Soulos PR, Gold HT, et al. Regional-level correlations in inappropriate imaging rates for prostate and breast cancers: potential implications for the Choosing Wisely campaign. JAMA Oncol. 2015;1(2):185-194. PubMed
21. Romano MJ, Segal JB, Pollack CE. The association between continuity of care and the overuse of medical procedures. JAMA Intern Med. 2015;175(7):1148-1154. PubMed
22. Bayliss EA, Ellis JL, Shoup JA, Zeng C, McQuillan DB, Steiner JF. Effect of continuity of care on hospital utilization for seniors with multiple medical conditions in an integrated health care system. Ann Fam Med. 2015;13(2):123-129. PubMed
23. Chaiyachati KH, Gordon K, Long T, et al. Continuity in a VA patient-centered medical home reduces emergency department visits. PloS One. 2014;9(5):e96356. PubMed
24. Underhill ML, Kiviniemi MT. The association of perceived provider-patient communication and relationship quality with colorectal cancer screening. Health Educ Behav. 2012;39(5):555-563. PubMed
25. Legare F, Labrecque M, Cauchon M, Castel J, Turcotte S, Grimshaw J. Training family physicians in shared decision-making to reduce the overuse of antibiotics in acute respiratory infections: a cluster randomized trial. CMAJ. 2012;184(13):E726-E734. PubMed
26. PerryUndum Research/Communication; for ABIM Foundation. Unnecessary Tests and Procedures in the Health Care System: What Physicians Say About the Problem, the Causes, and the Solutions: Results From a National Survey of Physicians. http://www.choosingwisely.org/wp-content/uploads/2015/04/Final-Choosing-Wisely-Survey-Report.pdf. Published May 1, 2014. Accessed July 8, 2016.
27. Corallo AN, Croxford R, Goodman DC, Bryan EL, Srivastava D, Stukel TA. A systematic review of medical practice variation in OECD countries. Health Policy. 2014;114(1):5-14. PubMed
28. Cutler D, Skinner JS, Stern AD, Wennberg DE. Physician Beliefs and Patient Preferences: A New Look at Regional Variation in Health Care Spending. NBER Working Paper No. 19320. Cambridge, MA: National Bureau of Economic Research; 2013. http://www.nber.org/papers/w19320. Published August 2013. Accessed July 8, 2016.
29. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648. PubMed
30. Huttner B, Goossens H, Verheij T, Harbarth S. Characteristics and outcomes of public campaigns aimed at improving the use of antibiotics in outpatients in high-income countries. Lancet Infect Dis. 2010;10(1):17-31. PubMed
31. Perz JF, Craig AS, Coffey CS, et al. Changes in antibiotic prescribing for children after a community-wide campaign. JAMA. 2002;287(23):3103-3109. PubMed
32. Sabuncu E, David J, Bernede-Bauduin C, et al. Significant reduction of antibiotic use in the community after a nationwide campaign in France, 2002-2007. PLoS Med. 2009;6(6):e1000084. PubMed
33. Flodgren G, Eccles MP, Shepperd S, Scott A, Parmelli E, Beyer FR. An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database Syst Rev. 2011;(7):CD009255. PubMed
34. Yoon J, Rose DE, Canelo I, et al. Medical home features of VHA primary care clinics and avoidable hospitalizations. J Gen Intern Med. 2013;28(9):1188-1194. PubMed
35. Gonzales R, Anderer T, McCulloch CE, et al. A cluster randomized trial of decision support strategies for reducing antibiotic use in acute bronchitis. JAMA Intern Med. 2013;173(4):267-273. PubMed
36. Davis MM, Balasubramanian BA, Cifuentes M, et al. Clinician staffing, scheduling, and engagement strategies among primary care practices delivering integrated care. J Am Board Fam Med. 2015;28(suppl 1):S32-S40. PubMed
37. Dine CJ, Miller J, Fuld A, Bellini LM, Iwashyna TJ. Educating physicians-in-training about resource utilization and their own outcomes of care in the inpatient setting. J Grad Med Educ. 2010;2(2):175-180. PubMed
38. Elligsen M, Walker SA, Pinto R, et al. Audit and feedback to reduce broad-spectrum antibiotic use among intensive care unit patients: a controlled interrupted time series analysis. Infect Control Hosp Epidemiol. 2012;33(4):354-361. PubMed
39. Gerber JS, Prasad PA, Fiks AG, et al. Effect of an outpatient antimicrobial stewardship intervention on broad-spectrum antibiotic prescribing by primary care pediatricians: a randomized trial. JAMA. 2013;309(22):2345-2352. PubMed
40. Taggart LR, Leung E, Muller MP, Matukas LM, Daneman N. Differential outcome of an antimicrobial stewardship audit and feedback program in two intensive care units: a controlled interrupted time series study. BMC Infect Dis. 2015;15:480. PubMed
41. Hughes DR, Sunshine JH, Bhargavan M, Forman H. Physician self-referral for imaging and the cost of chronic care for Medicare beneficiaries. Med Care. 2011;49(9):857-864. PubMed
42. Ryskina KL, Pesko MF, Gossey JT, Caesar EP, Bishop TF. Brand name statin prescribing in a resident ambulatory practice: implications for teaching cost-conscious medicine. J Grad Med Educ. 2014;6(3):484-488. PubMed
43. Bhatia RS, Milford CE, Picard MH, Weiner RB. An educational intervention reduces the rate of inappropriate echocardiograms on an inpatient medical service. JACC Cardiovasc Imaging. 2013;6(5):545-555. PubMed
44. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii-iv, 1-72. PubMed
45. Wilson I, Cowin LS, Johnson M, Young H. Professional identity in medical students: pedagogical challenges to medical education. Teach Learn Med. 2013;25(4):369-373. PubMed
46. Berger Z, Flickinger TE, Pfoh E, Martinez KA, Dy SM. Promoting engagement by patients and families to reduce adverse events in acute care settings: a systematic review. BMJ Qual Saf. 2014;23(7):548-555. PubMed
47. Dykes PC, Stade D, Chang F, et al. Participatory design and development of a patient-centered toolkit to engage hospitalized patients and care partners in their plan of care. AMIA Annu Symp Proc. 2014;2014:486-495. PubMed
48. Coxeter P, Del Mar CB, McGregor L, Beller EM, Hoffmann TC. Interventions to facilitate shared decision making to address antibiotic use for acute respiratory infections in primary care. Cochrane Database Syst Rev. 2015;(11):CD010907. PubMed
49. Stacey D, Legare F, Col NF, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014;(1):CD001431. PubMed
50. Bank AJ, Gage RM. Annual impact of scribes on physician productivity and revenue in a cardiology clinic. Clinicoecon Outcomes Res. 2015;7:489-495. PubMed
51. Lyles CR, Sarkar U, Schillinger D, et al. Refilling medications through an online patient portal: consistent improvements in adherence across racial/ethnic groups. J Am Med Inform Assoc. 2016;23(e1):e28-e33. PubMed
52. Kruse CS, Bolton K, Freriks G. The effect of patient portals on quality outcomes and its implications to meaningful use: a systematic review. J Med Internet Res. 2015;17(2):e44. PubMed
53. Smith CD. Teaching high-value, cost-conscious care to residents: the Alliance for Academic Internal Medicine-American College of Physicians curriculum. Ann Intern Med. 2012;157(4):284-286. PubMed
54. Redberg RF. Less is more. Arch Intern Med. 2010;170(7):584. PubMed
65. Birkmeyer JD, Reames BN, McCulloch P, Carr AJ, Campbell WB, Wennberg JE. Understanding of regional variation in the use of surgery. Lancet. 2013;382(9898):1121-1129. PubMed
66. Pearson SD, Goldman L, Orav EJ, et al. Triage decisions for emergency department patients with chest pain: do physicians’ risk attitudes make the difference? J Gen Intern Med. 1995;10(10):557-564. PubMed
67. Tubbs EP, Elrod JA, Flum DR. Risk taking and tolerance of uncertainty: implications for surgeons. J Surg Res. 2006;131(1):1-6. PubMed
68. Zaat JO, van Eijk JT. General practitioners’ uncertainty, risk preference, and use of laboratory tests. Med Care. 1992;30(9):846-854. PubMed
69. Barnato AE, Tate JA, Rodriguez KL, Zickmund SL, Arnold RM. Norms of decision making in the ICU: a case study of two academic medical centers at the extremes of end-of-life treatment intensity. Intensive Care Med. 2012;38(11):1886-1896. PubMed
70. Fisher ES, Wennberg JE, Stukel TA, et al. Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351-1362. PubMed
71. Yasaitis LC, Bynum JP, Skinner JS. Association between physician supply, local practice norms, and outpatient visit rates. Med Care. 2013;51(6):524-531. PubMed
72. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393. PubMed
73. Ryskina KL, Smith CD, Weissman A, et al. U.S. internal medicine residents’ knowledge and practice of high-value care: a national survey. Acad Med. 2015;90(10):1373-1379. PubMed
74. Khullar D, Chokshi DA, Kocher R, et al. Behavioral economics and physician compensation—promise and challenges. N Engl J Med. 2015;372(24):2281-2283. PubMed
75. Landon BE, Reschovsky J, Reed M, Blumenthal D. Personal, organizational, and market level influences on physicians’ practice patterns: results of a national survey of primary care physicians. Med Care. 2001;39(8):889-905. PubMed
76. Fanari Z, Abraham N, Kolm P, et al. Aggressive measures to decrease “door to balloon” time and incidence of unnecessary cardiac catheterization: potential risks and role of quality improvement. Mayo Clin Proc. 2015;90(12):1614-1622. PubMed
77. Kerr EA, Lucatorto MA, Holleman R, Hogan MM, Klamerus ML, Hofer TP. Monitoring performance for blood pressure management among patients with diabetes mellitus: too much of a good thing? Arch Intern Med. 2012;172(12):938-945. PubMed
78. Verhofstede R, Smets T, Cohen J, Costantini M, Van Den Noortgate N, Deliens L. Implementing the care programme for the last days of life in an acute geriatric hospital ward: a phase 2 mixed method study. BMC Palliat Care. 2016;15:27. PubMed
Medical services overuse is the provision of healthcare services for which there is no medical basis or for which harms equal or exceed benefits.1 This overuse drives poor-quality care and unnecessary cost.2,3 The high prevalence of overuse is recognized by patients,4 clinicians,5 and policymakers.6 Initiatives to reduce overuse have targeted physicians,7 the public,8 and medical educators9,10 but have had limited impact.11,12 Few studies have addressed methods for reducing overuse, and de-implementation of nonbeneficial practices has proved challenging.1,13,14 Models for reducing overuse are only theoretical15 or are focused on administrative decisions.16,17 We think a practical framework is needed. We used an iterative process, informed by expert opinion and discussion, to design such a framework.
METHODS
The authors, who have expertise in overuse, value, medical education, evidence-based medicine, and implementation science, reviewed related conceptual frameworks18 and evidence regarding drivers of overuse. We organized these drivers into domains to create a draft framework, which we presented at Preventing Overdiagnosis 2015, a meeting of clinicians, patients, and policymakers interested in overuse. We incorporated feedback from meeting attendees to modify framework domains, and we performed structured searches (using keywords in PubMed) to explore, and estimate the strength of, evidence supporting items within each domain. We rated supporting evidence as strong (studies found a clear correlation between a factor and overuse), moderate (evidence suggests such a correlation or demonstrates a correlation between a particular factor and utilization but not overuse per se), weak (only indirect evidence exists), or absent (no studies identified evaluating a particular factor). All authors reached consensus on ratings.
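The four-level rating scale described above can be sketched as a simple enumeration. This is an illustrative sketch only; the class and function names are hypothetical and not part of the study's methods.

```python
from enum import Enum

class EvidenceStrength(Enum):
    """Illustrative encoding of the four-level evidence rating (hypothetical names)."""
    STRONG = "clear correlation between a factor and overuse"
    MODERATE = "correlation suggested, or shown for utilization but not overuse per se"
    WEAK = "only indirect evidence exists"
    ABSENT = "no studies identified evaluating the factor"

def consensus(ratings):
    """Return the unanimous rating, or None if the raters disagree (hypothetical helper)."""
    return ratings[0] if ratings and all(r == ratings[0] for r in ratings) else None

# Example: three authors independently rate one framework item.
votes = [EvidenceStrength.MODERATE, EvidenceStrength.MODERATE, EvidenceStrength.MODERATE]
```

In this sketch, `consensus(votes)` returns `EvidenceStrength.MODERATE` only when all raters agree, mirroring the paper's requirement that all authors reach consensus on each rating.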
Framework Principles and Evidence
Patient-centered definition of overuse. During framework development, defining clinical appropriateness emerged as the primary challenge to identifying and reducing overuse. Although some care generally is appropriate based on strong evidence of benefit, and some is inappropriate given a clear lack of benefit or clear evidence of harm, much care is of unclear or variable benefit. Practice guidelines can help identify overuse, but their utility may be limited by lack of evidence in specific clinical situations,19 and their recommendations may apply poorly to an individual patient. This presents challenges to using guidelines to identify and reduce overuse.
Despite limitations, the scope of overuse has been estimated by applying broad, often guideline-based, criteria for care appropriateness to administrative data.20 Unfortunately, these estimates provide little direction to clinicians and patients partnering to make usage decisions. During framework development, we identified the importance of a patient-level, patient-specific definition of overuse. This approach reinforces the importance of meeting patient needs while standardizing treatments to reduce overuse. A patient-centered approach may also assist professional societies and advocacy groups in developing actionable campaigns and may uncover evidence gaps.
Centrality of the patient–clinician interaction. During framework development, the patient–clinician interaction emerged as the nexus through which drivers of overuse exert influence. The centrality of this interaction has been demonstrated in studies of the relationship between care continuity and overuse21 or utilization,22,23 by evidence that communication and patient–clinician relationships affect utilization,24 and by the observation that clinician training in shared decision-making reduces overuse.25 The framework assumes that, at least when weighing clinically reasonable options, a patient-centered approach optimizes outcomes for the individual patient.
Incorporating drivers of overuse. We incorporated drivers of overuse into domains and related them to the patient–clinician interaction.26 Domains included the culture of healthcare consumption, patient factors and experiences, the practice environment, the culture of professional medicine, and clinician attitudes and beliefs.
We characterized the evidence illustrating how drivers within each domain influence healthcare use. The evidence for each domain is listed in Table 1.
RESULTS
The final framework is shown in the Figure. Within the healthcare system, patients are influenced by the culture of healthcare consumption, which varies within and among countries.27 Clinicians are influenced by the culture of medical care, which varies by practice setting,28 and by their training environment.29 Both clinicians and patients are influenced by the practice environment and by personal experiences. Ultimately, clinical decisions occur within the specific patient–clinician interaction.24 Table 1 lists each domain’s components, likely impact on overuse, and estimated strength of supporting evidence. Interventions can be conceptualized within appropriate domains or through the interaction between patient and clinician.
DISCUSSION
We developed a novel and practical conceptual framework for characterizing drivers of overuse and potential intervention points. To our knowledge, this is the first framework incorporating a patient-specific approach to overuse and emphasizing the patient–clinician interaction. Key strengths of framework development are inclusion of a range of perspectives and characterization of the evidence within each domain. Limitations include lack of a formal systematic review and broad, qualitative assessments of evidence strength. However, we believe this framework provides an important conceptual foundation for the study of overuse and interventions to reduce overuse.
Framework Applications
This framework, which highlights the many drivers of overuse, can facilitate understanding of overuse and help conceptualize change, prioritize research goals, and inform specific interventions. For policymakers, the framework can inform efforts to reduce overuse by emphasizing the need for complex interventions and by clarifying the likely impact of interventions targeting specific domains. Similarly, for clinicians and quality improvement professionals, the framework can ground root cause analyses of overuse-related problems and inform allocation of limited resources. Finally, the relatively weak evidence on the role of most acknowledged drivers of overuse suggests an important research agenda. Specifically, several pressing needs have been identified: defining relevant physician and patient cultural factors, investigating interventions to impact culture, defining practice environment features that optimize care appropriateness, and describing specific patient–clinician interaction practices that minimize overuse while providing needed care.
Targeting Interventions
Domains within the framework are influenced by different types of interventions, and different stakeholders may target different domains. For example:
- The culture of healthcare consumption may be influenced through public education (eg, Choosing Wisely® patient resources)30-32 and public health campaigns.
- The practice environment may be influenced by initiatives to align clinician incentives,33 team care,34 electronic health record interventions,35 and improved access.36
- Clinician attitudes and beliefs may be influenced by audit and feedback,37-40 reflection,41 role modeling,42 and education.43-45
- Patient attitudes and beliefs may be influenced by education, access to price and quality information, and increased engagement in care.46,47
- For clinicians, the patient–clinician interaction can be improved through training in communication and shared decision-making,25 through access to information (eg, costs) that can be easily shared with patients,48,49 and through novel visit structures (eg, scribes).50
- On the patient side, this interaction can be optimized with improved access (eg, through telemedicine)51,52 or with patient empowerment during hospitalization.
- The culture of medicine is difficult to influence. Change likely will occur through:
○ Regulatory interventions (eg, the Transforming Clinical Practice Initiative of the Center for Medicare & Medicaid Innovation).
○ Educational initiatives (eg, the high-value care curricula of the Alliance for Academic Internal Medicine and the American College of Physicians53).
○ Medical journal features (eg, “Less Is More” in JAMA Internal Medicine54 and “Things We Do for No Reason” in Journal of Hospital Medicine).
○ Professional organizations (eg, Choosing Wisely®).
As organizations implement quality improvement initiatives to reduce overuse of services, the framework can be used to target interventions to relevant domains. For example, a hospital leader who wants to reduce opioid prescribing may use the framework to identify the factors that encourage prescribing in each domain—poor understanding of pain treatment (a clinician factor), desire for early discharge encouraging overly aggressive pain management (an environmental factor), patient demand for opioids combined with poor understanding of harms (patient factors), and poor communication regarding pain (a patient–clinician interaction factor). Although not all relevant factors can be addressed, their classification by domain facilitates intervention, in this case perhaps leading to a focus on clinician and patient education on opioids and development of a practical communication tool that targets 3 domains. Table 2 lists ways in which the framework informs approaches to this and other overused services in the hospital setting. Note that some drivers can be acknowledged without identifying targeted interventions.
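The worked opioid example above amounts to classifying candidate drivers by framework domain and then reading off intervention targets. A minimal sketch of that bookkeeping follows; all driver and domain labels are taken from the example in the text, but the data structure and helper are hypothetical illustrations, not part of the framework itself.

```python
# Illustrative sketch: drivers of opioid over-prescribing, classified by
# framework domain as in the worked example above (hypothetical structure).
drivers = {
    "poor understanding of pain treatment": "clinician",
    "pressure for early discharge": "practice environment",
    "patient demand for opioids": "patient",
    "poor understanding of opioid harms": "patient",
    "poor communication regarding pain": "patient-clinician interaction",
}

def factors_in_domain(domain):
    """List the drivers falling within one domain, to help target an intervention."""
    return sorted(f for f, d in drivers.items() if d == domain)
```

For instance, `factors_in_domain("patient")` surfaces the two patient factors, suggesting where patient education might be directed; the same grouping for the other domains motivates the clinician-education and communication-tool interventions described above.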
Moving Forward
Through a multi-stakeholder iterative process, we developed a practical framework for understanding medical overuse and interventions to reduce it. Centered on the patient–clinician interaction, this framework explains overuse as the product of medical and patient culture, the practice environment and incentives, and other clinician and patient factors. Ultimately, care is implemented during the patient–clinician interaction, though few interventions to reduce overuse have focused on that domain.
Conceptualizing overuse through the patient–clinician interaction maintains focus on patients while promoting population health that is both better and lower in cost. This framework can guide interventions to reduce overuse in important parts of the healthcare system while ensuring the final goal of high-quality individualized patient care.
Acknowledgments
The authors thank Valerie Pocus for helping with the artistic design of the framework figure. An early version of the framework was presented at the 2015 Preventing Overdiagnosis meeting in Bethesda, Maryland.
Disclosures
Dr. Morgan received research support from the VA Health Services Research and Development Service (CRE 12-307) and the Agency for Healthcare Research and Quality (AHRQ) (K08-HS18111). Dr. Leppin’s work was supported by CTSA Grant Number UL1 TR000135 from the National Center for Advancing Translational Sciences, a component of the National Institutes of Health (NIH). Dr. Korenstein’s work on this paper was supported by a Cancer Center Support Grant from the National Cancer Institute to Memorial Sloan Kettering Cancer Center (award number P30 CA008748). Dr. Morgan provided a self-developed lecture in a 3M-sponsored series on hospital epidemiology and has received honoraria for serving as a book and journal editor for Springer Publishing. Dr. Smith is employed by the American College of Physicians and owns stock in Merck, where her husband is employed. The other authors report no potential conflicts of interest.
2. Hood VL, Weinberger SE. High value, cost-conscious care: an international imperative. Eur J Intern Med. 2012;23(6):495-498. PubMed
3. Korenstein D, Falk R, Howell EA, Bishop T, Keyhani S. Overuse of health care services in the United States: an understudied problem. Arch Intern Med. 2012;172(2):171-178. PubMed
4. How SKH, Shih A, Lau J, Schoen C. Public Views on U.S. Health System Organization: A Call for New Directions. http://www.commonwealthfund.org/publications/data-briefs/2008/aug/public-views-on-u-s--health-system-organization--a-call-for-new-directions. Published August 1, 2008. Accessed December 11, 2015.
5. Sirovich BE, Woloshin S, Schwartz LM. Too little? Too much? Primary care physicians’ views on US health care: a brief report. Arch Intern Med. 2011;171(17):1582-1585. PubMed
6. Joint Commission, American Medical Association–Convened Physician Consortium for Performance Improvement. Proceedings From the National Summit on Overuse. https://www.jointcommission.org/assets/1/6/National_Summit_Overuse.pdf. Published September 24, 2012. Accessed July 8, 2016.
7. Cassel CK, Guest JA. Choosing Wisely: helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802. PubMed
8. Wolfson D, Santa J, Slass L. Engaging physicians and consumers in conversations about treatment overuse and waste: a short history of the Choosing Wisely campaign. Acad Med. 2014;89(7):990-995. PubMed
9. Smith CD, Levinson WS. A commitment to high-value care education from the internal medicine community. Ann Intern Med. 2015;162(9):639-640. PubMed
10. Korenstein D, Kale M, Levinson W. Teaching value in academic environments: shifting the ivory tower. JAMA. 2013;310(16):1671-1672. PubMed
11. Kale MS, Bishop TF, Federman AD, Keyhani S. Trends in the overuse of ambulatory health care services in the United States. JAMA Intern Med. 2013;173(2):142-148. PubMed
12. Rosenberg A, Agiro A, Gottlieb M, et al. Early trends among seven recommendations from the Choosing Wisely campaign. JAMA Intern Med. 2015;175(12):1913-1920. PubMed
13. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1. PubMed
2. Hood VL, Weinberger SE. High value, cost-conscious care: an international imperative. Eur J Intern Med. 2012;23(6):495-498. PubMed
3. Korenstein D, Falk R, Howell EA, Bishop T, Keyhani S. Overuse of health care services in the United States: an understudied problem. Arch Intern Med. 2012;172(2):171-178. PubMed
4. How SKH, Shih A, Lau J, Schoen C. Public Views on U.S. Health System Organization: A Call for New Directions. http://www.commonwealthfund.org/publications/data-briefs/2008/aug/public-views-on-u-s--health-system-organization--a-call-for-new-directions. Published August 1, 2008. Accessed December 11, 2015.
5. Sirovich BE, Woloshin S, Schwartz LM. Too little? Too much? Primary care physicians’ views on US health care: a brief report. Arch Intern Med. 2011;171(17):1582-1585. PubMed
6. Joint Commission, American Medical Association–Convened Physician Consortium for Performance Improvement. Proceedings From the National Summit on Overuse. https://www.jointcommission.org/assets/1/6/National_Summit_Overuse.pdf. Published September 24, 2012. Accessed July 8, 2016.
7. Cassel CK, Guest JA. Choosing Wisely: helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802. PubMed
8. Wolfson D, Santa J, Slass L. Engaging physicians and consumers in conversations about treatment overuse and waste: a short history of the Choosing Wisely campaign. Acad Med. 2014;89(7):990-995. PubMed
9. Smith CD, Levinson WS. A commitment to high-value care education from the internal medicine community. Ann Intern Med. 2015;162(9):639-640. PubMed
10. Korenstein D, Kale M, Levinson W. Teaching value in academic environments: shifting the ivory tower. JAMA. 2013;310(16):1671-1672. PubMed
11. Kale MS, Bishop TF, Federman AD, Keyhani S. Trends in the overuse of ambulatory health care services in the United States. JAMA Intern Med. 2013;173(2):142-148. PubMed
12. Rosenberg A, Agiro A, Gottlieb M, et al. Early trends among seven recommendations from the Choosing Wisely campaign. JAMA Intern Med. 2015;175(12):1913-1920. PubMed
13. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1. PubMed
14. Ubel PA, Asch DA. Creating value in health by understanding and overcoming resistance to de-innovation. Health Aff (Millwood). 2015;34(2):239-244. PubMed
15. Powell AA, Bloomfield HE, Burgess DJ, Wilt TJ, Partin MR. A conceptual framework for understanding and reducing overuse by primary care providers. Med Care Res Rev. 2013;70(5):451-472. PubMed
16. Nassery N, Segal JB, Chang E, Bridges JF. Systematic overuse of healthcare services: a conceptual model. Appl Health Econ Health Policy. 2015;13(1):1-6. PubMed
17. Segal JB, Nassery N, Chang HY, Chang E, Chan K, Bridges JF. An index for measuring overuse of health care resources with Medicare claims. Med Care. 2015;53(3):230-236. PubMed
18. Reschovsky JD, Rich EC, Lake TK. Factors contributing to variations in physicians’ use of evidence at the point of care: a conceptual model. J Gen Intern Med. 2015;30(suppl 3):S555-S561. PubMed
19. Feinstein AR, Horwitz RI. Problems in the “evidence” of “evidence-based medicine.” Am J Med. 1997;103(6):529-535. PubMed
20. Makarov DV, Soulos PR, Gold HT, et al. Regional-level correlations in inappropriate imaging rates for prostate and breast cancers: potential implications for the Choosing Wisely campaign. JAMA Oncol. 2015;1(2):185-194. PubMed
21. Romano MJ, Segal JB, Pollack CE. The association between continuity of care and the overuse of medical procedures. JAMA Intern Med. 2015;175(7):1148-1154. PubMed
22. Bayliss EA, Ellis JL, Shoup JA, Zeng C, McQuillan DB, Steiner JF. Effect of continuity of care on hospital utilization for seniors with multiple medical conditions in an integrated health care system. Ann Fam Med. 2015;13(2):123-129. PubMed
23. Chaiyachati KH, Gordon K, Long T, et al. Continuity in a VA patient-centered medical home reduces emergency department visits. PLoS One. 2014;9(5):e96356. PubMed
24. Underhill ML, Kiviniemi MT. The association of perceived provider-patient communication and relationship quality with colorectal cancer screening. Health Educ Behav. 2012;39(5):555-563. PubMed
25. Legare F, Labrecque M, Cauchon M, Castel J, Turcotte S, Grimshaw J. Training family physicians in shared decision-making to reduce the overuse of antibiotics in acute respiratory infections: a cluster randomized trial. CMAJ. 2012;184(13):E726-E734. PubMed
26. PerryUndem Research/Communication; for ABIM Foundation. Unnecessary Tests and Procedures in the Health Care System: What Physicians Say About the Problem, the Causes, and the Solutions: Results From a National Survey of Physicians. http://www.choosingwisely.org/wp-content/uploads/2015/04/Final-Choosing-Wisely-Survey-Report.pdf. Published May 1, 2014. Accessed July 8, 2016.
27. Corallo AN, Croxford R, Goodman DC, Bryan EL, Srivastava D, Stukel TA. A systematic review of medical practice variation in OECD countries. Health Policy. 2014;114(1):5-14. PubMed
28. Cutler D, Skinner JS, Stern AD, Wennberg DE. Physician Beliefs and Patient Preferences: A New Look at Regional Variation in Health Care Spending. NBER Working Paper No. 19320. Cambridge, MA: National Bureau of Economic Research; 2013. http://www.nber.org/papers/w19320. Published August 2013. Accessed July 8, 2016.
29. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648. PubMed
30. Huttner B, Goossens H, Verheij T, Harbarth S. Characteristics and outcomes of public campaigns aimed at improving the use of antibiotics in outpatients in high-income countries. Lancet Infect Dis. 2010;10(1):17-31. PubMed
31. Perz JF, Craig AS, Coffey CS, et al. Changes in antibiotic prescribing for children after a community-wide campaign. JAMA. 2002;287(23):3103-3109. PubMed
32. Sabuncu E, David J, Bernede-Bauduin C, et al. Significant reduction of antibiotic use in the community after a nationwide campaign in France, 2002-2007. PLoS Med. 2009;6(6):e1000084. PubMed
33. Flodgren G, Eccles MP, Shepperd S, Scott A, Parmelli E, Beyer FR. An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database Syst Rev. 2011;(7):CD009255. PubMed
34. Yoon J, Rose DE, Canelo I, et al. Medical home features of VHA primary care clinics and avoidable hospitalizations. J Gen Intern Med. 2013;28(9):1188-1194. PubMed
35. Gonzales R, Anderer T, McCulloch CE, et al. A cluster randomized trial of decision support strategies for reducing antibiotic use in acute bronchitis. JAMA Intern Med. 2013;173(4):267-273. PubMed
36. Davis MM, Balasubramanian BA, Cifuentes M, et al. Clinician staffing, scheduling, and engagement strategies among primary care practices delivering integrated care. J Am Board Fam Med. 2015;28(suppl 1):S32-S40. PubMed
37. Dine CJ, Miller J, Fuld A, Bellini LM, Iwashyna TJ. Educating physicians-in-training about resource utilization and their own outcomes of care in the inpatient setting. J Grad Med Educ. 2010;2(2):175-180. PubMed
38. Elligsen M, Walker SA, Pinto R, et al. Audit and feedback to reduce broad-spectrum antibiotic use among intensive care unit patients: a controlled interrupted time series analysis. Infect Control Hosp Epidemiol. 2012;33(4):354-361. PubMed
39. Gerber JS, Prasad PA, Fiks AG, et al. Effect of an outpatient antimicrobial stewardship intervention on broad-spectrum antibiotic prescribing by primary care pediatricians: a randomized trial. JAMA. 2013;309(22):2345-2352. PubMed
40. Taggart LR, Leung E, Muller MP, Matukas LM, Daneman N. Differential outcome of an antimicrobial stewardship audit and feedback program in two intensive care units: a controlled interrupted time series study. BMC Infect Dis. 2015;15:480. PubMed
41. Hughes DR, Sunshine JH, Bhargavan M, Forman H. Physician self-referral for imaging and the cost of chronic care for Medicare beneficiaries. Med Care. 2011;49(9):857-864. PubMed
42. Ryskina KL, Pesko MF, Gossey JT, Caesar EP, Bishop TF. Brand name statin prescribing in a resident ambulatory practice: implications for teaching cost-conscious medicine. J Grad Med Educ. 2014;6(3):484-488. PubMed
43. Bhatia RS, Milford CE, Picard MH, Weiner RB. An educational intervention reduces the rate of inappropriate echocardiograms on an inpatient medical service. JACC Cardiovasc Imaging. 2013;6(5):545-555. PubMed
44. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii-iv, 1-72. PubMed
45. Wilson I, Cowin LS, Johnson M, Young H. Professional identity in medical students: pedagogical challenges to medical education. Teach Learn Med. 2013;25(4):369-373. PubMed
46. Berger Z, Flickinger TE, Pfoh E, Martinez KA, Dy SM. Promoting engagement by patients and families to reduce adverse events in acute care settings: a systematic review. BMJ Qual Saf. 2014;23(7):548-555. PubMed
47. Dykes PC, Stade D, Chang F, et al. Participatory design and development of a patient-centered toolkit to engage hospitalized patients and care partners in their plan of care. AMIA Annu Symp Proc. 2014;2014:486-495. PubMed
48. Coxeter P, Del Mar CB, McGregor L, Beller EM, Hoffmann TC. Interventions to facilitate shared decision making to address antibiotic use for acute respiratory infections in primary care. Cochrane Database Syst Rev. 2015;(11):CD010907. PubMed
49. Stacey D, Legare F, Col NF, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014;(1):CD001431. PubMed
50. Bank AJ, Gage RM. Annual impact of scribes on physician productivity and revenue in a cardiology clinic. Clinicoecon Outcomes Res. 2015;7:489-495. PubMed
51. Lyles CR, Sarkar U, Schillinger D, et al. Refilling medications through an online patient portal: consistent improvements in adherence across racial/ethnic groups. J Am Med Inform Assoc. 2016;23(e1):e28-e33. PubMed
52. Kruse CS, Bolton K, Freriks G. The effect of patient portals on quality outcomes and its implications to meaningful use: a systematic review. J Med Internet Res. 2015;17(2):e44. PubMed
53. Smith CD. Teaching high-value, cost-conscious care to residents: the Alliance for Academic Internal Medicine-American College of Physicians curriculum. Ann Intern Med. 2012;157(4):284-286. PubMed
54. Redberg RF. Less is more. Arch Intern Med. 2010;170(7):584. PubMed
65. Birkmeyer JD, Reames BN, McCulloch P, Carr AJ, Campbell WB, Wennberg JE. Understanding of regional variation in the use of surgery. Lancet. 2013;382(9898):1121-1129. PubMed
66. Pearson SD, Goldman L, Orav EJ, et al. Triage decisions for emergency department patients with chest pain: do physicians’ risk attitudes make the difference? J Gen Intern Med. 1995;10(10):557-564. PubMed
67. Tubbs EP, Elrod JA, Flum DR. Risk taking and tolerance of uncertainty: implications for surgeons. J Surg Res. 2006;131(1):1-6. PubMed
68. Zaat JO, van Eijk JT. General practitioners’ uncertainty, risk preference, and use of laboratory tests. Med Care. 1992;30(9):846-854. PubMed
69. Barnato AE, Tate JA, Rodriguez KL, Zickmund SL, Arnold RM. Norms of decision making in the ICU: a case study of two academic medical centers at the extremes of end-of-life treatment intensity. Intensive Care Med. 2012;38(11):1886-1896. PubMed
70. Fisher ES, Wennberg JE, Stukel TA, et al. Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351-1362. PubMed
71. Yasaitis LC, Bynum JP, Skinner JS. Association between physician supply, local practice norms, and outpatient visit rates. Med Care. 2013;51(6):524-531. PubMed
72. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393. PubMed
73. Ryskina KL, Smith CD, Weissman A, et al. U.S. internal medicine residents’ knowledge and practice of high-value care: a national survey. Acad Med. 2015;90(10):1373-1379. PubMed
74. Khullar D, Chokshi DA, Kocher R, et al. Behavioral economics and physician compensation—promise and challenges. N Engl J Med. 2015;372(24):2281-2283. PubMed
75. Landon BE, Reschovsky J, Reed M, Blumenthal D. Personal, organizational, and market level influences on physicians’ practice patterns: results of a national survey of primary care physicians. Med Care. 2001;39(8):889-905. PubMed
76. Fanari Z, Abraham N, Kolm P, et al. Aggressive measures to decrease “door to balloon” time and incidence of unnecessary cardiac catheterization: potential risks and role of quality improvement. Mayo Clin Proc. 2015;90(12):1614-1622. PubMed
77. Kerr EA, Lucatorto MA, Holleman R, Hogan MM, Klamerus ML, Hofer TP. Monitoring performance for blood pressure management among patients with diabetes mellitus: too much of a good thing? Arch Intern Med. 2012;172(12):938-945. PubMed
78. Verhofstede R, Smets T, Cohen J, Costantini M, Van Den Noortgate N, Deliens L. Implementing the care programme for the last days of life in an acute geriatric hospital ward: a phase 2 mixed method study. BMC Palliat Care. 2016;15:27. PubMed
© 2017 Society of Hospital Medicine
Improving Feedback to Ward Residents
Feedback has long been recognized as pivotal to the attainment of clinical acumen and skills in medical training.1 Formative feedback can give trainees insight into their strengths and weaknesses, and provide them with clear goals and methods to attain those goals.1, 2 In fact, feedback given regularly over time by a respected figure has been shown to improve physician performance.3 However, most faculty are not trained to provide effective feedback. As a result, supervisors often believe they are giving more feedback than trainees believe they are receiving, and residents receive little feedback that they perceive as useful.4 Most residents receive little to no feedback on their communication skills4 or professionalism,5 and rarely receive corrective feedback.6, 7
Faculty may fail to give feedback to residents for a number of reasons. The barriers most commonly cited in the literature are discomfort with criticizing residents,6, 7 lack of time,4 and lack of direct observation of residents in clinical settings.8-10 Several studies have examined tools to guide feedback and address the barrier of discomfort with criticism.6, 7, 11 Some showed improvements in overall feedback, though supervisors often gave only positive feedback and avoided addressing weaknesses.6, 7, 11 Despite the recognition of lack of time as a barrier,4 most studies of feedback interventions have not set aside dedicated time for feedback to occur.6, 7, 11, 12 Finally, a number of studies used objective structured clinical examinations (OSCEs) coupled with immediate feedback to improve direct observation of residents, with success in improving feedback related to the encounter.9, 10, 13 To address these gaps in the literature, our study targeted 2 specific barriers to feedback for residents: lack of time and discomfort with giving feedback.
The aim of this study was to improve Internal Medicine (IM) residents' and attendings' experiences with feedback on the wards using a pocket card and a dedicated feedback session. We developed and evaluated the pocket feedback card and session for faculty to improve the quality and frequency of their feedback to residents in the inpatient setting. We performed a randomized trial to evaluate our intervention. We hypothesized that the intervention would: 1) improve the quality and quantity of attendings' feedback given to IM ward residents; and 2) improve attendings' comfort with feedback delivery on the wards.
PARTICIPANTS AND METHODS
Setting
The study was performed at Mount Sinai Medical Center in New York City, New York, between July 2008 and January 2009.
Participants
Participants in this study were IM residents and ward teaching attendings on inpatient ward teams at Mount Sinai Medical Center from July 2008 to January 2009. There are 12 ward teams on 3 inpatient services (each service has 4 teams) during each block at our hospital. Ward teams are made up of 1 teaching attending, 1 resident, 1 to 3 interns, and 1 to 2 medical students. The majority of attendings are on the ward service for 4‐week blocks, but some are only on for 1 or 2 weeks. Teams included in the randomization were the General Medicine and Gastroenterology/Cardiology service teams. Half of the General Medicine service attendings are hospitalists. Ward teams were excluded from the study randomization if the attending on the team was on the wards for less than 2 weeks, or if the attending had already been assigned to the experimental group in a previous block, given the influence of having used the card and feedback session previously. Since residents were unaware of the intervention and random assignments were based on attendings, residents could be assigned to the intervention group or the control group on any given inpatient rotation. Therefore, a resident could be in the control group in 1 block and the intervention group in his/her next block on the wards or vice versa, or could be assigned to either the intervention or the control group on more than 1 occasion. Because resident participants were blinded to their team's assignment (as intervention or control) and all surveys were anonymous (tracked as intervention or control by the team name only), it was not possible to exclude residents based on their prior participation or to match the surveys completed by the same residents.
Study Design
We performed a prospective randomized study to evaluate our educational innovation. The unit of randomization was the ward team. For each block, approximately half of the 6-8 eligible teams were randomized to the intervention group and half to the control group. Randomization assignments were completed the day prior to the start of the block using random allocation software, based on the ward team letters (blind to the attending and resident names). Of the 48 possible ward teams (8 teams per block over 6 blocks), 36 were randomized to the intervention or control groups, and 12 were excluded based on the criteria above. Of the 36 teams, 16 (composed of 16 attendings and 48 residents and interns) were randomized to the intervention group, and 20 (composed of 20 attendings and 63 residents and interns) were randomized to the control group.
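The team-level randomization described above can be sketched as follows. This is a minimal illustration only: the study used dedicated random allocation software, and the team letters and seed here are hypothetical.

```python
import random

def randomize_teams(team_letters, seed=None):
    # Shuffle the block's eligible team letters (blind to attending and
    # resident names) and split them roughly in half between the two arms.
    rng = random.Random(seed)
    teams = list(team_letters)
    rng.shuffle(teams)
    half = len(teams) // 2
    return {"intervention": sorted(teams[:half]),
            "control": sorted(teams[half:])}

# Example: one block with 8 eligible ward teams, labeled A-H (hypothetical)
groups = randomize_teams("ABCDEFGH", seed=1)
```

Randomizing by team rather than by individual keeps each attending's whole team in one arm, which matches the unit at which the intervention (the card and session) was delivered.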
The study was blinded such that residents and attendings in the control group were unaware of it. The study was exempted from review by the Mount Sinai Institutional Review Board and Grants and Contracts Office as an evaluation of the effectiveness of an instructional technique in medical education.
Intervention Design
We designed a pocket feedback card to guide a feedback session and assist attendings in giving useful feedback to IM residents on the wards (Figure 1).14 The individual items and categories were adapted from the Core Competencies section of the Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements and revised via the expert consensus of the authors.14 We included 20 items related to resident skills, knowledge, attitudes, and behaviors important to the care of hospitalized patients, grouped under the 6 ACGME core competency domains.14 Many of these items correspond to competencies in the Society of Hospital Medicine (SHM) Core Competencies; in particular, the categories of Systems-Based Practice and Practice-Based Learning mirror competencies in the Healthcare Systems chapter of the SHM Core Competencies.15 Each item used a 5-point Likert scale (1 = very poor, 3 = at expected level, 5 = superior) to evaluate resident performance (Figure 1). We created the card as a directive guide to help attendings deliver specific, constructive feedback in each domain, to be used during a dedicated feedback session in order to overcome the commonly cited barrier of lack of time.
Program Implementation
On the first day of the block, both groups of attendings received the standard inpatient ward orientation given by the program director, including instructions about teaching and administrative responsibilities, and explicit instructions to provide mid‐rotation feedback to residents. Attendings randomized to the intervention group had an additional 5‐minute orientation given by 1 of the investigators. The orientation included a brief discussion on the importance of feedback and an introduction to the items on the card.2 In addition, faculty were instructed to dedicate 1 mid‐rotation attending rounds as a feedback session, to meet individually for 10‐15 minutes with each of the 3‐4 residents on their team, and to use the card to provide feedback on skills in each domain. As noted on the feedback card, if a resident scored less than 3 on a skill set, the attending was instructed to give examples of skills within that domain needing improvement and to offer suggestions for improvement. The intervention group was also asked not to discuss the card or session with others. No other instructions were provided.
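The card's flagging rule (give concrete examples and suggestions for any skill set rated below 3) can be expressed as a small sketch. The 6 ACGME domain names are standard, but the per-domain ratings below are hypothetical, and the actual card rated 20 individual items rather than whole domains.

```python
# The 6 ACGME core competency domains under which the card's items were grouped
ACGME_DOMAINS = [
    "Patient Care",
    "Medical Knowledge",
    "Practice-Based Learning and Improvement",
    "Interpersonal and Communication Skills",
    "Professionalism",
    "Systems-Based Practice",
]

def domains_needing_improvement(ratings):
    # Likert anchors: 1 = very poor, 3 = at expected level, 5 = superior.
    # A score below 3 prompts the attending to give concrete examples of
    # skills needing improvement and suggestions for how to improve.
    return [domain for domain, score in ratings.items() if score < 3]

# Hypothetical ratings for one resident (illustrative only)
ratings = {
    "Patient Care": 4,
    "Medical Knowledge": 3,
    "Practice-Based Learning and Improvement": 2,
    "Interpersonal and Communication Skills": 5,
    "Professionalism": 4,
    "Systems-Based Practice": 3,
}
flagged = domains_needing_improvement(ratings)
```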
Survey Design
At the end of each block, residents and attendings in both groups completed questionnaires to assess satisfaction with, and attitudes toward, feedback (Supporting Information Appendices 1 and 2 in the online version of this article). Survey questions were based on the competency areas included in the feedback card, previously published surveys evaluating feedback interventions,5, 9, 11 and expert opinion. The resident survey was designed to address the impact of feedback on the domains of resident knowledge, clinical and communication skills, and attitudes about feedback from supervisors and peers. We utilized a 5‐point Likert scale including: strongly disagree, disagree, neutral, agree, and strongly agree. The attending survey addressed attendings' satisfaction with feedback encounters and resident performance. At the completion of the study, investigators compared responses in intervention and control groups.
Statistical Analysis
For purposes of analysis, because of the relatively small number of responses for certain answer choices, the Likert scale was converted to a dichotomous variable. Responses of agree and strongly agree were coded as agree; disagree, strongly disagree, and neutral were coded as disagree. Neutral was coded as disagree to avoid overestimating positive attitudes, thereby biasing our results toward the null hypothesis. Differences between groups were analyzed using the chi-square or Fisher's exact test (2-sided).
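The dichotomization and a two-sided Fisher's exact test can be sketched as below. The response lists are hypothetical; the Fisher test is implemented directly (summing the probabilities of all 2x2 tables with the same margins that are no more likely than the observed one) so the example is self-contained.

```python
from math import comb

def dichotomize(responses):
    # Collapse 5-point Likert responses to (agree, disagree) counts; neutral
    # is grouped with disagree to bias toward the null hypothesis.
    agree = {"agree", "strongly agree"}
    n_agree = sum(r in agree for r in responses)
    return n_agree, len(responses) - n_agree

def fisher_exact_two_sided(a, b, c, d):
    # Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical survey responses, for illustration only
intervention = ["strongly agree", "agree", "neutral", "agree", "agree"]
control = ["disagree", "neutral", "agree", "disagree", "strongly disagree"]

a, b = dichotomize(intervention)   # (4, 1)
c, d = dichotomize(control)        # (1, 4)
p = fisher_exact_two_sided(a, b, c, d)
```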
Qualitative Interviews
In order to understand the relative contribution of the feedback card versus the feedback session, we performed a qualitative survey of attendings in the intervention group. Following the conclusion of the study period, we selected a convenience sample of 8 attendings from the intervention group for these brief qualitative interviews. We asked 3 basic questions. Was the intervention of the feedback card and dedicated time for feedback useful? Did you find one component, either the card or the dedicated time for feedback, more useful than the other? Were there any negative effects on patient care, education, or other areas, from using an attending rounds as a feedback session? This data was coded and analyzed for common themes.
RESULTS
During the 6‐month study period, 34 teaching attendings (over 36 attending inpatient blocks) and 93 IM residents (over 111 resident inpatient blocks) participated in the study. Thirty‐four of 36 attending surveys and 96 of 111 resident surveys were completed. The overall survey response rates for residents and attendings were 85% and 94%, respectively. Two attendings participated during 2 separate blocks, first in the control group and then in the intervention group, and 18 residents participated during 2 separate blocks. No attendings or residents participated more than twice.
Resident survey response rate was 81.2% in the intervention group and 87.3% in the control group (Table 1). Residents in the intervention group reported receiving more feedback regarding skills they did well (89.7% vs 63.6%, P = 0.004) and skills needing improvement (51.3% vs 25.5%, P = 0.02) than those in the control group. In addition, more intervention residents reported receiving useful information regarding how to improve their skills (53.8% vs 27.3%, P = 0.01), and reported actually improving both their clinical skills (61.5% vs 27.8%, P = 0.001) and their professionalism/communication skills (51.3% vs 29.1%, P = 0.03) based on feedback received from attendings.
Survey Item | Resident Intervention Agree* % (No.) N = 39 | Resident Control Agree*% (No.) N = 55 | P Value |
---|---|---|---|
| |||
I did NOT receive a sufficient amount of feedback from my attending supervisor(s) this block. | 20.5 (8) | 38.2 (21) | 0.08 |
I received feedback from my attending regarding skills I did well during this block. | 89.7 (35) | 63.6 (35) | 0.004 |
I received feedback from my attending regarding specific skills that needed improvement during this block. | 51.3 (20) | 25.5 (14) | 0.02 |
I received useful information from my attending about how to improve my skills during this block. | 53.8 (21) | 27.3 (15) | 0.01 |
I improved my clinical skills based on feedback I received from my attending this block. | 61.5 (24) | 27.8 (15) | 0.001 |
I improved my professionalism/communication skills based on feedback I received from my attending this block. | 51.3 (20) | 29.1 (16) | 0.03 |
I improved my knowledge base because of feedback I received from my attending this block. | 64.1 (25) | 60.0 (33) | 0.83 |
The feedback I received from my attending this block gave me an overall sense of my performance more than it helped me identify specific areas for improvement. | 64.1 (25) | 65.5 (36) | 1.0 |
Feedback from colleagues (other interns and residents) is more helpful than feedback from attendings. | 41.0 (16) | 43.6 (24) | 0.84 |
Independent of feedback received from others, I am able to identify areas in which I need improvement. | 84.6 (33) | 80.0 (44) | 0.60 |
The attending survey response rates for the intervention and control groups were 100% and 90%, respectively. In general, both groups of attendings reported that they were comfortable giving feedback and that they did, in fact, give feedback in each area during their ward block (Table 2). More intervention attendings felt that at least 1 of their residents improved their professionalism/communication skills based on the feedback given (76.9% vs 31.1%, P = 0.02). There were no other significant differences between the groups of attendings.
Survey Item | Attending Intervention Agree* % (No.) N = 16 | Attending Control Agree* % (No.) N = 18 | P Value |
---|---|---|---|
| |||
Giving feedback to housestaff was DIFFICULT this block. | 6.3 (1) | 16.7 (3) | 0.60 |
I was comfortable giving feedback to my housestaff this block. | 93.8 (15) | 94.4 (17) | 1.00 |
I did NOT give a sufficient amount of feedback to my housestaff this block. | 18.8 (3) | 38.9 (7) | 0.27 |
My skills in giving feedback improved during this block. | 50 (8) | 16.7 (3) | 0.07 |
I gave feedback to housestaff regarding skills they did well during this block. | 100 (16) | 94.4 (17) | 1.00 |
I gave feedback to housestaff which targeted specific areas for their improvement. | 81.3 (13) | 70.6 (12) | 0.69 |
At least one of my housestaff improved his/her clinical skills based on feedback I gave this block. | 68.8 (11) | 47.1 (8) | 0.30 |
At least one of my housestaff improved his/her professionalism/communication skills based on feedback I gave this block. | 76.9 (10) | 31.1 (5) | 0.02 |
At least one of my housestaff improved his/her fund of knowledge based on feedback I gave this block. | 50.0 (8) | 52.9 (9) | 1.00 |
Housestaff found the feedback I gave them useful. | 66.7 (10) | 62.5 (10) | 1.00 |
I find it DIFFICULT to find time during inpatient rotations to give feedback to residents regarding their performance. | 50.0 (8) | 33.3 (6) | 0.49 |
Intervention attendings also shared their attitudes toward the feedback card and session. A majority felt that using 1 attending rounds as a feedback session helped create a dedicated time for giving feedback (68.8%), and that the feedback card helped them to give specific, constructive feedback (62.5%). Most attendings reported they would use the feedback card and session again during future inpatient blocks (81%), and would recommend them to other attendings (75%).
Qualitative data from intervention attending interviews revealed further thoughts about the feedback card and feedback session. Most attendings interviewed (7/8) felt that the card was useful for the structure and topic guidance it provided. Half felt that setting aside time for feedback was the more useful component. The other half reported that, because they usually set aside time for feedback regardless, the card was more useful. None of the attendings felt that the feedback card or session was detrimental for patient care or education, and many said that the intervention had positive effects on these areas. For example, 1 attending said that the session added to patient care because I used particular [patient] cases as examples for giving feedback.
DISCUSSION
In this randomized study, we found that a simple pocket feedback card and dedicated feedback session was acceptable to ward attendings and improved resident satisfaction with feedback. Unlike most prior studies of feedback, we demonstrated more feedback around skills needing improvement, and intervention residents felt the feedback they received helped them improve their skills. Our educational intervention was unique in that it combined a pocket card to structure feedback content and dedicated time to structure the feedback process, to address 2 of the major barriers to giving feedback: lack of time and lack of comfort.
The pocket card itself as a tool for improving feedback is innovative and valuable. As a short but directive guide, the card supports attendings' delivery of relevant and specific feedback about residents' performance, and because it is based on the ACGME competencies, it may help attendings focus feedback on areas in which they will later evaluate residents. The inclusion of a prespecified time for giving feedback was important as well, in that it allowed for face‐to‐face feedback to occur, as opposed to a passing comment after a presentation or brief notes in a written final evaluation. Both the card and the feedback session seemed equally important for the success of this intervention, with attitudes varying based on individual attending preferences. Those who usually set aside time for feedback on their own found the card more useful, whereas those who had more trouble finding time for feedback found the specific session more useful. Most attendings found the intervention as a whole helpful, and without any detrimental effects on patient care or education. The card and session may be particularly valuable for hospital attendings, given their growing presence as teachers and supervisors for residents, and their busy days on the wards.
Our study results have important implications for resident training in the hospital. Improving resident receipt of feedback about strengths and weaknesses is an ACGME training requirement, and specific guidance about how to improve skills is critical for focusing improvement efforts. Previous studies have demonstrated that directive feedback in medical training can lead to a variety of performance improvements, including better evaluations by other professionals,9, 16 and objective improvements in resident communication skills,17 chart documentation,18 and clinical management of patients.11, 15, 19 By improving the quality of feedback across several domains and facilitating the feedback process, our intervention may lead to similar improvements. Future studies should examine the global impact of guided feedback as in our study. Perhaps most importantly, attendings found the intervention acceptable and would recommend its use, implying longer term sustainability of its integration into the hospital routine.
One strength of our study was its prospective randomized design. Despite the importance of rigor in medical education research, there remains a paucity of randomized studies evaluating educational interventions for residents in inpatient settings. Few studies of feedback interventions have used randomized designs,5, 6, 11 and only one has examined a feedback intervention in a randomized fashion in the inpatient setting.12 That study evaluated a 20‐minute educational session and a reminder card for supervising attendings, intended to improve written and verbal feedback to residents; it modestly increased the amount of verbal feedback given to residents but, unlike our study, did not increase the proportion of residents who reported receiving mid‐rotation feedback or feedback overall.12
There were several important limitations to our study. First, because this was a single‐institution study, we achieved only modest sample sizes, particularly in the attending groups, and were unable to assess all of the differences in attending attitudes related to feedback. Second, control and intervention participants were on service simultaneously, which may have led to contamination of the control group and an underestimation of the true impact of our intervention. Because residents were not restricted to 1 study group on 1 occasion (18 of the 93 residents participated during 2 separate blocks), our results may be biased. In particular, residents who had the intervention first and were subsequently in the control group may have rated the control experience worse than they would have otherwise, creating a bias in favor of a positive result for our intervention. Nonetheless, we believe this situation was uncommon and the potential associated bias minimal. Third, this study assessed attitudes related to feedback and self‐reported knowledge and skills, but did not directly assess resident knowledge, skills, or patient outcomes. We recognize the importance of these outcomes and hope that future interventions can determine these important downstream effects of feedback. We were also unable to assess the card and session's impact on attendings' comfort with feedback, because most attendings in both groups reported feeling comfortable giving feedback; this may indicate that attendings truly are comfortable giving feedback, or may reflect some element of social desirability bias. Finally, we designed an intervention that combined the pocket card and dedicated feedback time, and did not quantitatively examine the effect of either component alone; it is unclear whether offering the feedback card without protected time, or protected time without a guide, would have improved feedback on the wards.
However, qualitative data from our study support the use of both components, and implementing the 2 components together is feasible in any inpatient teaching setting.
Despite these limitations, protected time for feedback guided by a pocket feedback card is a simple intervention that appears to improve the quantity and quality of feedback for ward residents and to guide them in improving their performance. Our low‐intensity intervention helped attendings give residents the tools to improve their clinical and communication skills. An opportunity to make a positive impact on resident education with such a small intervention is rare. A feedback card with protected feedback time could be easily implemented in any training program and is a valuable tool for busy hospitalists, who increasingly supervise residents on inpatient rotations.
REFERENCES
1. Feedback in clinical medical education. JAMA. 1983;250(6):777–781.
2. Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998;13(2):111–116.
3. Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7. Med Teach. 2006;28(2):117–128.
4. Missed opportunities: a descriptive assessment of teaching and attitudes regarding communication skills in a surgical residency. Curr Surg. 2006;63(6):401–409.
5. Impact of a 360‐degree professionalism assessment on faculty comfort and skills in feedback delivery. J Gen Intern Med. 2008;23(7):969–972.
6. Daily encounter cards facilitate competency‐based feedback while leniency bias persists. CJEM. 2008;10(1):44–50.
7. Teaching compassion and respect. Attending physicians' responses to problematic behaviors. J Gen Intern Med. 1999;14(1):49–55.
8. Faculty and the observation of trainees' clinical skills: problems and opportunities. Acad Med. 2004;79(1):16–22.
9. Direct observation of residents in the emergency department: a structured educational program. Acad Emerg Med. 2009;16(4):343–351.
10. Evaluation of a novel assessment form for observing medical residents: a randomised, controlled trial. Med Educ. 2008;42(12):1234–1242.
11. Resident evaluations: the use of daily evaluation forms in rheumatology ambulatory care. J Rheumatol. 2009;36(6):1298–1303.
12. Effectiveness of a focused educational intervention on resident evaluations from faculty: a randomized controlled trial. J Gen Intern Med. 2001;16(7):427–434.
13. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 2004;140(11):874–881.
14. Internal Medicine Program Requirements. ACGME. July 1, 2009. Available at: http://www.acgme.org/acWebsite/downloads/RRC_progReq/140_internal_medicine_07012009.pdf. Accessed November 8, 2009.
15. How to use the core competencies in hospital medicine: a framework for curriculum development. J Hosp Med. 2006;1(suppl 1):57–67.
16. Debriefing in the intensive care unit: a feedback tool to facilitate bedside teaching. Crit Care Med. 2007;35(3):738–754.
17. Use of an innovative video feedback technique to enhance communication skills training. Med Educ. 2004;38(2):145–157.
18. The impact of feedback to medical housestaff on chart documentation and quality of care in the outpatient setting. J Gen Intern Med. 1997;12(6):352–356.
19. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5 pt 2):558–561.
Feedback has long been recognized as pivotal to the attainment of clinical acumen and skills in medical training.1 Formative feedback can give trainees insight into their strengths and weaknesses, and provide them with clear goals and methods to attain those goals.1, 2 In fact, feedback given regularly over time by a respected figure has been shown to improve physician performance.3 However, most faculty are not trained to provide effective feedback. As a result, supervisors often believe they are giving more feedback than trainees believe they are receiving, and residents receive little feedback that they perceive as useful.4 Most residents receive little to no feedback on their communication skills4 or professionalism,5 and rarely receive corrective feedback.6, 7
Faculty may fail to give feedback to residents for a number of reasons. The barriers most commonly cited in the literature are discomfort with criticizing residents,6, 7 lack of time,4 and lack of direct observation of residents in clinical settings.8-10 Several studies have examined tools to guide feedback and address the barrier of discomfort with criticism.6, 7, 11 Some showed improvements in overall feedback, though supervisors often gave only positive feedback and avoided giving feedback about weaknesses.6, 7, 11 Despite the recognition of lack of time as a barrier to feedback,4 most studies of feedback interventions thus far have not included setting aside time for the feedback to occur.6, 7, 11, 12 Finally, a number of studies coupled objective structured clinical examinations (OSCEs) with immediate feedback to improve direct observation of residents, with success in improving feedback related to the encounter.9, 10, 13 To fill these gaps, our study targeted 2 specific barriers to feedback for residents: lack of time and discomfort with giving feedback.
The aim of this study was to improve Internal Medicine (IM) residents' and attendings' experiences with feedback on the wards using a pocket card and a dedicated feedback session. We developed and evaluated the pocket feedback card and session for faculty to improve the quality and frequency of their feedback to residents in the inpatient setting. We performed a randomized trial to evaluate our intervention. We hypothesized that the intervention would: 1) improve the quality and quantity of attendings' feedback given to IM ward residents; and 2) improve attendings' comfort with feedback delivery on the wards.
PARTICIPANTS AND METHODS
Setting
The study was performed at Mount Sinai Medical Center in New York City, New York, between July 2008 and January 2009.
Participants
Participants in this study were IM residents and ward teaching attendings on inpatient ward teams at Mount Sinai Medical Center from July 2008 to January 2009. There are 12 ward teams on 3 inpatient services (each service has 4 teams) during each block at our hospital. Ward teams are made up of 1 teaching attending, 1 resident, 1 to 3 interns, and 1 to 2 medical students. The majority of attendings are on the ward service for 4‐week blocks, but some are on for only 1 or 2 weeks. Teams included in the randomization were the General Medicine and Gastroenterology/Cardiology service teams. Half of the General Medicine service attendings are hospitalists. Ward teams were excluded from the randomization if the attending was on the wards for less than 2 weeks, or if the attending had already been assigned to the intervention group in a previous block, given the carryover influence of having used the card and feedback session previously. Since residents were unaware of the intervention and random assignments were based on attendings, residents could be assigned to the intervention or control group on any given inpatient rotation. Therefore, a resident could be in the control group in 1 block and the intervention group in his/her next block on the wards, or vice versa, or could be assigned to either group on more than 1 occasion. Because resident participants were blinded to their team's assignment (as intervention or control) and all surveys were anonymous (tracked as intervention or control by team name only), it was not possible to exclude residents based on prior participation or to match surveys completed by the same residents.
Study Design
We performed a prospective randomized study to evaluate our educational innovation. The unit of randomization was the ward team. For each block, approximately half of the 6-8 eligible teams were randomized to the intervention group and half to the control group. Randomization assignments were completed the day prior to the start of the block using random‐allocation software based on the ward team letters (blind to attending and resident names). Of the 48 possible ward teams (8 teams per block over 6 blocks), 36 were randomized to the intervention or control groups, and 12 were excluded based on the criteria above. Of the 36 teams, 16 (composed of 16 attendings and 48 residents and interns) were randomized to the intervention group, and 20 (composed of 20 attendings and 63 residents and interns) were randomized to the control group.
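The team-level allocation described above can be sketched in a few lines. This is an illustrative, stdlib-only reconstruction, not the actual allocation software used in the study; the team letters, the `allocate` helper, and the seed are hypothetical.

```python
import random

def allocate(teams, seed=None):
    """Randomly split eligible ward teams into intervention and control groups.
    Only team letters are used, so the allocation is blind to attending and
    resident names; odd counts put the extra team in the control group."""
    rng = random.Random(seed)
    shuffled = list(teams)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return sorted(shuffled[:half]), sorted(shuffled[half:])

# Hypothetical team letters for one block with 8 eligible teams.
intervention, control = allocate(["A", "B", "C", "D", "E", "F", "G", "H"], seed=2008)
print("Intervention:", intervention)
print("Control:", control)
```

Randomizing by team letter rather than by name mirrors the blinding described in the text.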
The study was blinded such that residents and attendings in the control group were unaware of the study. The study was exempt from IRB review by the Mount Sinai Institutional Review Board, and Grants and Contracts Office, as an evaluation of the effectiveness of an instructional technique in medical education.
Intervention Design
We designed a pocket feedback card to guide a feedback session and assist attendings in giving useful feedback to IM residents on the wards (Figure 1).14 The individual items and categories were adapted from the Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements Core Competencies section and were revised via the expert consensus of the authors.14 We included 20 items related to resident skills, knowledge, attitudes, and behaviors important to the care of hospitalized patients, grouped under the 6 ACGME core competency domains.14 Many of these items correspond to competencies in the Society of Hospital Medicine (SHM) Core Competencies; in particular, the categories of Systems‐Based Practice and Practice‐Based Learning mirror competencies in the SHM Core Competencies Healthcare Systems chapter.15 Each item utilized a 5‐point Likert scale (1 = very poor, 3 = at expected level, 5 = superior) to evaluate resident performance (Figure 1). We created this card to serve as a directive and specific guide for attendings to provide feedback about specific domains and to give more constructive feedback. The card was to be used during a specific dedicated feedback session in order to overcome the commonly cited barrier of lack of time.
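The card's structure can be sketched as a small data model. Only the 6 ACGME domains, the 5-point scale, and the below-3 rule come from the text; the item text and the names `CardItem` and `items_needing_improvement` are ours, for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

# The 6 ACGME core competency domains under which the card's 20 items were grouped.
ACGME_DOMAINS = [
    "Patient Care",
    "Medical Knowledge",
    "Practice-Based Learning and Improvement",
    "Interpersonal and Communication Skills",
    "Professionalism",
    "Systems-Based Practice",
]

@dataclass
class CardItem:
    domain: str                   # one of ACGME_DOMAINS
    skill: str                    # item text (hypothetical; the real 20 items are on the card)
    score: Optional[int] = None   # 1 = very poor, 3 = at expected level, 5 = superior

def items_needing_improvement(items: List[CardItem]) -> List[CardItem]:
    """Per the card's rule: any item scored below 3 should prompt the attending
    to give concrete examples within that domain and suggestions for improvement."""
    return [i for i in items if i.score is not None and i.score < 3]
```

A score of 2 on, say, a Patient Care item would be flagged for discussion, while scores of 3 or higher would not.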
Program Implementation
On the first day of the block, both groups of attendings received the standard inpatient ward orientation given by the program director, including instructions about teaching and administrative responsibilities, and explicit instructions to provide mid‐rotation feedback to residents. Attendings randomized to the intervention group had an additional 5‐minute orientation given by 1 of the investigators. The orientation included a brief discussion on the importance of feedback and an introduction to the items on the card.2 In addition, faculty were instructed to dedicate 1 mid‐rotation attending rounds as a feedback session, to meet individually for 10‐15 minutes with each of the 3‐4 residents on their team, and to use the card to provide feedback on skills in each domain. As noted on the feedback card, if a resident scored less than 3 on a skill set, the attending was instructed to give examples of skills within that domain needing improvement and to offer suggestions for improvement. The intervention group was also asked not to discuss the card or session with others. No other instructions were provided.
Survey Design
At the end of each block, residents and attendings in both groups completed questionnaires to assess satisfaction with, and attitudes toward, feedback (Supporting Information Appendices 1 and 2 in the online version of this article). Survey questions were based on the competency areas included in the feedback card, previously published surveys evaluating feedback interventions,5, 9, 11 and expert opinion. The resident survey was designed to address the impact of feedback on the domains of resident knowledge, clinical and communication skills, and attitudes about feedback from supervisors and peers. We utilized a 5‐point Likert scale including: strongly disagree, disagree, neutral, agree, and strongly agree. The attending survey addressed attendings' satisfaction with feedback encounters and resident performance. At the completion of the study, investigators compared responses in intervention and control groups.
Statistical Analysis
For purposes of analysis, due to the relatively small number of responses for certain answer choices, the Likert scale was converted to a dichotomous variable. The responses of agree and strongly agree were coded as agree; disagree, strongly disagree, and neutral were coded as disagree. Neutral was coded as disagree in order to avoid overestimating positive attitudes and, in effect, to bias our results toward the null hypothesis. Differences between groups were analyzed using the chi‐square test or Fisher's exact test (2‐sided), as appropriate.
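As a concrete illustration of this analysis, the sketch below collapses 5-point Likert counts as described and reimplements a 2-sided Fisher's exact test with the standard library. It is not the authors' code; the example counts are taken from the "skills I did well" row of Table 1 (35/39 intervention vs 35/55 control agreeing), and the function names are ours.

```python
from math import comb

def dichotomize(counts):
    """Collapse 5-point Likert counts to (agree, not-agree), coding neutral
    with disagree to bias toward the null, as described above."""
    agree = counts["agree"] + counts["strongly agree"]
    not_agree = counts["strongly disagree"] + counts["disagree"] + counts["neutral"]
    return agree, not_agree

def fisher_exact_2sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables no more likely
    than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(x):  # probability that the top-left cell equals x under the null
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Table 1 row "skills I did well": intervention 35/39 agree vs control 35/55 agree.
p = fisher_exact_2sided(35, 39 - 35, 35, 55 - 35)
print(f"P = {p:.4f}")  # a small value, on the order of the reported P = 0.004
```

Dichotomizing first and then testing the resulting 2x2 table is what makes the exact test applicable despite the small counts in some cells.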
Qualitative Interviews
To understand the relative contributions of the feedback card and the feedback session, we performed a qualitative survey of attendings in the intervention group. Following the conclusion of the study period, we selected a convenience sample of 8 attendings from the intervention group for brief qualitative interviews built around 3 basic questions: Was the combination of the feedback card and dedicated time for feedback useful? Was one component, either the card or the dedicated time for feedback, more useful than the other? Were there any negative effects on patient care, education, or other areas from using an attending rounds as a feedback session? These data were coded and analyzed for common themes.
RESULTS
During the 6‐month study period, 34 teaching attendings (over 36 attending inpatient blocks) and 93 IM residents (over 111 resident inpatient blocks) participated in the study. Thirty‐four of 36 attending surveys and 96 of 111 resident surveys were completed. The overall survey response rates for residents and attendings were 85% and 94%, respectively. Two attendings participated during 2 separate blocks, first in the control group and then in the intervention group, and 18 residents participated during 2 separate blocks. No attendings or residents participated more than twice.
Resident survey response rate was 81.2% in the intervention group and 87.3% in the control group (Table 1). Residents in the intervention group reported receiving more feedback regarding skills they did well (89.7% vs 63.6%, P = 0.004) and skills needing improvement (51.3% vs 25.5%, P = 0.02) than those in the control group. In addition, more intervention residents reported receiving useful information regarding how to improve their skills (53.8% vs 27.3%, P = 0.01), and reported actually improving both their clinical skills (61.5% vs 27.8%, P = 0.001) and their professionalism/communication skills (51.3% vs 29.1%, P = 0.03) based on feedback received from attendings.
Table 1. Resident Survey Responses

Survey Item | Intervention Agree,* % (No.), N = 39 | Control Agree,* % (No.), N = 55 | P Value
---|---|---|---
I did NOT receive a sufficient amount of feedback from my attending supervisor(s) this block. | 20.5 (8) | 38.2 (21) | 0.08
I received feedback from my attending regarding skills I did well during this block. | 89.7 (35) | 63.6 (35) | 0.004
I received feedback from my attending regarding specific skills that needed improvement during this block. | 51.3 (20) | 25.5 (14) | 0.02
I received useful information from my attending about how to improve my skills during this block. | 53.8 (21) | 27.3 (15) | 0.01
I improved my clinical skills based on feedback I received from my attending this block. | 61.5 (24) | 27.8 (15) | 0.001
I improved my professionalism/communication skills based on feedback I received from my attending this block. | 51.3 (20) | 29.1 (16) | 0.03
I improved my knowledge base because of feedback I received from my attending this block. | 64.1 (25) | 60.0 (33) | 0.83
The feedback I received from my attending this block gave me an overall sense of my performance more than it helped me identify specific areas for improvement. | 64.1 (25) | 65.5 (36) | 1.0
Feedback from colleagues (other interns and residents) is more helpful than feedback from attendings. | 41.0 (16) | 43.6 (24) | 0.84
Independent of feedback received from others, I am able to identify areas in which I need improvement. | 84.6 (33) | 80.0 (44) | 0.60

*Agree includes responses of agree and strongly agree.
The attending survey response rates for the intervention and control groups were 100% and 90%, respectively. In general, both groups of attendings reported that they were comfortable giving feedback and that they did, in fact, give feedback in each area during their ward block (Table 2). More intervention attendings felt that at least 1 of their residents improved their professionalism/communication skills based on the feedback given (76.9% vs 31.1%, P = 0.02). There were no other significant differences between the groups of attendings.
Table 2. Attending Survey Responses

Survey Item | Intervention Agree,* % (No.), N = 16 | Control Agree,* % (No.), N = 18 | P Value
---|---|---|---
Giving feedback to housestaff was DIFFICULT this block. | 6.3 (1) | 16.7 (3) | 0.60
I was comfortable giving feedback to my housestaff this block. | 93.8 (15) | 94.4 (17) | 1.00
I did NOT give a sufficient amount of feedback to my housestaff this block. | 18.8 (3) | 38.9 (7) | 0.27
My skills in giving feedback improved during this block. | 50.0 (8) | 16.7 (3) | 0.07
I gave feedback to housestaff regarding skills they did well during this block. | 100 (16) | 94.4 (17) | 1.00
I gave feedback to housestaff which targeted specific areas for their improvement. | 81.3 (13) | 70.6 (12) | 0.69
At least one of my housestaff improved his/her clinical skills based on feedback I gave this block. | 68.8 (11) | 47.1 (8) | 0.30
At least one of my housestaff improved his/her professionalism/communication skills based on feedback I gave this block. | 76.9 (10) | 31.1 (5) | 0.02
At least one of my housestaff improved his/her fund of knowledge based on feedback I gave this block. | 50.0 (8) | 52.9 (9) | 1.00
Housestaff found the feedback I gave them useful. | 66.7 (10) | 62.5 (10) | 1.00
I find it DIFFICULT to find time during inpatient rotations to give feedback to residents regarding their performance. | 50.0 (8) | 33.3 (6) | 0.49

*Agree includes responses of agree and strongly agree.
Intervention attendings also shared their attitudes toward the feedback card and session. A majority felt that using 1 attending rounds as a feedback session helped create a dedicated time for giving feedback (68.8%), and that the feedback card helped them to give specific, constructive feedback (62.5%). Most attendings reported they would use the feedback card and session again during future inpatient blocks (81%), and would recommend them to other attendings (75%).
Qualitative data from intervention attending interviews revealed further thoughts about the feedback card and feedback session. Most attendings interviewed (7/8) felt that the card was useful for the structure and topic guidance it provided. Half felt that setting aside time for feedback was the more useful component. The other half reported that, because they usually set aside time for feedback regardless, the card was more useful. None of the attendings felt that the feedback card or session was detrimental to patient care or education, and many said that the intervention had positive effects on these areas. For example, 1 attending said that the session "added to patient care because I used particular [patient] cases as examples for giving feedback."
DISCUSSION
In this randomized study, we found that a simple pocket feedback card and dedicated feedback session was acceptable to ward attendings and improved resident satisfaction with feedback. Unlike most prior studies of feedback, we demonstrated more feedback around skills needing improvement, and intervention residents felt the feedback they received helped them improve their skills. Our educational intervention was unique in that it combined a pocket card to structure feedback content and dedicated time to structure the feedback process, to address 2 of the major barriers to giving feedback: lack of time and lack of comfort.
The pocket card itself as a tool for improving feedback is innovative and valuable. As a short but directive guide, the card supports attendings' delivery of relevant and specific feedback about residents' performance, and because it is based on the ACGME competencies, it may help attendings focus feedback on areas in which they will later evaluate residents. The inclusion of a prespecified time for giving feedback was important as well, in that it allowed for face‐to‐face feedback to occur, as opposed to a passing comment after a presentation or brief notes in a written final evaluation. Both the card and the feedback session seemed equally important for the success of this intervention, with attitudes varying based on individual attending preferences. Those who usually set aside time for feedback on their own found the card more useful, whereas those who had more trouble finding time for feedback found the specific session more useful. Most attendings found the intervention as a whole helpful, and without any detrimental effects on patient care or education. The card and session may be particularly valuable for hospital attendings, given their growing presence as teachers and supervisors for residents, and their busy days on the wards.
Our study results have important implications for resident training in the hospital. Improving resident receipt of feedback about strengths and weaknesses is an ACGME training requirement, and specific guidance about how to improve skills is critical for focusing improvement efforts. Previous studies have demonstrated that directive feedback in medical training can lead to a variety of performance improvements, including better evaluations by other professionals,9, 16 and objective improvements in resident communication skills,17 chart documentation,18 and clinical management of patients.11, 15, 19 By improving the quality of feedback across several domains and facilitating the feedback process, our intervention may lead to similar improvements. Future studies should examine the global impact of guided feedback as in our study. Perhaps most importantly, attendings found the intervention acceptable and would recommend its use, implying longer term sustainability of its integration into the hospital routine.
One strength of our study was its prospective randomized design. Despite the importance of rigor in medical education research, there remains a paucity of randomized studies to evaluate educational interventions for residents in inpatient settings. Few studies of feedback interventions in particular have performed randomized trials,5, 6, 11 and only one has examined a feedback intervention in a randomized fashion in the inpatient setting.12 This evaluation of a 20‐minute intervention, and a reminder card for supervising attendings to improve written and verbal feedback to residents, modestly improved the amount of verbal feedback given to residents, but did not impact the number of residents receiving mid‐rotation feedback or feedback overall as our study did by report.12
There were several important limitations to our study. First, because this was a single institution study, we only achieved modest sample sizes, particularly in the attending groups, and were unable to assess all of the differences in attending attitudes related to feedback. Second, control and intervention participants were on service simultaneously, which may have led to contamination of the control group and an underestimation of the true impact of our intervention. Since residents were not exclusive to 1 study group on 1 occasion (18 of the 93 residents participated during 2 separate blocks), our results may be biased. In particular, those residents who had the intervention first, and were subsequently in the control group, may have rated the control experience worse than they would have otherwise, creating a bias in favor of a positive result for our intervention. Nonetheless, we believe this situation was uncommon and the potential associated bias minimal. Further, this study assessed attitudes related to feedback and self‐reported knowledge and skills, but did not directly assess resident knowledge, skills, or patient outcomes. We recognize the importance of these outcomes and hope that future interventions can determine these important downstream effects of feedback. We were also unable to assess the card and session's impact on attendings' comfort with feedback, because most attendings in both groups reported feeling comfortable giving feedback. This result may indicate that attendings actually are comfortable giving feedback, or may suggest some element of social desirability bias. Finally, in this study, we designed an intervention which combined the pocket card and dedicated feedback time. We did not quantitatively examine the effect of either component alone, and it is unclear if offering the feedback card without protected time or offering protected time without a guide would have impacted feedback on the wards. 
However, qualitative data from our study support the use of both components, and implementing the 2 components together is feasible in any inpatient teaching setting.
Despite these limitations, protected time for feedback guided by a pocket feedback card is a simple intervention that appears to improve feedback quantity and quality for ward residents, and guides them to improve their performance. Our low‐intensity intervention helped attendings give residents the tools to improve their clinical and communication skills. An opportunity to make a positive impact on resident education with such a small intervention is rare. The use of a feedback card with protected feedback time could be easily implemented in any training program, and is a valuable tool for busy hospitalists who are more commonly supervising residents on their inpatient rotations.
Feedback has long been recognized as pivotal to the attainment of clinical acumen and skills in medical training.1 Formative feedback can give trainees insight into their strengths and weaknesses, and provide them with clear goals and methods to attain those goals.1, 2 In fact, feedback given regularly over time by a respected figure has been shown to improve physician performance.3 However, most faculty are not trained to provide effective feedback. As a result, supervisors often believe they are giving more feedback than trainees believe they are receiving, and residents receive little feedback that they perceive as useful.4 Most residents receive little to no feedback on their communication skills4 or professionalism,5 and rarely receive corrective feedback.6, 7
Faculty may fail to give feedback to residents for a number of reasons. The barriers most commonly cited in the literature are discomfort with criticizing residents,6, 7 lack of time,4 and lack of direct observation of residents in clinical settings.8–10 Several studies have examined tools to guide feedback and address the barrier of discomfort with criticism.6, 7, 11 Some showed improvements in overall feedback, though often supervisors gave only positive feedback and avoided giving feedback about weaknesses.6, 7, 11 Despite the recognition of lack of time as a barrier to feedback,4 most studies of feedback interventions thus far have not set aside dedicated time for the feedback to occur.6, 7, 11, 12 Finally, a number of studies utilized objective structured clinical examinations (OSCEs) coupled with immediate feedback to improve direct observation of residents, with success in improving feedback related to the encounter.9, 10, 13 To fill these gaps, our study targeted 2 specific barriers to feedback for residents: lack of time and discomfort with giving feedback.
The aim of this study was to improve Internal Medicine (IM) residents' and attendings' experiences with feedback on the wards using a pocket card and a dedicated feedback session. We developed and evaluated the pocket feedback card and session for faculty to improve the quality and frequency of their feedback to residents in the inpatient setting. We performed a randomized trial to evaluate our intervention. We hypothesized that the intervention would: 1) improve the quality and quantity of attendings' feedback given to IM ward residents; and 2) improve attendings' comfort with feedback delivery on the wards.
PARTICIPANTS AND METHODS
Setting
The study was performed at Mount Sinai Medical Center in New York City, New York, between July 2008 and January 2009.
Participants
Participants in this study were IM residents and ward teaching attendings on inpatient ward teams at Mount Sinai Medical Center from July 2008 to January 2009. There are 12 ward teams on 3 inpatient services (each service has 4 teams) during each block at our hospital. Ward teams are made up of 1 teaching attending, 1 resident, 1 to 3 interns, and 1 to 2 medical students. The majority of attendings are on the ward service for 4‐week blocks, but some are only on for 1 or 2 weeks. Teams included in the randomization were the General Medicine and Gastroenterology/Cardiology service teams. Half of the General Medicine service attendings are hospitalists. Ward teams were excluded from the study randomization if the attending on the team was on the wards for less than 2 weeks, or if the attending had already been assigned to the experimental group in a previous block, given the influence of having used the card and feedback session previously. Since residents were unaware of the intervention and random assignments were based on attendings, residents could be assigned to the intervention group or the control group on any given inpatient rotation. Therefore, a resident could be in the control group in 1 block and the intervention group in his/her next block on the wards or vice versa, or could be assigned to either the intervention or the control group on more than 1 occasion. Because resident participants were blinded to their team's assignment (as intervention or control) and all surveys were anonymous (tracked as intervention or control by the team name only), it was not possible to exclude residents based on their prior participation or to match the surveys completed by the same residents.
Study Design
We performed a prospective randomized study to evaluate our educational innovation. The unit of randomization was the ward team. For each block, approximately half of the 6–8 eligible teams were randomized to the intervention group and half to the control group. Randomization assignments were completed the day prior to the start of the block using random allocation software based on the ward team letters (blind to the attending and resident names). Of the 48 possible ward teams (8 teams per block over 6 blocks), 36 were randomized to the intervention or control groups, and 12 were excluded based on the criteria above. Of the 36 teams, 16 (composed of 16 attendings and 48 residents and interns) were randomized to the intervention group, and 20 (composed of 20 attendings and 63 residents and interns) were randomized to the control group.
The study was blinded such that residents and attendings in the control group were unaware of the study. The study was exempt from IRB review by the Mount Sinai Institutional Review Board, and Grants and Contracts Office, as an evaluation of the effectiveness of an instructional technique in medical education.
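As a rough sketch, the team-level randomization described above might look like the following. The study's actual random allocation software is not specified, so this stdlib version is an assumption, as are the seed and the team labels:

```python
import random

def randomize_teams(team_letters, seed=None):
    # Minimal sketch of team-level randomization by ward-team letter,
    # blind to attending and resident names. Shuffle the letters and
    # assign roughly half of the eligible teams to each arm.
    rng = random.Random(seed)
    teams = list(team_letters)
    rng.shuffle(teams)
    half = len(teams) // 2
    return {"intervention": sorted(teams[:half]),
            "control": sorted(teams[half:])}

# Example: one block with 8 ward teams labeled A-H (labels assumed).
groups = randomize_teams("ABCDEFGH", seed=2008)
```

Because assignment is keyed to team letters rather than individuals, a resident can land in either arm on any given rotation, which is why the same resident could appear in both groups across blocks.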
Intervention Design
We designed a pocket feedback card to guide a feedback session and assist attendings in giving useful feedback to IM residents on the wards (Figure 1).14 The individual items and categories were adapted from the Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements Core Competencies section and were revised via the expert consensus of the authors.14 We included 20 items related to resident skills, knowledge, attitudes, and behaviors important to the care of hospitalized patients, grouped under the 6 ACGME core competency domains.14 Many of these items correspond to competencies in the Society of Hospital Medicine (SHM) Core Competencies; in particular, the categories of Systems‐Based Practice and Practice‐Based Learning mirror competencies in the SHM Core Competencies Healthcare Systems chapter.15 Each item utilized a 5‐point Likert scale (1 = very poor, 3 = at expected level, 5 = superior) to evaluate resident performance (Figure 1). We created this card to serve as a directive and specific guide for attendings to provide feedback about specific domains and to give more constructive feedback. The card was to be used during a specific dedicated feedback session in order to overcome the commonly cited barrier of lack of time.
Program Implementation
On the first day of the block, both groups of attendings received the standard inpatient ward orientation given by the program director, including instructions about teaching and administrative responsibilities, and explicit instructions to provide mid‐rotation feedback to residents. Attendings randomized to the intervention group had an additional 5‐minute orientation given by 1 of the investigators. The orientation included a brief discussion on the importance of feedback and an introduction to the items on the card.2 In addition, faculty were instructed to dedicate 1 mid‐rotation attending rounds as a feedback session, to meet individually for 10‐15 minutes with each of the 3‐4 residents on their team, and to use the card to provide feedback on skills in each domain. As noted on the feedback card, if a resident scored less than 3 on a skill set, the attending was instructed to give examples of skills within that domain needing improvement and to offer suggestions for improvement. The intervention group was also asked not to discuss the card or session with others. No other instructions were provided.
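The card's flagging rule, giving concrete examples and suggestions for any skill set scored below 3, can be illustrated with a small sketch. The item names and the helper function here are hypothetical, not taken from the card itself:

```python
# The card's 5-point scale anchors, per the study description.
SCALE = {1: "very poor", 3: "at expected level", 5: "superior"}

def items_needing_improvement(ratings, threshold=3):
    # Return the items scored below the expected level; for each of
    # these, the attending was instructed to give examples of skills
    # needing improvement and suggestions for how to improve.
    return [item for item, score in ratings.items() if score < threshold]

# Illustrative ratings for one resident (item names are assumptions).
flagged = items_needing_improvement(
    {"oral presentations": 2, "differential diagnosis": 4, "chart notes": 3})
```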
Survey Design
At the end of each block, residents and attendings in both groups completed questionnaires to assess satisfaction with, and attitudes toward, feedback (Supporting Information Appendices 1 and 2 in the online version of this article). Survey questions were based on the competency areas included in the feedback card, previously published surveys evaluating feedback interventions,5, 9, 11 and expert opinion. The resident survey was designed to address the impact of feedback on the domains of resident knowledge, clinical and communication skills, and attitudes about feedback from supervisors and peers. We utilized a 5‐point Likert scale including: strongly disagree, disagree, neutral, agree, and strongly agree. The attending survey addressed attendings' satisfaction with feedback encounters and resident performance. At the completion of the study, investigators compared responses in intervention and control groups.
Statistical Analysis
For purposes of analysis, due to the relatively small number of responses for certain answer choices, the Likert scale was converted to a dichotomous variable. The responses of agree and strongly agree were coded as agree; disagree, strongly disagree, and neutral were coded as disagree. Neutral was coded as disagree in order to avoid overestimating positive attitudes and, in effect, to bias our results toward the null hypothesis. Differences between groups were analyzed using the chi-square or Fisher's exact test (2-sided).
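The dichotomization and group comparison can be sketched as follows. The counts reused here are the reported intervention vs control numbers for feedback on skills needing improvement (20/39 vs 14/55), and the hand-rolled Fisher routine is a stdlib illustration, not the authors' analysis code:

```python
from math import comb

AGREE = {"agree", "strongly agree"}

def dichotomize(responses):
    # Collapse the 5-point Likert scale as in the analysis: agree and
    # strongly agree count as "agree"; neutral, disagree, and strongly
    # disagree all count as "disagree", biasing toward the null.
    n_agree = sum(r in AGREE for r in responses)
    return n_agree, len(responses) - n_agree

def fisher_exact_2x2(a, b, c, d):
    # Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    # sum the hypergeometric probabilities of every table no more likely
    # than the observed one (the standard two-sided definition).
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Example: intervention 20/39 agree vs control 14/55 agree.
p = fisher_exact_2x2(20, 39 - 20, 14, 55 - 14)
```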
Qualitative Interviews
In order to understand the relative contribution of the feedback card versus the feedback session, we performed a qualitative survey of attendings in the intervention group. Following the conclusion of the study period, we selected a convenience sample of 8 attendings from the intervention group for these brief qualitative interviews. We asked 3 basic questions. Was the intervention of the feedback card and dedicated time for feedback useful? Did you find one component, either the card or the dedicated time for feedback, more useful than the other? Were there any negative effects on patient care, education, or other areas, from using an attending rounds as a feedback session? These data were coded and analyzed for common themes.
RESULTS
During the 6‐month study period, 34 teaching attendings (over 36 attending inpatient blocks) and 93 IM residents (over 111 resident inpatient blocks) participated in the study. Thirty‐four of 36 attending surveys and 96 of 111 resident surveys were completed. The overall survey response rates for residents and attendings were 85% and 94%, respectively. Two attendings participated during 2 separate blocks, first in the control group and then in the intervention group, and 18 residents participated during 2 separate blocks. No attendings or residents participated more than twice.
Resident survey response rate was 81.2% in the intervention group and 87.3% in the control group (Table 1). Residents in the intervention group reported receiving more feedback regarding skills they did well (89.7% vs 63.6%, P = 0.004) and skills needing improvement (51.3% vs 25.5%, P = 0.02) than those in the control group. In addition, more intervention residents reported receiving useful information regarding how to improve their skills (53.8% vs 27.3%, P = 0.01), and reported actually improving both their clinical skills (61.5% vs 27.8%, P = 0.001) and their professionalism/communication skills (51.3% vs 29.1%, P = 0.03) based on feedback received from attendings.
Survey Item | Resident Intervention Agree* % (No.) N = 39 | Resident Control Agree*% (No.) N = 55 | P Value |
---|---|---|---|
I did NOT receive a sufficient amount of feedback from my attending supervisor(s) this block. | 20.5 (8) | 38.2 (21) | 0.08 |
I received feedback from my attending regarding skills I did well during this block. | 89.7 (35) | 63.6 (35) | 0.004 |
I received feedback from my attending regarding specific skills that needed improvement during this block. | 51.3 (20) | 25.5 (14) | 0.02 |
I received useful information from my attending about how to improve my skills during this block. | 53.8 (21) | 27.3 (15) | 0.01 |
I improved my clinical skills based on feedback I received from my attending this block. | 61.5 (24) | 27.8 (15) | 0.001 |
I improved my professionalism/communication skills based on feedback I received from my attending this block. | 51.3 (20) | 29.1 (16) | 0.03 |
I improved my knowledge base because of feedback I received from my attending this block. | 64.1 (25) | 60.0 (33) | 0.83 |
The feedback I received from my attending this block gave me an overall sense of my performance more than it helped me identify specific areas for improvement. | 64.1 (25) | 65.5 (36) | 1.0 |
Feedback from colleagues (other interns and residents) is more helpful than feedback from attendings. | 41.0 (16) | 43.6 (24) | 0.84 |
Independent of feedback received from others, I am able to identify areas in which I need improvement. | 84.6 (33) | 80.0 (44) | 0.60 |
The attending survey response rates for the intervention and control groups were 100% and 90%, respectively. In general, both groups of attendings reported that they were comfortable giving feedback and that they did, in fact, give feedback in each area during their ward block (Table 2). More intervention attendings felt that at least 1 of their residents improved their professionalism/communication skills based on the feedback given (76.9% vs 31.1%, P = 0.02). There were no other significant differences between the groups of attendings.
Survey Item | Attending Intervention Agree* % (No.) N = 16 | Attending Control Agree* % (No.) N = 18 | P Value |
---|---|---|---|
Giving feedback to housestaff was DIFFICULT this block. | 6.3 (1) | 16.7 (3) | 0.60 |
I was comfortable giving feedback to my housestaff this block. | 93.8 (15) | 94.4 (17) | 1.00 |
I did NOT give a sufficient amount of feedback to my housestaff this block. | 18.8 (3) | 38.9 (7) | 0.27 |
My skills in giving feedback improved during this block. | 50.0 (8) | 16.7 (3) | 0.07 |
I gave feedback to housestaff regarding skills they did well during this block. | 100 (16) | 94.4 (17) | 1.00 |
I gave feedback to housestaff which targeted specific areas for their improvement. | 81.3 (13) | 70.6 (12) | 0.69 |
At least one of my housestaff improved his/her clinical skills based on feedback I gave this block. | 68.8 (11) | 47.1 (8) | 0.30 |
At least one of my housestaff improved his/her professionalism/communication skills based on feedback I gave this block. | 76.9 (10) | 31.1 (5) | 0.02 |
At least one of my housestaff improved his/her fund of knowledge based on feedback I gave this block. | 50.0 (8) | 52.9 (9) | 1.00 |
Housestaff found the feedback I gave them useful. | 66.7 (10) | 62.5 (10) | 1.00 |
I find it DIFFICULT to find time during inpatient rotations to give feedback to residents regarding their performance. | 50.0 (8) | 33.3 (6) | 0.49 |
Intervention attendings also shared their attitudes toward the feedback card and session. A majority felt that using 1 attending rounds as a feedback session helped create a dedicated time for giving feedback (68.8%), and that the feedback card helped them to give specific, constructive feedback (62.5%). Most attendings reported they would use the feedback card and session again during future inpatient blocks (81%), and would recommend them to other attendings (75%).
Qualitative data from intervention attending interviews revealed further thoughts about the feedback card and feedback session. Most attendings interviewed (7/8) felt that the card was useful for the structure and topic guidance it provided. Half felt that setting aside time for feedback was the more useful component. The other half reported that, because they usually set aside time for feedback regardless, the card was more useful. None of the attendings felt that the feedback card or session was detrimental for patient care or education, and many said that the intervention had positive effects on these areas. For example, 1 attending said that the session added to patient care because "I used particular [patient] cases as examples for giving feedback."
DISCUSSION
In this randomized study, we found that a simple pocket feedback card combined with a dedicated feedback session was acceptable to ward attendings and improved resident satisfaction with feedback. Unlike most prior studies of feedback, we demonstrated more feedback around skills needing improvement, and intervention residents felt the feedback they received helped them improve their skills. Our educational intervention was unique in that it combined a pocket card to structure feedback content and dedicated time to structure the feedback process, addressing 2 of the major barriers to giving feedback: lack of time and lack of comfort.
The pocket card itself as a tool for improving feedback is innovative and valuable. As a short but directive guide, the card supports attendings' delivery of relevant and specific feedback about residents' performance, and because it is based on the ACGME competencies, it may help attendings focus feedback on areas in which they will later evaluate residents. The inclusion of a prespecified time for giving feedback was important as well, in that it allowed for face‐to‐face feedback to occur, as opposed to a passing comment after a presentation or brief notes in a written final evaluation. Both the card and the feedback session seemed equally important for the success of this intervention, with attitudes varying based on individual attending preferences. Those who usually set aside time for feedback on their own found the card more useful, whereas those who had more trouble finding time for feedback found the specific session more useful. Most attendings found the intervention as a whole helpful, and without any detrimental effects on patient care or education. The card and session may be particularly valuable for hospital attendings, given their growing presence as teachers and supervisors for residents, and their busy days on the wards.
Our study results have important implications for resident training in the hospital. Improving resident receipt of feedback about strengths and weaknesses is an ACGME training requirement, and specific guidance about how to improve skills is critical for focusing improvement efforts. Previous studies have demonstrated that directive feedback in medical training can lead to a variety of performance improvements, including better evaluations by other professionals,9, 16 and objective improvements in resident communication skills,17 chart documentation,18 and clinical management of patients.11, 15, 19 By improving the quality of feedback across several domains and facilitating the feedback process, our intervention may lead to similar improvements. Future studies should examine the global impact of guided feedback as in our study. Perhaps most importantly, attendings found the intervention acceptable and would recommend its use, implying longer term sustainability of its integration into the hospital routine.
One strength of our study was its prospective randomized design. Despite the importance of rigor in medical education research, there remains a paucity of randomized studies evaluating educational interventions for residents in inpatient settings. Few studies of feedback interventions in particular have used randomized designs,5, 6, 11 and only one has examined a feedback intervention in a randomized fashion in the inpatient setting.12 That study, which evaluated a 20-minute educational intervention and a reminder card for supervising attendings to improve written and verbal feedback to residents, modestly increased the amount of verbal feedback given to residents but, unlike our study, did not increase the proportion of residents reporting mid-rotation feedback or feedback overall.12
There were several important limitations to our study. First, because this was a single institution study, we only achieved modest sample sizes, particularly in the attending groups, and were unable to assess all of the differences in attending attitudes related to feedback. Second, control and intervention participants were on service simultaneously, which may have led to contamination of the control group and an underestimation of the true impact of our intervention. Since residents were not exclusive to 1 study group on 1 occasion (18 of the 93 residents participated during 2 separate blocks), our results may be biased. In particular, those residents who had the intervention first, and were subsequently in the control group, may have rated the control experience worse than they would have otherwise, creating a bias in favor of a positive result for our intervention. Nonetheless, we believe this situation was uncommon and the potential associated bias minimal. Further, this study assessed attitudes related to feedback and self‐reported knowledge and skills, but did not directly assess resident knowledge, skills, or patient outcomes. We recognize the importance of these outcomes and hope that future interventions can determine these important downstream effects of feedback. We were also unable to assess the card and session's impact on attendings' comfort with feedback, because most attendings in both groups reported feeling comfortable giving feedback. This result may indicate that attendings actually are comfortable giving feedback, or may suggest some element of social desirability bias. Finally, in this study, we designed an intervention which combined the pocket card and dedicated feedback time. We did not quantitatively examine the effect of either component alone, and it is unclear if offering the feedback card without protected time or offering protected time without a guide would have impacted feedback on the wards. 
However, qualitative data from our study support the use of both components, and implementing the 2 components together is feasible in any inpatient teaching setting.
Despite these limitations, protected time for feedback guided by a pocket feedback card is a simple intervention that appears to improve feedback quantity and quality for ward residents, and guides them to improve their performance. Our low‐intensity intervention helped attendings give residents the tools to improve their clinical and communication skills. An opportunity to make a positive impact on resident education with such a small intervention is rare. The use of a feedback card with protected feedback time could be easily implemented in any training program, and is a valuable tool for busy hospitalists who are more commonly supervising residents on their inpatient rotations.
- Feedback in clinical medical education. JAMA. 1983;250(6):777–781.
- Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998;13(2):111–116.
- Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7. Med Teach. 2006;28(2):117–128.
- Missed opportunities: a descriptive assessment of teaching and attitudes regarding communication skills in a surgical residency. Curr Surg. 2006;63(6):401–409.
- Impact of a 360‐degree professionalism assessment on faculty comfort and skills in feedback delivery. J Gen Intern Med. 2008;23(7):969–972.
- Daily encounter cards facilitate competency‐based feedback while leniency bias persists. CJEM. 2008;10(1):44–50.
- Teaching compassion and respect. Attending physicians' responses to problematic behaviors. J Gen Intern Med. 1999;14(1):49–55.
- Faculty and the observation of trainees' clinical skills: problems and opportunities. Acad Med. 2004;79(1):16–22.
- Direct observation of residents in the emergency department: a structured educational program. Acad Emerg Med. 2009;16(4):343–351.
- Evaluation of a novel assessment form for observing medical residents: a randomised, controlled trial. Med Educ. 2008;42(12):1234–1242.
- Resident evaluations: the use of daily evaluation forms in rheumatology ambulatory care. J Rheumatol. 2009;36(6):1298–1303.
- Effectiveness of a focused educational intervention on resident evaluations from faculty: a randomized controlled trial. J Gen Intern Med. 2001;16(7):427–434.
- Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 2004;140(11):874–881.
- Internal Medicine Program Requirements. ACGME. July 1, 2009. Available at: http://www.acgme.org/acWebsite/downloads/RRC_progReq/140_internal_medicine_07012009.pdf. Accessed November 8, 2009.
- How to use the core competencies in hospital medicine: a framework for curriculum development. J Hosp Med. 2006;1(suppl 1):57–67.
- Debriefing in the intensive care unit: a feedback tool to facilitate bedside teaching. Crit Care Med. 2007;35(3):738–754.
- Use of an innovative video feedback technique to enhance communication skills training. Med Educ. 2004;38(2):145–157.
- The impact of feedback to medical housestaff on chart documentation and quality of care in the outpatient setting. J Gen Intern Med. 1997;12(6):352–356.
- Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5 pt 2):558–561.
- Feedback in clinical medical education.JAMA.1983;250(6):777–781. .
- Giving feedback in medical education: verification of recommended techniques.J Gen Intern Med.1998;13(2):111–116. , .
- Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7.Med Teach.2006;28(2):117–128. , , , , .
- Missed opportunities: a descriptive assessment of teaching and attitudes regarding communication skills in a surgical residency.Curr Surg.2006;63(6):401–409. , , , .
- Impact of a 360‐degree professionalism assessment on faculty comfort and skills in feedback delivery.J Gen Intern Med.2008;23(7):969–972. , , .
- Daily encounter cards facilitate competency‐based feedback while leniency bias persists.CJEM.2008;10(1):44–50. , .
Copyright © 2011 Society of Hospital Medicine