Using Democratic Deliberation to Engage Veterans in Complex Policy Making for the Veterans Health Administration

A democratic deliberation panel that gathered veteran perspectives on resource allocation and the Veterans Choice Act demonstrated the importance and feasibility of engaging veterans in the policy-making process.

Providing high-quality, patient-centered health care is a top priority for the US Department of Veterans Affairs (VA) Veterans Health Administration (VHA), whose core mission is to improve the health and well-being of US veterans. Thus, news of long wait times for medical appointments in the VHA sparked intense national attention and debate and led to changes in senior management and legislative action.1 On August 8, 2014, President Barack Obama signed the Veterans Access, Choice, and Accountability Act of 2014, also known as the Choice Act, which provided an additional $16 billion in emergency spending over 3 years to improve veterans’ access to timely health care.2 The Choice Act sought to develop an integrated health care network that allowed qualified VHA patients to receive specific health care services in their communities delivered by non-VHA health care providers (HCPs) but paid for by the VHA. The Choice Act also laid out explicit criteria for how to prioritize who would be eligible for VHA-purchased civilian care: (1) veterans who could not get timely appointments at a VHA medical facility within 30 days of referral; or (2) veterans who lived > 40 miles from the closest VHA medical facility.

VHA decision makers seeking to improve care delivery also need to weigh trade-offs between alternative approaches to providing rapid access. For instance, increasing access to non-VHA HCPs may not always decrease wait times and could result in loss of continuity, limited care coordination, limited ability to ensure and enforce high-quality standards at the VHA, and other challenges.3-6 Although the concerns and views of elected representatives, advocacy groups, and health system leaders are important, it is unknown whether these views and preferences align with those of veterans. Arguably, the range of views and concerns of informed veterans whose health is at stake should be particularly prominent in such policy decision making.

To identify the considerations that were most important to veterans regarding VHA policy around decreasing wait times, a study was designed to engage a group of veterans who were eligible for civilian care under the Choice Act. The study took place 1 year after the Choice Act was passed. Veterans were asked to focus on 2 related questions: First, how should funding be used for building VHA capacity (build) vs purchasing civilian care (buy)? Second, under what circumstances should civilian care be prioritized?

The aim of this paper is to describe democratic deliberation (DD), a specific method that engaged veteran patients in complex policy decisions around access to care. DD methods have been used increasingly in health care for developing policy guidance, setting priorities, providing advice on ethical dilemmas, weighing risk-benefit trade-offs, and determining decision-making authority.7-12 For example, DD helped guide national policy for mammography screening for breast cancer in New Zealand.9 The Agency for Healthcare Research and Quality has completed a systematic review and a large, randomized experiment on best practices for carrying out public deliberation.8,13,14 However, despite the potential value of this approach, there has been little use of deliberative methods within the VHA for the explicit purpose of informing veteran health care delivery.

This paper describes the experience of engaging veterans using DD methodology and of informing VHA leadership about the results of those deliberations. The specific aims were to understand whether DD is an acceptable approach for engaging patients in the medical services policy-making process within the VHA and whether veterans can reach an informed consensus.


Methods

Engaging patients and incorporating their needs and concerns within the policy-making process may improve health system policies and make those policies more patient centered. Such engagement also could be a way to generate creative solutions. However, because health-system decisions often involve making difficult trade-offs, effectively obtaining patient population input on complex care delivery issues can be challenging.

Although surveys can provide intuitive, top-of-mind input from respondents, these opinions are generally not sufficient for resolving complex problems.15 Focus groups and interviews may produce results that are more in-depth than surveys, but these methods tend to elicit settled private preferences rather than opinions about what the community should do.16 DD, on the other hand, is designed to elicit deeply informed public opinions on complex, value-laden topics to develop recommendations and policies for a larger community.17 The goal is to find collective solutions to challenging social problems. DD achieves this by giving participants an opportunity to explore a topic in-depth, question experts, and engage peers in reason-based discussions.18,19 This method has its roots in political science and has been used over several decades to successfully inform policy making on a broad array of topics nationally and internationally—from health research ethics in the US to nuclear and energy policy in Japan.7,16,20,21 DD has been found to promote ownership of public programs and lend legitimacy to policy decisions, political institutions, and democracy itself.18

A single-day (8-hour) DD session was convened following a Citizens’ Jury model of deliberation, which brings veteran patients together to learn about a topic, ask questions of experts, deliberate with peers, and generate a “citizens’ report” containing a set of recommendations (Table 1). An overview of the different models of DD and the rationale for each can be found elsewhere.8,15


Recruitment Considerations

A purposively selected sample of civilian care-eligible veterans from a midwestern VHA health care system (1 medical center and 3 community-based outpatient clinics [CBOCs]) was invited to the DD session. The targeted number of participants was 30. Female veterans, who comprise only 7% of the local veteran population, were oversampled to account for their potentially different health care needs and to create balance between males and females in the session. Oversampling for other characteristics was not possible due to the relatively small sample size. Based on prior experience,7 it was assumed that 70% of willing participants would attend the session; therefore 34 veterans were invited and 24 attended. Each participant received a $200 incentive in appreciation for their substantial time commitment and to offset transportation costs.
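The attendance assumption above implies a simple planning rule: dividing the target panel size by the expected show rate gives the number of confirmed participants needed. The short Python sketch below illustrates this arithmetic; the function name and rounding choices are illustrative assumptions, not part of the study protocol.

```python
import math

def confirmations_needed(target_panel: int, expected_show_rate: float) -> int:
    """Confirmed participants needed to seat a target panel,
    given an expected show rate (eg, 0.70 based on prior sessions)."""
    return math.ceil(target_panel / expected_show_rate)

# Seating a full 30-person panel at the 70% show rate assumed in the
# study would require about 43 confirmations:
print(confirmations_needed(30, 0.70))  # -> 43

# Conversely, the 34 veterans invited here, at a 70% show rate, yield
# roughly the 24 attendees actually observed:
print(round(34 * 0.70))  # -> 24
```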

Background Materials

A packet of educational materials (Flesch-Kincaid grade level of 10.5) was mailed to participants about 2 weeks before the DD session, and participants were asked to review it prior to attending. These materials described the session (eg, purpose, organizers, importance) and provided factual information about the Choice Act (eg, eligibility, out-of-pocket costs, travel pay, prescription drug policies).
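For reference, the Flesch-Kincaid grade level cited above is a standard readability score computed from average sentence length and average syllables per word:

```latex
\text{FK grade} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right)
                + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right)
                - 15.59
```

A score of 10.5 thus corresponds to material readable at roughly a mid-10th-grade level.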

Session Overview

The session was structured to accomplish the following goals: (1) Elicit participants’ opinions about access to health care and reasons for those opinions; (2) Provide in-depth education about the Choice Act through presentations and discussions with topical experts; and (3) Elicit reasoning and recommendations on both the criteria by which participants prioritize candidates for civilian care and how participants would allocate additional funding to improve access (ie, by building VHA capacity to deliver more timely health care vs purchasing health care from civilian HCPs).


Participants were asked to fill out a survey on arrival in the morning and were assigned to 1 of 3 tables or small groups. Each table had a facilitator with extensive experience in qualitative data collection methods who guided the dialogue using a scripted protocol that they had helped develop and refine. The facilitation materials drew on previously published studies.22,23 Each facilitator audio-recorded and took notes on their table’s discussion. Three experts presented during plenary education sessions. Presentations were designed to provide balanced factual information and included a veteran’s perspective. One presenter was a clinician on the project team; another was a local clinical leader responsible for making decisions about what services to provide via civilian care (buy) vs enhancing the local VHA health system’s ability to provide those services (build); and the third was a veteran on the project team.

Education Session 1

The first plenary education session with expert presentations was conducted after each table completed an icebreaker exercise. The project team physician provided a brief history and description of the Choice Act to reinforce the educational materials sent to participants before the session. The health system clinical leader described his decision-making process and principles and highlighted the constraints the Choice Act placed on him at the time of the DD session. He also described existing local and national programs for providing civilian care (eg, local fee-basis non-VHA care programs) and how these programs sought to achieve goals similar to those of the Choice Act. The veteran presenter focused on the importance of participants providing candid insights and observations and emphasized that the session was a significant opportunity to “have their voices heard.”

Deliberation 1: What criteria should be used to prioritize patients for receiving civilian care paid for by the VHA? To elicit preferences on the central question of this deliberation, participants were presented with 8 real-world cases based on interviews conducted with Choice Act-eligible veterans (Table 2 and eAppendices A, B, C, and D). Participants were first instructed to read through and discuss the cases as a group, then come to agreement on how to prioritize the patients in the case scenarios for civilian care. Agreement was defined as complete consensus or consensus by the majority; in the latter case, the facilitator noted the number who agreed and disagreed within each group. The facilitators documented the criteria each group considered as they prioritized the cases, along with the group’s reasoning behind their choices.


Education Session 2

In the second plenary session, the project team physician provided information about health care access issues both inside and outside the VHA, particularly differences between urban and rural areas. He also discussed how insufficient capacity to meet growing demand contributed to the VHA wait-time crisis. The veteran presenter shared reflections on health care access from a veteran’s perspective.

Deliberation 2: How should additional funding be divided between increasing the ability of the VHA to (1) provide care by VHA HCPs; and (2) pay for care from non-VHA civilian HCPs? Participants were presented with the patient examples and Choice Act funding scenarios (the buy policy option), which were contrasted with a build policy option. Participants were explicitly encouraged to shift their perspectives from thinking about individual cases to considering policy-level decisions and the broader social good (Table 2).


Ensuring Robust Deliberations

If participants do not adequately grasp the complexities of the topic, a deliberation can fail. To facilitate nuanced reasoning, real-world concrete examples were developed as the starting point of each deliberation based on interviews with actual patients (deliberation 1) and actual policy proposals relevant to the funding allocation decisions within the Choice Act (deliberation 2).

A deliberation also can fail through self-silencing, in which participants withhold opinions that differ from those articulated first or by more vocal members of the group.24 To combat self-silencing, highly experienced facilitators were used to ensure sharing from all participants and to support an open-minded, courteous, and reason-based environment for discourse. Facilitators emphasized that the best solutions are achieved through reason-based and cordial disagreement and that success can be undermined when participants simply agree because it is easier or more comfortable.

A third way a deliberation can fail is if individuals do not adopt a group or system-level perspective. To counter this, facilitators reinforced at multiple points the importance of taking a broader social perspective rather than sharing only one’s personal preferences.

Finally, it is important to assess the quality of the deliberative process itself to ensure that results are trustworthy.25 To do so, participants’ knowledge about key issues was assessed pre- and postdeliberation. Participants also were asked to rate the quality of the facilitators and how well they felt their voices were heard and respected, and facilitators made qualitative assessments of the extent to which participants engaged in reason-based and collaborative discussion.

Data

Quantitative data were collected via pre- and postsession surveys. The surveys contained items related to knowledge about the Choice Act, expectations for the DD session, beliefs and opinions about the provision of health care for veterans, recommended funding allocations between build vs buy policy options, and general demographics. Qualitative data were collected through detailed notes taken by the 3 facilitators. Each table’s deliberations were audio recorded so that gaps in the notes could be filled.

The 3 facilitators, who were all experienced qualitative researchers, typed their written notes into a template immediately after the session. Two of the 3 facilitators led the analysis of the session notes. Findings within and across the 3 deliberation tables were developed using content and matrix analysis methods.26 Descriptive statistics were generated from survey responses, and survey items were compared pre- and postsession using paired t tests for continuous responses or χ2 tests for categorical responses.
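As a concrete illustration of these comparisons, the sketch below runs a paired t test and a χ2 test with scipy. The knowledge scores are hypothetical stand-ins; the build-policy counts (8 of 23 participants presession vs 16 of 23 postsession) are taken from the survey results reported below. For paired pre/post counts a McNemar test is often preferred, but the χ2 test shown mirrors the analysis named in the text.

```python
import numpy as np
from scipy import stats

# Paired t test on the same 23 participants' knowledge scores
# pre- and postsession (hypothetical scores, for illustration only).
rng = np.random.default_rng(0)
pre_scores = rng.integers(3, 8, size=23).astype(float)
post_scores = pre_scores + rng.integers(0, 3, size=23)  # assumed modest gains
t_stat, p_val = stats.ttest_rel(pre_scores, post_scores)
print(f"paired t test: t = {t_stat:.2f}, p = {p_val:.3f}")

# Chi-square test on a categorical item: counts of participants
# endorsing the build policy before vs after deliberation.
counts = np.array([[8, 15],    # presession: build, other
                   [16, 7]])   # postsession: build, other
chi2, p, dof, _ = stats.chi2_contingency(counts)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```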

Results

Of the 127 individuals invited, 33% agreed to participate. Those who declined cited conflicts related to distance, transportation, work or school, medical appointments, or family commitments, or were not interested. In all, 24 (69%) of the 35 veterans who accepted the invitation attended the deliberation session. Of the 11 who accepted but did not attend, 5 cancelled ahead of time because of conflicts (Figure). Most participants were male (70%); 48% were aged 61 to 75 years, 65% were white, 43% had some college education, 43% reported an annual income between $25,000 and $40,000, and only 35% reported very good health (eAppendix D).


Deliberation 1

During the deliberation on the prioritization criteria, the concept of “condition severity” emerged as an important criterion for veterans. This criterion captured simultaneous consideration of both clinical necessity and the burden on the veteran to obtain care. For example, participants felt that patients with a life-threatening illness should be prioritized for civilian care over patients who need preventive or primary care (clinical necessity), and that elderly patients with substantial difficulty traveling to VHA appointments should be prioritized over patients who can travel more easily (burden). The Choice Act regulations at the time of the DD session did not reflect this nuanced perspective, stipulating only wait-time (> 30 days) and distance (> 40 miles from the nearest VHA medical facility) criteria.

One of the 3 groups did not prioritize the patient cases because some members felt that no veteran should be constrained from receiving civilian care if they desired it. Nonetheless, this group did agree with prioritizing the first 2 cases in Table 3. The other groups prioritized all 8 cases in generally similar ways.

Deliberation 2

No clear consensus emerged on the buy vs build question. A representative from each table presented their group’s positions, rationale, and recommendations after deliberations were completed. After hearing the range of positions, the groups then had another opportunity to deliberate based on what they heard from the other tables; no new recommendations or consensus emerged.

Participants who were in favor of allocating more funds toward the build policy offered a range of rationales, saying that it would (1) increase access for rural veterans by building CBOCs and deploying more mobile units that could bring outlets for health care closer to their home communities; (2) provide critical and unique medical expertise to address veteran-specific issues such as prosthetics, traumatic brain injury, posttraumatic stress disorder, spinal cord injury, and shrapnel wounds that are typically not available through civilian providers; (3) give VHA more oversight over the quality and cost of care, which is more challenging to do with civilian providers; and (4) improve VHA infrastructure by, for example, upgrading technology and attracting the best clinicians and staff to support “our VHA.”

Participants who were in favor of allocating more funds toward the buy policy also offered a range of rationales, saying that it would (1) decrease patient burden by increasing access through community providers, decreasing wait time, and lessening personal cost and travel time; (2) allow more patients to receive civilian care, which a few participants saw as beneficial because of perceptions that the VHA provides lower-quality care due to a shortage of VHA providers, run-down or older facilities, lack of technology, and poorer-quality providers; and (3) provide an opportunity to divest of costly facilities and invest in other innovative approaches. Regarding this last reason, a few participants felt that the VHA is “gouged” when building medical centers that overrun budgets. They also were concerned that investing in facilities tied the VHA to specific locations, even though where veterans live may change “25 years from now.”


Survey Results

Twenty-three of the 24 participants completed both pre- and postsession surveys. The majority of participants in the session felt people in the group respected their opinion (96%); felt that the facilitator did not try to influence the group with her own opinions (96%); indicated they understood the information enough to participate as much as they wanted (100%); and were hopeful that their reasoning and recommendations would help inform VHA policy makers (82%).

The surveys also provided an opportunity to examine the extent to which knowledge, attitudes, and opinions changed from before to after the deliberation. Even with the small sample, responses revealed a trend toward improved knowledge about key elements of the Choice Act and its goals. Further, there was a shift in some participants’ opinions about how patients should be prioritized to receive civilian care. For example, before the deliberation participants generally felt that all veterans should be able to receive civilian care, whereas after the deliberation this was not the case. Postdeliberation, most participants felt that primary care should not be a high priority for civilian care but continued to endorse prioritizing civilian care for specialty services such as orthopedic or cardiology-related care. Finally, participants’ recommendations on allocating additional funds, which were diverse before the deliberation, converged afterward toward the build policy: 8 participants supported a build policy beforehand, whereas 16 supported it afterward.

Discussion

This study explored DD as a method for deeply engaging veterans in complex policy making to guide funding allocation and prioritization decisions related to the Choice Act, decisions that are still very relevant today within the context of the Mission Act and have substantial implications for how health care is delivered in the VHA. The Mission Act, passed on June 6, 2018, aims to improve access to and the reliability of civilian or community care for eligible veterans.27 Deciding how to appropriate scarce funding to improve access to care is an emotional and value-laden topic that elicited strong and divergent opinions among the participants. Veterans were eager to have their voices heard and had strong expectations that VHA leadership would be briefed about their recommendations. The majority of participants were satisfied with the deliberation process, felt they understood the issues, and felt their opinions were respected. They expressed feelings of camaraderie and community throughout the process.

In this single deliberation session, the groups did not achieve a single, final consensus regarding how VHA funding should ultimately be allocated between buy and build policy options. Nonetheless, participants provided a rich array of recommendations and rationales for them. Session moderators observed sophisticated, fair, and reason-based discussions on this complex topic. Participants left with deeper knowledge of and appreciation for the complex trade-offs and expressed strong rationales for both sides of the build vs buy policy debate. In addition, the project yielded results of high interest to VHA policy makers.

This work was presented in multiple venues between 2015 and 2016 to both local and national VHA leadership, including the local Executive Quality Leadership Boards, the VHA Central Office Committee on the Future State of VA Community Care, the VA Office of Patient Centered Care, and the National Veteran Experience Committee. Through these discussions and others, we saw great interest within the VHA system and among high-level leaders in exploring ways to include veterans’ voices in the policy-making process. This work was invaluable to our research team (eAppendix E), has influenced the methodology of multiple VA research grants that seek to engage veterans in the research process, and played a pivotal role in the development of the Veteran Experience Office.

Many health system decisions regarding what care should be delivered (and how) involve making difficult, value-laden choices in the context of limited resources. DD methods can be used to target and obtain specific viewpoints from diverse populations, such as the informed perspectives of minority and underrepresented populations within the VHA.19 For example, female veterans were oversampled to ensure that the informed preferences of this population were obtained. Thus, DD methods could provide a valuable tool for health systems to elicit in-depth, diverse patient input on high-profile policies that will have a substantial impact on the system’s patient population.


Limitations

One potential downside of DD is that, because of the resource-intensive nature of deliberation sessions, they are often conducted with relatively small groups.9 Viewpoints of those within these small samples who are willing to spend an entire day discussing a complex topic may not be representative of the larger patient community. However, the core goal of DD is diversity of opinions rather than representativeness.

A stratified random sampling strategy that oversampled for underrepresented and minority populations was used to help select a diverse group that represents the population on key characteristics, partially addressing concerns about representativeness. Efforts to optimize participation rates, including providing monetary incentives, also are helpful and have led to high participation rates in past deliberations.7
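A minimal sketch of such a stratified draw is shown below, assuming a simple pandas table of eligible veterans; the column name, strata targets, and 50/50 sex balance are illustrative assumptions rather than the study’s actual sampling code.

```python
import pandas as pd

def stratified_invitees(eligible: pd.DataFrame, n_total: int,
                        strata_col: str = "sex",
                        targets: dict | None = None,
                        seed: int = 0) -> pd.DataFrame:
    """Draw a stratified random sample, oversampling small strata.

    targets maps each stratum value to its desired share of the invited
    sample, eg, {"F": 0.5, "M": 0.5} to balance sexes even though women
    make up only ~7% of the eligible population. Assumes each stratum
    contains at least its allotted number of people."""
    targets = targets or {"F": 0.5, "M": 0.5}
    parts = [
        eligible[eligible[strata_col] == value]
        .sample(n=round(n_total * share), random_state=seed)
        for value, share in targets.items()
    ]
    return pd.concat(parts, ignore_index=True)
```

With women at roughly 7% of the eligible population, a balanced 50/50 target oversamples them about sevenfold relative to their population share.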


Health system communication strategies that promote the importance of becoming involved in DD sessions also may be helpful in improving rates of recruitment. On particularly important topics where health system leaders feel a larger resource investment is justified, conducting larger scale deliberations with many small groups may obtain more generalizable evidence about what individual patients and groups of patients recommend.7 However, due to the inherent limitations of surveys and focus group approaches for obtaining informed views on complex topics, there are no clear systematic alternatives to the DD approach.

Conclusion

DD is an effective method to meaningfully engage patients in deep deliberations to guide complex policy making. Although designing deliberative sessions is resource-intensive, patient engagement efforts such as those described in this paper could be an important aspect of a well-functioning learning health system. Further research into alternative, streamlined methods that can engage veterans equally deeply is needed. DD also can be combined with other approaches, including focus groups, town hall meetings, or surveys, to broaden and confirm findings.

Although this study did not provide consensus on how the VHA should allocate funds with respect to the Choice Act, it did provide insight into the importance and feasibility of engaging veterans in the policy-making process. As more policies aimed at improving veterans’ access to civilian care are created, such as the most recent Mission Act, policy makers should strongly consider using the DD method of obtaining informed veteran input into future policy decisions.

Acknowledgments
Funding was provided by the US Department of Veterans Affairs Office of Analytics and Business Intelligence (OABI) and the VA Quality Enhancement Research Initiative (QUERI). Dr. Caverly was supported in part by a VA Career Development Award (CDA 16-151). Dr. Krein is supported by a VA Health Services Research and Development Research Career Scientist Award (RCS 11-222). The authors thank the veterans who participated in this work. They also thank Caitlin Reardon and Natalya Wawrin for their assistance in organizing the deliberation session.

References

1. VA Office of the Inspector General. Veterans Health Administration. Interim report: review of patient wait times, scheduling practices, and alleged patient deaths at the Phoenix Health Care System. https://www.va.gov/oig/pubs/VAOIG-14-02603-178.pdf. Published May 28, 2014. Accessed December 9, 2019.

2. Veterans Access, Choice, and Accountability Act of 2014. 42 USC §1395 (2014).

3. Penn M, Bhatnagar S, Kuy S, et al. Comparison of wait times for new patients between the private sector and United States Department of Veterans Affairs medical centers. JAMA Netw Open. 2019;2(1):e187096.

4. Thorpe JM, Thorpe CT, Schleiden L, et al. Association between dual use of Department of Veterans Affairs and Medicare Part D drug benefits and potentially unsafe prescribing. JAMA Intern Med. 2019; July 22. [Epub ahead of print.]

5. Moyo P, Zhao X, Thorpe CT, et al. Dual receipt of prescription opioids from the Department of Veterans Affairs and Medicare Part D and prescription opioid overdose death among veterans: a nested case-control study. Ann Intern Med. 2019;170(7):433-442.

6. Meyer LJ, Clancy CM. Care fragmentation and prescription opioids. Ann Intern Med. 2019;170(7):497-498.

7. Damschroder LJ, Pritts JL, Neblo MA, Kalarickal RJ, Creswell JW, Hayward RA. Patients, privacy and trust: patients’ willingness to allow researchers to access their medical records. Soc Sci Med. 2007;64(1):223-235.

8. Street J, Duszynski K, Krawczyk S, Braunack-Mayer A. The use of citizens’ juries in health policy decision-making: a systematic review. Soc Sci Med. 2014;109:1-9.

9. Paul C, Nicholls R, Priest P, McGee R. Making policy decisions about population screening for breast cancer: the role of citizens’ deliberation. Health Policy. 2008;85(3):314-320.

10. Martin D, Abelson J, Singer P. Participation in health care priority-setting through the eyes of the participants. J Health Serv Res Pol. 2002;7(4):222-229.

11. Mort M, Finch T. Principles for telemedicine and telecare: the perspective of a citizens’ panel. J Telemed Telecare. 2005;11(suppl 1):66-68.

12. Kass N, Faden R, Fabi RE, et al. Alternative consent models for comparative effectiveness studies: views of patients from two institutions. AJOB Empir Bioeth. 2016;7(2):92-105.

13. Carman KL, Mallery C, Maurer M, et al. Effectiveness of public deliberation methods for gathering input on issues in healthcare: results from a randomized trial. Soc Sci Med. 2015;133:11-20.

14. Carman KL, Maurer M, Mangrum R, et al. Understanding an informed public’s views on the role of evidence in making health care decisions. Health Aff (Millwood). 2016;35(4):566-574.

15. Kim SYH, Wall IF, Stanczyk A, De Vries R. Assessing the public’s views in research ethics controversies: deliberative democracy and bioethics as natural allies. J Empir Res Hum Res Ethics. 2009;4(4):3-16.

16. Gastil J, Levine P, eds. The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century. San Francisco, CA: Jossey-Bass; 2005.

17. Dryzek JS, Bächtiger A, Chambers S, et al. The crisis of democracy and the science of deliberation. Science. 2019;363(6432):1144-1146.

18. Blacksher E, Diebel A, Forest PG, Goold SD, Abelson J. What is public deliberation? Hastings Cent Rep. 2012;42(2):14-17.

19. Wang G, Gold M, Siegel J, et al. Deliberation: obtaining informed input from a diverse public. J Health Care Poor Underserved. 2015;26(1):223-242.

20. Simon RL, ed. The Blackwell Guide to Social and Political Philosophy. Malden, MA: Wiley-Blackwell; 2002.

21. Stanford University, Center for Deliberative Democracy. Deliberative polling on energy and environmental policy options in Japan. https://cdd.stanford.edu/2012/deliberative-polling-on-energy-and-environmental-policy-options-in-japan. Published August 12, 2012. Accessed December 9, 2019.

22. Damschroder LJ, Pritts JL, Neblo MA, Kalarickal RJ, Creswell JW, Hayward RA. Patients, privacy and trust: patients’ willingness to allow researchers to access their medical records. Soc Sci Med. 2007;64(1):223-235.

23. Carman KL, Maurer M, Mallery C, et al. Community forum deliberative methods demonstration: evaluating effectiveness and eliciting public views on use of evidence. Final report. https://effectivehealthcare.ahrq.gov/sites/default/files/pdf/deliberative-methods_research-2013-1.pdf. Published November 2014. Accessed December 9, 2019.

24. Sunstein CR, Hastie R. Wiser: Getting Beyond Groupthink to Make Groups Smarter. Boston, MA: Harvard Business Review Press; 2014.

25. Damschroder LJ, Kim SY. Assessing the quality of democratic deliberation: a case study of public deliberation on the ethics of surrogate consent for research. Soc Sci Med. 2010;70(12):1896-1903.

26. Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. Thousand Oaks, CA: SAGE Publications; 1994.

27. US Department of Veterans Affairs. Veteran community care – general information. https://www.va.gov/COMMUNITYCARE/docs/pubfiles/factsheets/VHA-FS_MISSION-Act.pdf. Published September 9, 2019. Accessed December 9, 2019.

Author and Disclosure Information

Tanner Caverly, Sarah Krein, and Laura Damschroder are Research Investigators; Claire Robinson and Jane Forman are Qualitative Analysts; and Sarah Skurla is a Research Associate; all at the VA Ann Arbor Health Care System, Center for Clinical Management Research, Health Services Research and Development in Michigan. Martha Quinn is a Research Specialist at the School of Public Health; Tanner Caverly is an Assistant Professor in the Medical School; and Sarah Krein is an Adjunct Research Professor in the School of Nursing; all at the University of Michigan in Ann Arbor.
Correspondence: Tanner Caverly ([email protected])


VHA decision makers seeking to improve care delivery also need to weigh trade-offs between alternative approaches to providing rapid access. For instance, increasing access to non-VHA HCPs may not always decrease wait times and could result in loss of continuity, limited care coordination, limited ability to ensure and enforce high-quality standards at the VHA, and other challenges.3-6 Although the concerns and views of elected representatives, advocacy groups, and health system leaders are important, it is unknown whether these views and preferences align with those of veterans. Arguably, the range of views and concerns of informed veterans whose health is at stake should be particularly prominent in such policy decision making.

To identify the considerations that were most important to veterans regarding VHA policy around decreasing wait times, a study was designed to engage a group of veterans who were eligible for civilian care under the Choice Act. The study took place 1 year after the Choice Act was passed. Veterans were asked to focus on 2 related questions: First, how should funding be used for building VHA capacity (build) vs purchasing civilian care (buy)? Second, under what circumstances should civilian care be prioritized?

The aim of this paper is to describe democratic deliberation (DD), a specific method that engaged veteran patients in complex policy decisions around access to care. DD methods have been used increasingly in health care for developing policy guidance, setting priorities, providing advice on ethical dilemmas, weighing risk-benefit trade-offs, and determining decision-making authority.7-12 For example, DD helped guide national policy for mammography screening for breast cancer in New Zealand.13 The Agency for Healthcare Research and Quality has completed a systematic review and a large, randomized experiment on best practices for carrying out public deliberation.8,13,14 However, despite the potential value of this approach, there has been little use of deliberative methods within the VHA for the explicit purpose of informing veteran health care delivery.

This paper describes our experience engaging veterans by using DD methodology and informing VHA leadership about the results of those deliberations. The specific aims were to understand whether DD is an acceptable approach for engaging patients in the medical services policy-making process within the VHA and whether veterans are able to reach an informed consensus.

Methods

Engaging patients and incorporating their needs and concerns within the policy-making process may improve health system policies and make those policies more patient centered. Such engagement also could be a way to generate creative solutions. However, because health-system decisions often involve making difficult trade-offs, effectively obtaining patient population input on complex care delivery issues can be challenging.

Although surveys can provide intuitive, top-of-mind input from respondents, these opinions are generally not sufficient for resolving complex problems.15 Focus groups and interviews may produce results that are more in-depth than surveys, but these methods tend to elicit settled private preferences rather than opinions about what the community should do.16 DD, on the other hand, is designed to elicit deeply informed public opinions on complex, value-laden topics to develop recommendations and policies for a larger community.17 The goal is to find collective solutions to challenging social problems. DD achieves this by giving participants an opportunity to explore a topic in-depth, question experts, and engage peers in reason-based discussions.18,19 This method has its roots in political science and has been used over several decades to successfully inform policy making on a broad array of topics nationally and internationally—from health research ethics in the US to nuclear and energy policy in Japan.7,16,20,21 DD has been found to promote ownership of public programs and lend legitimacy to policy decisions, political institutions, and democracy itself.18

A single-day (8-hour) DD session was convened following a Citizens Jury model of deliberation, which brings veteran patients together to learn about a topic, ask questions of experts, deliberate with peers, and generate a “citizens’ report” that contains a set of recommendations (Table 1). An overview of the different models of DD and the rationale for each can be found elsewhere.8,15

Recruitment Considerations

A purposively selected sample of civilian care-eligible veterans from a midwestern VHA health care system (1 medical center and 3 community-based outpatient clinics [CBOCs]) was invited to the DD session. The targeted number of participants was 30. Female veterans, who comprise only 7% of the local veteran population, were oversampled to account for their potentially different health care needs and to create balance between males and females in the session. Oversampling for other characteristics was not possible because of the relatively small sample size. Based on prior experience,7 it was assumed that 70% of willing participants would attend the session; therefore, 34 veterans were invited, and 24 attended. Each participant received a $200 incentive in appreciation of the substantial time commitment and to offset transportation costs.
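
For illustration only, the invitation arithmetic and a stratified draw can be sketched in a few lines of Python; the sampling frame, strata, and show rate below are assumptions for the example, not the study’s actual recruitment procedure.

    import math
    import random

    def invites_needed(target_attendees, expected_show_rate):
        # With an assumed show rate (the study assumed ~70%),
        # invite enough veterans to hit the attendance target.
        return math.ceil(target_attendees / expected_show_rate)

    def stratified_sample(frame, quotas, seed=7):
        # Draw per-stratum quotas, eg, oversampling female veterans
        # relative to their 7% share of the local population.
        rng = random.Random(seed)
        draw = []
        for stratum, quota in quotas.items():
            members = [v for v in frame if v["sex"] == stratum]
            draw.extend(rng.sample(members, min(quota, len(members))))
        return draw

    print(invites_needed(24, 0.70))  # eg, ~24 attendees -> 35 invitations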

Background Materials

A packet of educational materials (Flesch-Kincaid grade level of 10.5) was mailed to participants about 2 weeks before the DD session, and participants were asked to review it before attending. These materials described the session (eg, purpose, organizers, importance) and provided factual information about the Choice Act (eg, eligibility, out-of-pocket costs, travel pay, prescription drug policies).
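
As an aside, the reading level of such materials can be estimated programmatically; this sketch uses the third-party textstat package and a placeholder excerpt, not the actual packet text.

    # pip install textstat
    import textstat

    excerpt = (
        "The Choice Act lets eligible veterans receive certain services "
        "from community providers, paid for by the VHA."
    )

    # Flesch-Kincaid grade level; the mailed packet scored 10.5.
    print(textstat.flesch_kincaid_grade(excerpt))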

Session Overview

The session was structured to accomplish the following goals: (1) Elicit participants’ opinions about access to health care and reasons for those opinions; (2) Provide in-depth education about the Choice Act through presentations and discussions with topical experts; and (3) Elicit reasoning and recommendations on both the criteria by which participants prioritize candidates for civilian care and how participants would allocate additional funding to improve access (ie, by building VHA capacity to deliver more timely health care vs purchasing health care from civilian HCPs).

Participants were asked to fill out a survey on arrival in the morning and were assigned to 1 of 3 tables or small groups. Each table had a facilitator who had extensive experience in qualitative data collection methods and who guided the dialogue using a scripted protocol that they had helped develop and refine. The facilitation materials drew on previously published studies.22,23 Each facilitator audio-recorded the session and took notes. Three experts presented during plenary education sessions. Presentations were designed to provide balanced factual information and included a veteran’s perspective. One presenter was a clinician on the project team; another was a local clinical leader responsible for making decisions about what services to provide via civilian care (buy) vs enhancing the local VHA health system’s ability to provide those services (build); and the third was a veteran on the project team.

Education Session 1

The first plenary education session with expert presentations was conducted after each table completed an icebreaker exercise. The project team physician provided a brief history and description of the Choice Act to reinforce the educational materials sent to participants before the session. The health system clinical leader described his decision process and principles and highlighted the constraints the Choice Act placed on him at the time of the DD session. He also described existing local and national programs for providing civilian care (eg, local fee-basis non-VHA care programs) and how these programs sought to achieve goals similar to those of the Choice Act. The veteran presenter focused on the importance of participants providing candid insights and observations and emphasized that the session was a significant opportunity to “have their voices heard.”

Deliberation 1: What criteria should be used to prioritize patients for receiving civilian care paid for by the VHA? To elicit preferences on the central question of this deliberation, participants were presented with 8 real-world cases based on interviews conducted with Choice Act-eligible veterans (Table 2 and eAppendices A, B, C, and D). Participants were first instructed to read through and discuss the cases as a group, then come to agreement on how the patients in the case scenarios should be prioritized for civilian care. Agreement was defined as complete consensus or consensus by the majority, in which case the facilitator noted the number who agreed and disagreed within each group. The facilitators documented the criteria each group considered as they prioritized the cases, along with the group’s reasoning behind their choices.

Education Session 2

In the second plenary session, the project team physician provided information about health care access issues both inside and outside of the VHA, particularly differences between urban and rural areas. He also discussed factors related to the insufficient capacity to meet growing demand that contributed to the VHA wait-time crisis. The veteran presenter shared reflections on health care access from a veteran’s perspective.

Deliberation 2: How should additional funding be divided between increasing the ability of the VHA to (1) provide care by VHA HCPs and (2) pay for care from non-VHA civilian HCPs? Participants were presented with the patient examples and Choice Act funding scenarios (the buy policy option), which were contrasted with a build policy option. Participants were explicitly encouraged to shift their perspectives from thinking about individual cases to considering policy-level decisions and the broader social good (Table 2).

Ensuring Robust Deliberations

If participants do not adequately grasp the complexities of the topic, a deliberation can fail. To facilitate nuanced reasoning, concrete real-world examples were developed as the starting point of each deliberation, based on interviews with actual patients (deliberation 1) and on actual policy proposals relevant to funding allocation decisions within the Choice Act (deliberation 2).

A deliberation also can fail through self-silencing, in which participants withhold opinions that differ from those articulated first or by more vocal members of the group.24 To combat self-silencing, highly experienced facilitators were used to ensure sharing from all participants and to support an open-minded, courteous, and reason-based environment for discourse. It was made explicit that the best solutions are achieved through reason-based and cordial disagreement and that success can be undermined when participants simply agree because it is easier or more comfortable.

A third way a deliberation can fail is if individuals do not adopt a group or system-level perspective. To counter this, facilitators reinforced at multiple points the importance of taking a broader social perspective rather than sharing only one’s personal preferences.

Finally, it is important to assess the quality of the deliberative process itself to ensure that results are trustworthy.25 To this end, participants’ knowledge about key issues was assessed pre- and postdeliberation. Participants also were asked to rate the quality of the facilitators and how well they felt their voices were heard and respected, and facilitators made qualitative assessments of the extent to which participants engaged in reason-based and collaborative discussion.

Data

Quantitative data were collected via pre- and postsession surveys. The surveys contained items related to knowledge about the Choice Act, expectations for the DD session, beliefs and opinions about the provision of health care for veterans, recommended funding allocations between build vs buy policy options, and general demographics. Qualitative data were collected through detailed notes taken by the 3 facilitators. Each table’s deliberations were audio-recorded so that gaps in the notes could be filled.

The 3 facilitators, all experienced qualitative researchers, typed their written notes into a template immediately after the session. Two of the 3 facilitators led the analysis of the session notes. Findings within and across the 3 deliberation tables were developed using content and matrix analysis methods.26 Descriptive statistics were generated from survey responses, and pre- and postsession survey items were compared using paired t tests or χ2 tests for categorical responses.
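
A minimal sketch of these comparisons might look as follows, using hypothetical paired knowledge scores and, for the categorical item, the build-policy counts reported later in the Results (8 of 23 presession, 16 of 23 postsession); scipy’s ttest_rel and chi2_contingency mirror the tests named above.

    import numpy as np
    from scipy import stats

    # Hypothetical paired knowledge scores (one pair per participant).
    pre = np.array([3, 4, 2, 5, 3, 4, 2, 3])
    post = np.array([4, 5, 3, 5, 4, 5, 3, 4])
    t, p = stats.ttest_rel(pre, post)  # paired t test
    print(f"paired t = {t:.2f}, p = {p:.3f}")

    # Build-policy support, pre- and postsession (from the Results).
    table = np.array([[8, 15],   # presession: build, other
                      [16, 7]])  # postsession: build, other
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")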

Results

Thirty-three percent of individuals invited (n = 127) agreed to participate. Those who declined cited conflicts related to distance, transportation, work/school, medical appointments, or family commitments, or were not interested. In all, 24 (69%) of the 35 veterans who accepted the invitation attended the deliberation session. Of the 11 who accepted but did not attend, 5 cancelled ahead of time because of conflicts (Figure). Most participants were male (70%), 48% were aged 61 to 75 years, 65% were white, 43% had some college education, 43% reported an annual income between $25,000 and $40,000, and only 35% reported very good health (eAppendix D).

Deliberation 1

During the deliberation on prioritization criteria, the concept of “condition severity” emerged as an important criterion for veterans. This criterion captured simultaneous consideration of both clinical necessity and the burden on the veteran to obtain care. For example, participants felt that patients with a life-threatening illness should be prioritized for civilian care over patients who need preventive or primary care (clinical necessity) and that elderly patients with substantial difficulty traveling to VHA appointments should be prioritized over patients who can travel more easily (burden). The Choice Act regulations at the time of the DD session did not reflect this nuanced perspective, relying instead on wait-time and distance criteria such as living > 40 miles from the nearest VHA medical facility.

One of the 3 groups did not prioritize the patient cases because some members felt that no veteran should be constrained from receiving civilian care if they desired it. Nonetheless, this group did agree with prioritizing the first 2 cases in Table 3. The other groups prioritized all 8 cases in generally similar ways.

Deliberation 2

No clear consensus emerged on the buy vs build question. A representative from each table presented their group’s positions, rationale, and recommendations after deliberations were completed. After hearing the range of positions, the groups then had another opportunity to deliberate based on what they heard from the other tables; no new recommendations or consensus emerged.

Participants who were in favor of allocating more funds toward the build policy offered a range of rationales, saying that it would (1) increase access for rural veterans by building CBOCs and deploying more mobile units that could bring outlets for health care closer to their home communities; (2) provide critical and unique medical expertise to address veteran-specific issues such as prosthetics, traumatic brain injury, posttraumatic stress disorder, spinal cord injury, and shrapnel wounds that are typically not available through civilian providers; (3) give the VHA more oversight of the quality and cost of care, which is more challenging with civilian providers; and (4) improve VHA infrastructure by, for example, upgrading technology and attracting the best clinicians and staff to support “our VHA.”

Participants who were in favor of allocating more funds toward the buy policy also offered a range of rationales, saying that it would (1) decrease patient burden by increasing access through community providers, decreasing wait time, and lessening personal cost and travel time; (2) allow more patients to receive civilian care, which a few participants saw as beneficial because of perceptions that the VHA provides lower-quality care owing to a shortage of providers, run-down and older facilities, lack of technology, and poorer-quality clinicians; and (3) provide an opportunity to divest of costly facilities and invest in other innovative approaches. Regarding this last reason, a few participants felt that the VHA is “gouged” when building medical centers that overrun their budgets. They also were concerned that investing in facilities tied the VHA to specific locations when the locations of veterans may change “25 years from now.”

Survey Results

Twenty-three of the 24 participants completed both pre- and postsession surveys. The majority of participants felt that people in the group respected their opinions (96%); felt that the facilitator did not try to influence the group with her own opinions (96%); indicated that they understood the information well enough to participate as much as they wanted (100%); and were hopeful that their reasoning and recommendations would help inform VHA policy makers (82%).

The surveys also provided an opportunity to examine the extent to which knowledge, attitudes, and opinions changed from before to after the deliberation. Even with the small sample, responses revealed a trend toward improved knowledge about key elements of the Choice Act and its goals. Further, there was a shift in some participants’ opinions about how patients should be prioritized to receive civilian care. For example, before the deliberation participants generally felt that all veterans should be able to receive civilian care, whereas after the deliberation this was no longer the case. Postdeliberation, most participants felt that primary care should not be a high priority for civilian care but continued to endorse prioritizing civilian care for specialty services such as orthopedic or cardiology-related care. Finally, participants moved from diverse recommendations regarding the allocation of additional funds toward consensus around allocating funds to the build policy: 8 participants supported a build policy before the deliberation, whereas 16 supported it afterward.

Discussion

This study explored DD as a method for deeply engaging veterans in complex policy making to guide funding allocation and prioritization decisions related to the Choice Act, decisions that remain relevant today within the context of the Mission Act and have substantial implications for how health care is delivered in the VHA. The Mission Act was signed into law on June 6, 2018, with the goal of improving access to and the reliability of civilian or community care for eligible veterans.27 Allocating scarce funding to improve access to care is an emotional and value-laden topic that elicited strong and divergent opinions among the participants. Veterans were eager to have their voices heard and had strong expectations that VHA leadership would be briefed about their recommendations. The majority of participants were satisfied with the deliberation process, felt they understood the issues, and felt their opinions were respected. They expressed feelings of camaraderie and community throughout the process.

In this single deliberation session, the groups did not reach a final consensus on how VHA funding should ultimately be allocated between the buy and build policy options. Nonetheless, participants provided a rich array of recommendations and rationales for them. Session moderators observed rich, sophisticated, fair, and reason-based discussions on this complex topic. Participants left with deeper knowledge and appreciation of the complex trade-offs and expressed strong rationales for both sides of the build vs buy policy debate. In addition, the project yielded results of high interest to VHA policy makers.

This work was presented in multiple venues between 2015 and 2016 to both local and national VHA leadership, including the local Executive Quality Leadership Boards, the VHA Central Office Committee on the Future State of VA Community Care, the VA Office of Patient Centered Care, and the National Veteran Experience Committee. Through these discussions and others, we saw great interest within the VHA system and among high-level leaders in exploring ways to include veterans’ voices in the policy-making process. This work was invaluable to our research team (eAppendix E), has influenced the methodology of multiple research grants within the VA that seek to engage veterans in the research process, and played a pivotal role in the development of the Veteran Experience Office.

Many health system decisions regarding what care should be delivered (and how) involve making difficult, value-laden choices in the context of limited resources. DD methods can be used to target and obtain specific viewpoints from diverse populations, such as the informed perspectives of minority and underrepresented populations within the VHA.19 For example, female veterans were oversampled to ensure that the informed preferences of this population were obtained. Thus, DD methods could provide a valuable tool for health systems to elicit in-depth, diverse patient input on high-profile policies that will have a substantial impact on the system’s patient population.

Limitations

One potential downside of DD is that, because of the resource-intensive nature of deliberation sessions, they are often conducted with relatively small groups.9 Viewpoints of those within these small samples who are willing to spend an entire day discussing a complex topic may not be representative of the larger patient community. However, the core goal of DD is diversity of opinions rather than representativeness.

A stratified random sampling strategy that oversampled underrepresented and minority populations was used to help select a diverse group that represents the population on key characteristics, which partially addresses concerns about representativeness. Efforts to optimize participation rates, including providing monetary incentives, also are helpful and have led to high participation rates in past deliberations.7

Health system communication strategies that promote the importance of becoming involved in DD sessions also may help improve recruitment rates. On particularly important topics for which health system leaders feel a larger resource investment is justified, conducting larger-scale deliberations with many small groups may yield more generalizable evidence about what individual patients and groups of patients recommend.7 However, given the inherent limitations of surveys and focus groups for obtaining informed views on complex topics, there are no clear systematic alternatives to the DD approach.

Conclusion

DD is an effective method for meaningfully engaging patients in deep deliberations to guide complex policy making. Although the design of deliberative sessions is resource-intensive, patient engagement efforts such as those described in this paper could be an important aspect of a well-functioning learning health system. Further research into alternative, streamlined methods that can also engage veterans deeply is needed. DD also can be combined with other approaches, including focus groups, town hall meetings, or surveys, to broaden and confirm findings.

Although this study did not produce consensus on how the VHA should allocate funds with respect to the Choice Act, it did provide insight into the importance and feasibility of engaging veterans in the policy-making process. As more policies aimed at improving veterans’ access to civilian care are created, such as the recent Mission Act, policy makers should strongly consider using DD methods to obtain informed veteran input into future policy decisions.

Acknowledgments
Funding was provided by the US Department of Veterans Affairs Office of Analytics and Business Intelligence (OABI) and the VA Quality Enhancement Research Initiative (QUERI). Dr. Caverly was supported in part by a VA Career Development Award (CDA 16-151). Dr. Krein is supported by a VA Health Services Research and Development Research Career Scientist Award (RCS 11-222). The authors thank the veterans who participated in this work. They also thank Caitlin Reardon and Natalya Wawrin for their assistance in organizing the deliberation session.

References

1. VA Office of the Inspector General. Veterans Health Administration. Interim report: review of patient wait times, scheduling practices, and alleged patient deaths at the Phoenix Health Care System. https://www.va.gov/oig/pubs/VAOIG-14-02603-178.pdf. Published May 28, 2014. Accessed December 9, 2019.

2. Veterans Access, Choice, and Accountability Act of 2014. 42 USC §1395 (2014).

3. Penn M, Bhatnagar S, Kuy S, et al. Comparison of wait times for new patients between the private sector and United States Department of Veterans Affairs medical centers. JAMA Netw Open. 2019;2(1):e187096.

4. Thorpe JM, Thorpe CT, Schleiden L, et al. Association between dual use of Department of Veterans Affairs and Medicare Part D drug benefits and potentially unsafe prescribing. JAMA Intern Med. 2019; July 22. [Epub ahead of print.]

5. Moyo P, Zhao X, Thorpe CT, et al. Dual receipt of prescription opioids from the Department of Veterans Affairs and Medicare Part D and prescription opioid overdose death among veterans: a nested case-control study. Ann Intern Med. 2019;170(7):433-442.

6. Meyer LJ, Clancy CM. Care fragmentation and prescription opioids. Ann Intern Med. 2019;170(7):497-498.

7. Damschroder LJ, Pritts JL, Neblo MA, Kalarickal RJ, Creswell JW, Hayward RA. Patients, privacy and trust: patients’ willingness to allow researchers to access their medical records. Soc Sci Med. 2007;64(1):223-235.

8. Street J, Duszynski K, Krawczyk S, Braunack-Mayer A. The use of citizens’ juries in health policy decision-making: a systematic review. Soc Sci Med. 2014;109:1-9.

9. Paul C, Nicholls R, Priest P, McGee R. Making policy decisions about population screening for breast cancer: the role of citizens’ deliberation. Health Policy. 2008;85(3):314-320.

10. Martin D, Abelson J, Singer P. Participation in health care priority-setting through the eyes of the participants. J Health Serv Res Pol. 2002;7(4):222-229.

11. Mort M, Finch T. Principles for telemedicine and telecare: the perspective of a citizens’ panel. J Telemed Telecare. 2005;11(suppl 1):66-68.

12. Kass N, Faden R, Fabi RE, et al. Alternative consent models for comparative effectiveness studies: views of patients from two institutions. AJOB Empir Bioeth. 2016;7(2):92-105.

13. Carman KL, Mallery C, Maurer M, et al. Effectiveness of public deliberation methods for gathering input on issues in healthcare: results from a randomized trial. Soc Sci Med. 2015;133:11-20.

14. Carman KL, Maurer M, Mangrum R, et al. Understanding an informed public’s views on the role of evidence in making health care decisions. Health Aff (Millwood). 2016;35(4):566-574.

15. Kim SYH, Wall IF, Stanczyk A, De Vries R. Assessing the public’s views in research ethics controversies: deliberative democracy and bioethics as natural allies. J Empir Res Hum Res Ethics. 2009;4(4):3-16.

16. Gastil J, Levine P, eds. The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century. San Francisco, CA: Jossey-Bass; 2005.

17. Dryzek JS, Bächtiger A, Chambers S, et al. The crisis of democracy and the science of deliberation. Science. 2019;363(6432):1144-1146.

18. Blacksher E, Diebel A, Forest PG, Goold SD, Abelson J. What is public deliberation? Hastings Cent Rep. 2012;42(2):14-17.

19. Wang G, Gold M, Siegel J, et al. Deliberation: obtaining informed input from a diverse public. J Health Care Poor Underserved. 2015;26(1):223-242.

20. Simon RL, ed. The Blackwell Guide to Social and Political Philosophy. Malden, MA: Wiley-Blackwell; 2002.

21. Stanford University, Center for Deliberative Democracy. Deliberative polling on energy and environmental policy options in Japan. https://cdd.stanford.edu/2012/deliberative-polling-on-energy-and-environmental-policy-options-in-japan. Published August 12, 2012. Accessed December 9, 2019.

22. Damschroder LJ, Pritts JL, Neblo MA, Kalarickal RJ, Creswell JW, Hayward RA. Patients, privacy and trust: patients’ willingness to allow researchers to access their medical records. Soc Sci Med. 2007;64(1):223-235.

23. Carman KL, Maurer M, Mallery C, et al. Community forum deliberative methods demonstration: evaluating effectiveness and eliciting public views on use of evidence. Final report. https://effectivehealthcare.ahrq.gov/sites/default/files/pdf/deliberative-methods_research-2013-1.pdf. Published November 2014. Accessed December 9, 2019.

24. Sunstein CR, Hastie R. Wiser: Getting Beyond Groupthink to Make Groups Smarter. Boston, MA: Harvard Business Review Press; 2014.

25. Damschroder LJ, Kim SY. Assessing the quality of democratic deliberation: a case study of public deliberation on the ethics of surrogate consent for research. Soc Sci Med. 2010;70(12):1896-1903.

26. Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. Thousand Oaks: SAGE Publications, Inc; 1994.

27. US Department of Veterans Affairs. Veteran community care – general information. https://www.va.gov/COMMUNITYCARE/docs/pubfiles/factsheets/VHA-FS_MISSION-Act.pdf. Published September 9, 2019. Accessed December 9, 2019.



Focused Ethnography of Diagnosis in Academic Medical Centers


Diagnostic error—defined as a failure to establish an accurate and timely explanation of the patient’s health problem—is an important source of patient harm.1 Data suggest that most patients will experience at least 1 diagnostic error in their lifetime.2-4 Not surprisingly, diagnostic errors are among the leading categories of paid malpractice claims in the United States.5

Although diagnostic errors in the hospital cause substantial morbidity and are sometimes deadly,6,7 little is known about how residents and learners approach diagnostic decision making. Errors in diagnosis are believed to stem from cognitive or system failures,8 with cognitive errors believed to arise from rapid, reflexive thinking operating in the absence of a more analytical, deliberate process. System-based problems (eg, lack of expert availability, technology barriers, and access to data) also have been cited as contributors.9 However, whether and how these apply to trainees is not known.

Therefore, we conducted a focused ethnography of inpatient medicine teams (ie, attendings, residents, interns, and medical students) in 2 affiliated teaching hospitals, aiming to (a) observe the process of diagnosis by trainees and (b) identify methods to improve the diagnostic process and prevent errors.

METHODS

We designed a multimethod, focused ethnographic study to examine diagnostic decision making in hospital settings.10,11 In contrast to anthropologic ethnographies that study entire fields using open-ended questions, our study was designed to examine the process of diagnosis from the perspective of clinicians engaged in this activity.11 This approach allowed us to capture diagnostic decisions and cognitive and system-based factors in a manner currently lacking in the literature.12

Setting and Participants

Between January 2016 and May 2016, we observed the members of 4 inpatient internal medicine teaching teams at 2 affiliated teaching hospitals. We purposefully selected teaching teams for observation because they are the primary model of care in academic settings and we have expertise in carrying out similar studies.13,14 Teaching teams typically consisted of a medical attending (senior-level physician), 1 senior resident (a second- or third-year postgraduate trainee), 2 interns (trainees in their first postgraduate year), and 2 to 4 medical students. Teams were selected at random using existing schedules and followed Monday to Friday to permit observation of work on call and noncall days. Owing to manpower limitations, weekend and night shifts were not observed; however, overnight events were captured during morning rounds.

Most of the teams began rounds at 8:30 AM. Typically, rounds lasted 90 to 120 minutes and concluded with a recap (ie, “running the list”), a review of explicit plans for patients after they had been evaluated by the attending. This discussion often occurred in the team rooms, with the attending leading the discussion with the trainees.

Data Collection

A multidisciplinary team, including clinicians (eg, physicians, nurses), nonclinicians (eg, qualitative researchers, social scientists), and healthcare engineers, conducted the observations. We observed preround activities of interns and residents before arrival of the attending (7:00 AM - 8:30 AM), followed by morning rounds with the entire team, and afternoon work that included senior residents, interns, and students.

To capture multiple aspects of the diagnostic process, we collected data using field notes modeled on components of the National Academies of Sciences, Engineering, and Medicine (NASEM) model for diagnosis (Appendix).1,15 This model encompasses phases of the diagnostic process (eg, data gathering, integration, formulation of a working diagnosis, treatment delivery, and outcomes) and the work system (team members, organization, technology and tools, physical environment, and tasks).
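
To make that template concrete, each observation could be stored in a structure keyed to the NASEM dimensions; this is a hypothetical sketch, and the field names are illustrative rather than the study’s actual instrument.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DiagnosticFieldNote:
        # Phases of the diagnostic process
        data_gathering: str = ""
        integration: str = ""
        working_diagnosis: str = ""
        treatment_delivery: str = ""
        outcomes: str = ""
        # Work-system dimensions
        team_members: List[str] = field(default_factory=list)
        organization: str = ""
        technology_tools: str = ""
        physical_environment: str = ""
        tasks: str = ""

    note = DiagnosticFieldNote(
        working_diagnosis="team revisits admission diagnosis on rounds",
        team_members=["attending", "senior resident", "intern"],
    )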

Focus Groups and Interviews

At the end of weekly observations, we conducted focus groups with the residents and one-on-one interviews with the attendings. Focus groups were used to encourage group discussion about the diagnostic process, and separate interviews with the attendings ensured that power differentials did not influence discussions. During focus groups, we specifically asked about challenges and possible solutions to improve diagnosis. Experienced qualitative methodologists (J.F., M.H., M.Q.) used semistructured interview guides for discussions (Appendix).

Data Analysis

After aggregating and reading the data, 3 reviewers (V.C., S.K., S.S.) began inductive analysis by handwriting notes and initial reflective thoughts to create preliminary codes. Multiple team members then reread the original field notes and the focus group/interview data to refine the preliminary codes and develop additional codes. Next, relationships between codes were identified and used to develop key themes. Data collected from observations and from interview/focus group sessions were triangulated to compare what we observed and surmised with what team members verbalized. The developed themes were discussed as a group to ensure consistency of major findings.
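
As a rough illustration of the matrix step, coded excerpts can be cross-tabulated by team and by code; the team labels, codes, and counts below are invented placeholders, not study data.

    from collections import Counter

    # (team, code) pairs extracted from coded field notes.
    coded_excerpts = [
        ("team_A", "social_diagnosis"), ("team_A", "data_fragmentation"),
        ("team_B", "distractions"), ("team_B", "social_diagnosis"),
        ("team_C", "time_pressure"), ("team_C", "data_fragmentation"),
    ]

    # Matrix analysis: rows = teams, columns = codes, cells = counts.
    counts = Counter(coded_excerpts)
    teams = sorted({t for t, _ in coded_excerpts})
    codes = sorted({c for _, c in coded_excerpts})
    for team in teams:
        print(team, {c: counts[(team, c)] for c in codes})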

Ethical and Regulatory Oversight

This study was reviewed and approved by the Institutional Review Boards at the University of Michigan Health System (HUM-00106657) and the VA Ann Arbor Healthcare System (1-2016-010040).

RESULTS

Four teaching teams (4 attendings, 4 senior residents, 9 interns, and 14 medical students) were observed over 33 distinct shifts and 168 hours. Observations included morning rounds (96 h), postround call days (52 h), and postround non-call days (20 h). Morning rounds lasted an average of 127 min (range: 48-232 min) and included an average of 9 patients (range: 4-16 patients).

Themes Regarding the Diagnostic Process

We identified the following 4 primary themes related to the diagnostic process in teaching hospitals: (1) diagnosis is a social phenomenon; (2) data necessary to make diagnoses are fragmented; (3) distractions undermine the diagnostic process; and (4) time pressures interfere with diagnostic decision making (Appendix Table 1).

(1) Diagnosis Is a Social Phenomenon

Team members viewed the process of diagnosis as a social exchange of facts, findings, and strategies within a defined structure. The opportunity to discuss impressions with others was valued as a means to share, test, and process assumptions.

“Rounds are the most important part of the process. That is where we make most decisions in a collective, collaborative way with the attending present. We bounce ideas off each other.” (Intern)

Typical of social processes, variations based on time of day and schedule were observed. For instance, during call days, learners gathered data and formed working diagnoses and treatment plans with minimal attending interaction. This separation of roles and responsibilities introduced a hierarchy within diagnosis:

“The interns would not call me first; they would talk to the senior resident and then if the senior thought he should chat with me, then they would call. But for the most part, they gather information and come up with the plan.” (Attending).

The work system was suited to facilitating social interactions. For instance, designated rooms (with team members informally assigned to a computer) provided physical proximity of the resident to interns and medical students. In this space, numerous informal discussions between team members (eg, “What do you think about this test?” “I’m not sure what to do about this finding.” “Should I call a [consult] on this patient?”) were observed. Although proximity to each other was viewed as beneficial, dangers to the social nature of diagnosis in the form of anchoring (ie, a cognitive bias in which emphasis is placed on the first piece of data)16 were also mentioned. Similarly, the paradox associated with social proof (ie, the pressure to assume conformity within a group) was observed, as disagreement between team members and attendings rarely occurred during observations.

“I mean, they’re the attending, right? It’s hard to argue with them when they want a test or something done. When I do push back, it’s rare that others will support me–so it’s usually me and the attending.” (Resident)

“I would push back if I think it’s really bad for the patient or could cause harm–but the truth is, it doesn’t happen much.” (Intern)

(2) Data Necessary to Make Diagnoses Are Fragmented

Team members universally cited fragmentation in data delivery, retrieval, and processing as a barrier to diagnosis. Team members indicated that test results might not be looked at or acted upon in a timely manner, and participants pointed to the electronic medical record as a source of this challenge.

“Before I knew about [the app for Epic], I would literally sit on the computer to get all the information we would need on rounds. It’s key to making decisions. We often say we will do something, only to find the test result doesn’t support it–and then we’re back to square 1.” (Intern)

Information used by teams came from myriad sources (eg, patients, family members, electronic records) and from various settings (eg, emergency department, patient rooms, discussions with consultants). Additionally, test results often appeared without warning. Thus, availability of information was poorly aligned with clinical duties.

“They (the lab) will call us when a blood culture is positive or something is off. That is very helpful but it often comes later in the day, when we’re done with rounds.” (Resident)

The work system was highlighted as a key contributor to data fragmentation. Peculiarities of our electronic medical record (EMR) and how data were collected, stored, or presented were described as “frustrating” and “unsafe” by team members. Correspondingly, we frequently observed interns asking for assistance with tasks such as ordering tests or finding information despite having been “trained” to use the EMR.

“People have to learn how to filter, how to recognize the most important points and link data streams together in terms of causality. But we assume they know where to find that information. It’s actually a very hard thing to do, for both the house staff and me.” (Attending)

(3) Distractions Undermine the Diagnostic Process

Distractions often created cognitive difficulties. For example, ambient noise and interruptions from neighbors working on other teams were cited as barriers to diagnosis. In addition, we observed several team members using headphones to drown out ambient noise while working on the computer.

“I know I shouldn’t do it (wear headphones), but I have no other way of turning down the noise so I can concentrate.” (Intern)

Similarly, the unpredictable nature and the volume of pages often interrupted thinking about diagnosis.

“Sometimes the pager just goes off all the time and (after making sure it’s not an urgent issue), I will just ignore it for a bit, especially if I am in the middle of something. It would be great if I could finish my thought process knowing I would not be interrupted.” (Resident)

To mitigate this problem, 1 attending described how he would proactively seek out nurses caring for his patients to “head off” questions (eg, “I will renew the restraints and medications this morning,” and “Is there anything you need in terms of orders for this patient that I can take care of now?”) that might lead to pages. Another resident described his approach as follows:

“I make it a point to tell the nurses where I will be hanging out and where they can find me if they have any questions. I tell them to come talk to me rather than page me since that will be less distracting.” (Resident).

Most of the interns described documentation work such as writing admission and progress notes in negative terms (“an academic exercise,” “part of the billing activity”). However, in the context of interruptions, some described this as helpful.

“The most valuable part of the thinking process was writing the assessment and plan because that’s actually my schema for all problems. It literally is the only time where I can sit and collect my thoughts to formulate a diagnosis and plan.” (Intern)

(4) Time Pressures Interfere With Diagnostic Decision Making

All team members spoke about the challenge of finding time for diagnosis during the workday. Often, they had to skip learning sessions for this purpose.

“They tell us we should go to morning report or noon conference but when I’m running around trying to get things done. I hate having to choose between my education and doing what’s best for the patient–but that’s often what it comes down to.” (Intern)

When asked whether setting aside dedicated time to specifically review and formulate diagnoses would be valuable, respondents were uniformly enthusiastic. Team members described attentional conflicts as being worst when “cross covering” other teams on call days, as their patient load effectively doubled during this time. Of note, cross-covering occurred when teams were also on call—and thus took them away from important diagnostic activities, such as data gathering and synthesis, for patients they were admitting.

“If you were to ever design a system where errors were likely–this is how you would design it: take a team with little supervision, double their patient load, keep them busy with new challenging cases and then ask questions about patients they know little about.” (Resident)

DISCUSSION

Although diagnostic errors have been called “the next frontier for patient safety,”17 little is known about the process, barriers, and facilitators to diagnosis in teaching hospitals. In this focused ethnography conducted at 2 academic medical centers, we identified multiple cognitive and system-level challenges and potential strategies to improve diagnosis from trainees engaged in this activity. Key themes identified by those we observed included the social nature of diagnosis, fragmented information delivery, constant distractions and interruptions, and time pressures. In turn, these insights allow us to generate strategies that can be applied to improve the diagnostic process in teaching hospitals.

 

 

Our study underscores the importance of social interactions in diagnosis. In contrast, most interventions to prevent diagnostic errors target individual providers through practices such as metacognition and “thinking about thinking.”18-20 These interventions are based on Daniel Kahneman’s work on dual-process thinking. Type 1 thought processes are fast, subconscious, reflexive, largely intuitive, and more vulnerable to error. In contrast, Type 2 processes are slower, deliberate, analytic, and less prone to error.21 Although an individual’s Type 2 thought capacity is limited, a major goal of cognitive interventions is to encourage Type 2 over Type 1 thinking, an approach termed “de-biasing.”22-24 Unfortunately, cognitive interventions testing such approaches have had mixed results, perhaps because they lack a focus on collective wisdom or group thinking, which our findings suggest may be key to diagnosis.9,25 In this sense, morning rounds were a social gathering used to strategize and develop care plans, but with limited time to think about diagnosis.26 Introducing defined periods for individuals to engage in diagnostic activities such as de-biasing (ie, asking “what else could this be?”)27 before or after rounds may provide an opportunity for reflection and improve diagnosis. In addition, embedding tools such as diagnosis expanders and checklists within these defined time slots28,29 may prove useful for reflecting on diagnosis and preventing diagnostic errors.

An unexpected yet important finding from this study was the challenge posed by distractions and the physical environment. Potentially maladaptive workarounds to these interruptions included the use of headphones; more productive strategies included updating nurses with plans to avert pages and creating a list of activities to ensure that key tasks were not forgotten.30,31 Applying lessons from aviation, a focused effort to limit distractions during key portions of the day might be worth considering for diagnostic safety.32 Similarly, improving the environment in which diagnosis occurs—including creating spaces that are quiet, orderly, and optimized for thinking—may be valuable.33

Our study has limitations. First, our findings are limited to direct observations; we are thus unable to comment on how unobserved aspects of care (eg, cognitive processes) might have influenced our findings. Our observations of clinical care might also have introduced a Hawthorne effect. However, because we were closely integrated with teams and conducted focus groups to corroborate our assessments, we believe that this was not the case. Second, we did not identify diagnostic errors or link the processes we observed to errors. Third, our approach was limited to 2 teaching centers, which limits the generalizability of the findings. Relatedly, we were able to conduct observations only during weekdays; differences in weekend and night resources might affect our insights.

The cognitive and system-based barriers faced by clinicians in teaching hospitals suggest that new methods to improve diagnosis are needed. Future interventions, such as defined “time-outs” for diagnosis, strategies focused on limiting distractions, and methods to improve communication between team members, are novel and have parallels in other industries. As challenges in quantifying diagnostic errors abound,34 improving cognitive and system-based factors via reflection, communication, concentration, and organization is necessary to improve medical decision making in academic medical centers.

Disclosures

None declared for all coauthors.

Funding

This project was supported by grant number P30HS024385 from the Agency for Healthcare Research and Quality. The funding source played no role in study design, data acquisition, analysis or decision to report these data. Dr. Chopra is supported by a career development award from the Agency of Healthcare Research and Quality (1-K08-HS022835-01). Dr. Krein is supported by a VA Health Services Research and Development Research Career Scientist Award (RCS 11-222). Dr. Singh is partially supported by Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality or the Department of Veterans Affairs.

Files
References

1. National Academies of Sciences, Engineering, and Medicine. 2015. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press. http://www.nap.edu/21794. Accessed November 1; 2016:2015. https://doi.org/10.17226/21794.
2. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-1887. http://dx.doi.org/10.1001/archinternmed.2009.333. PubMed
3. Sonderegger-Iseli K, Burger S, Muntwyler J, Salomon F. Diagnostic errors in three medical eras: A necropsy study. Lancet. 2000;355(9220):2027-2031. http://dx.doi.org/10.1016/S0140-6736(00)02349-7PubMed
4. Winters B, Custer J, Galvagno SM Jr, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf. 2012;21(11):894-902. http://dx.doi.org/10.1136/bmjqs-2012-000803. PubMed
5. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-Year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672-680. http://dx.doi.org/10.1136/bmjqs-2012-001550PubMed
6. Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what’s the goal? Acad Med. 2002;77(10):981-992. http://dx.doi.org/10.1097/00001888-200210000-00009PubMed
7. Gupta A, Snyder A, Kachalia A, Flanders S, Saint S, Chopra V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf. 2018;27(1):53-60. 10.1136/bmjqs-2017-006774. PubMed
8. van Noord I, Eikens MP, Hamersma AM, de Bruijne MC. Application of root cause analysis on malpractice claim files related to diagnostic failures. Qual Saf Health Care. 2010;19(6):e21. http://dx.doi.org/10.1136/qshc.2008.029801PubMed
9. Croskerry P, Petrie DA, Reilly JB, Tait G. Deciding about fast and slow decisions. Acad Med. 2014;89(2):197-200. 10.1097/ACM.0000000000000121. PubMed
10. Higginbottom GM, Pillay JJ, Boadu NY. Guidance on performing focused ethnographies with an emphasis on healthcare research. Qual Rep. 2013;18(9):1-6. https://doi.org/10.7939/R35M6287P. 
11. Savage J. Participative observation: standing in the shoes of others? Qual Health Res. 2000;10(3):324-339. http://dx.doi.org/10.1177/104973200129118471PubMed
12. Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: SAGE Publications; 2002. 
13. Harrod M, Weston LE, Robinson C, Tremblay A, Greenstone CL, Forman J. “It goes beyond good camaraderie”: A qualitative study of the process of becoming an interprofessional healthcare “teamlet.” J Interprof Care. 2016;30(3):295-300. http://dx.doi.org/10.3109/13561820.2015.1130028PubMed
14. Houchens N, Harrod M, Moody S, Fowler KE, Saint S. Techniques and behaviors associated with exemplary inpatient general medicine teaching: an exploratory qualitative study. J Hosp Med. 2017;12(7):503-509. http://dx.doi.org/10.12788/jhm.2763PubMed
15. Mulhall A. In the field: notes on observation in qualitative research. J Adv Nurs. 2003;41(3):306-313. http://dx.doi.org/10.1046/j.1365-2648.2003.02514.xPubMed
16. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. 2017;26(2):104-110. http://dx.doi.org/10.1136/bmjqs-2015-005014PubMed
17. Singh H, Graber ML. Improving diagnosis in health care--the next imperative for patient safety. N Engl J Med. 2015;373(26):2493-2495. http://dx.doi.org/10.1056/NEJMp1512241PubMed
18. Croskerry P. From mindless to mindful practice--cognitive bias and clinical decision making. N Engl J Med. 2013;368(26):2445-2448. http://dx.doi.org/10.1056/NEJMp1303712PubMed
19. van den Berge K, Mamede S. Cognitive diagnostic error in internal medicine. Eur J Intern Med. 2013;24(6):525-529. http://dx.doi.org/10.1016/j.ejim.2013.03.006PubMed
20. Norman G, Sherbino J, Dore K, et al. The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Acad Med. 2014;89(2):277-284. 10.1097/ACM.0000000000000105 PubMed
21. Dhaliwal G. Premature closure? Not so fast. BMJ Qual Saf. 2017;26(2):87-89. http://dx.doi.org/10.1136/bmjqs-2016-005267PubMed
22. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: Origins of bias and theory of debiasing. BMJ Qual Saf. 2013;22(suppl 2):ii58-iiii64. http://dx.doi.org/10.1136/bmjqs-2012-001712PubMed
23. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 2: Impediments to and strategies for change. BMJ Qual Saf. 2013;22(suppl 2):ii65-iiii72. http://dx.doi.org/10.1136/bmjqs-2012-001713PubMed
24. Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf. 2013;22(12):1044-1050. http://dx.doi.org/10.1136/bmjqs-2013-001987PubMed
25. Schmidt HG, Mamede S, van den Berge K, van Gog T, van Saase JL, Rikers RM. Exposure to media information about a disease can cause doctors to misdiagnose similar-looking clinical cases. Acad Med. 2014;89(2):285-291. http://dx.doi.org/10.1097/ACM.0000000000000107PubMed
26. Hess BJ, Lipner RS, Thompson V, Holmboe ES, Graber ML. Blink or think: can further reflection improve initial diagnostic impressions? Acad Med. 2015;90(1):112-118. http://dx.doi.org/10.1097/ACM.0000000000000550PubMed
27. Lambe KA, O’Reilly G, Kelly BD, Curristan S. Dual-process cognitive interventions to enhance diagnostic reasoning: A systematic review. BMJ Qual Saf. 2016;25(10):808-820. http://dx.doi.org/10.1136/bmjqs-2015-004417PubMed
28. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535-557. http://dx.doi.org/10.1136/bmjqs-2011-000149PubMed
29. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med. 2013;158(5 Pt 2):381-389. http://dx.doi.org/10.7326/0003-4819-158-5-201303051-00004PubMed
30. Wray CM, Chaudhry S, Pincavage A, et al. Resident shift handoff strategies in US internal medicine residency programs. JAMA. 2016;316(21):2273-2275. http://dx.doi.org/10.1001/jama.2016.17786PubMed
31. Choo KJ, Arora VM, Barach P, Johnson JK, Farnan JM. How do supervising physicians decide to entrust residents with unsupervised tasks? A qualitative analysis. J Hosp Med. 2014;9(3):169-175. http://dx.doi.org/10.1002/jhm.2150PubMed
32. Carayon P, Wood KE. Patient safety - the role of human factors and systems engineering. Stud Health Technol Inform. 2010;153:23-46.

 

 

 

.http://dx.doi.org/10.1001/jama.2015.13453  PubMed

34. McGlynn EA, McDonald KM, Cassel CK. Measurement is essential for improving diagnosis and reducing diagnostic error: A report from the Institute of Medicine. JAMA. 2015;314(23):2501-2502.
.http://dx.doi.org/10.1136/bmjqs-2013-001812 PubMed

33. Carayon P, Xie A, Kianfar S. Human factors and ergonomics as a patient safety practice. BMJ Qual Saf. 2014;23(3):196-205. PubMed

 

Article PDF
Issue
Journal of Hospital Medicine 13(10)
Publications
Topics
Page Number
668-672. Published online first April 25, 2018
Sections
Files
Files
Article PDF
Article PDF
Related Articles

Diagnostic error—defined as a failure to establish an accurate and timely explanation of the patient’s health problem—is an important source of patient harm.1 Data suggest that most people will experience at least 1 diagnostic error in their lifetime.2-4 Not surprisingly, diagnostic errors are among the leading categories of paid malpractice claims in the United States.5

Although diagnostic errors cause substantial morbidity and are sometimes deadly in the hospital,6,7 little is known about how residents and learners approach diagnostic decision making. Errors in diagnosis are believed to stem from cognitive or system failures,8 with cognitive errors thought to arise when rapid, reflexive thinking operates in the absence of a more analytical, deliberate process. System-based problems (eg, lack of expert availability, technology barriers, and access to data) have also been cited as contributors.9 However, whether and how these factors apply to trainees is not known.

Therefore, we conducted a focused ethnography of inpatient medicine teams (ie, attendings, residents, interns, and medical students) in 2 affiliated teaching hospitals, aiming to (a) observe the process of diagnosis by trainees and (b) identify methods to improve the diagnostic process and prevent errors.

METHODS

We designed a multimethod, focused ethnographic study to examine diagnostic decision making in hospital settings.10,11 In contrast to anthropologic ethnographies that study entire fields using open-ended questions, our study was designed to examine the process of diagnosis from the perspective of clinicians engaged in this activity.11 This approach allowed us to capture diagnostic decisions and cognitive and system-based factors in a manner currently lacking in the literature.12

Setting and Participants

Between January 2016 and May 2016, we observed the members of 4 inpatient internal medicine teaching teams at 2 affiliated teaching hospitals. We purposefully selected teaching teams for observation because they are the primary model of care in academic settings and because we have expertise in carrying out similar studies.13,14 Teaching teams typically consisted of 1 medical attending (a senior-level physician), 1 senior resident (a second- or third-year postgraduate trainee), 2 interns (trainees in their first postgraduate year), and 2 to 4 medical students. Teams were selected at random using existing schedules and were followed Monday to Friday to permit observation of work on both call and noncall days. Owing to staffing limitations, weekend and night shifts were not observed; however, overnight events were captured during morning rounds.

Most of the teams began rounds at 8:30 AM. Typically, rounds lasted for 90–120 min and concluded with a recap (ie, “running the list”) with a review of explicit plans for patients after they had been evaluated by the attending. This discussion often occurred in the team rooms, with the attending leading the discussion with the trainees.

Data Collection

A multidisciplinary team, including clinicians (eg, physicians, nurses), nonclinicians (eg, qualitative researchers, social scientists), and healthcare engineers, conducted the observations. We observed preround activities of interns and residents before arrival of the attending (7:00 AM - 8:30 AM), followed by morning rounds with the entire team, and afternoon work that included senior residents, interns, and students.

To capture multiple aspects of the diagnostic process, we collected data using field notes modeled on components of the National Academies of Sciences, Engineering, and Medicine model for diagnosis (Appendix).1,15 This model encompasses phases of the diagnostic process (eg, data gathering, integration, formulation of a working diagnosis, treatment delivery, and outcomes) and the work system (team members, organization, technology and tools, physical environment, and tasks).
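
For readers who wish to structure similar observational data, the sketch below shows one way framework-based field notes could be represented in code. It is a minimal illustration under our own assumptions: the class, field, and category names are hypothetical and do not reproduce the study’s actual instrument (see Appendix).

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative categories drawn from the National Academies model:
    # phases of the diagnostic process and elements of the work system.
    DIAGNOSTIC_PHASES = [
        "data gathering", "integration", "working diagnosis",
        "treatment delivery", "outcomes",
    ]
    WORK_SYSTEM_ELEMENTS = [
        "team members", "organization", "technology and tools",
        "physical environment", "tasks",
    ]

    @dataclass
    class FieldNote:
        """One observation entry; all fields are hypothetical."""
        team: str
        setting: str              # eg, "morning rounds"
        phase: str                # one of DIAGNOSTIC_PHASES
        work_system: str          # one of WORK_SYSTEM_ELEMENTS
        note: str
        codes: List[str] = field(default_factory=list)

    example = FieldNote(
        team="Team A", setting="morning rounds",
        phase="integration", work_system="team members",
        note="Intern discusses differential with resident before attending arrives.",
    )

Structuring each note against both the process phases and the work-system elements makes it easier to see, later in analysis, where in the diagnostic process a barrier occurred and which part of the system contributed.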

Focus Groups and Interviews

At the end of weekly observations, we conducted focus groups with the residents and one-on-one interviews with the attendings. Focus groups with residents were conducted to encourage group discussion about the diagnostic process; separate interviews with attendings were performed to ensure that power differentials did not influence the discussions. During focus groups, we specifically asked about challenges and possible solutions to improve diagnosis. Experienced qualitative methodologists (J.F., M.H., M.Q.) used semistructured interview guides for discussions (Appendix).

 

 

Data Analysis

After aggregating and reading the data, 3 reviewers (V.C., S.K., S.S.) began inductive analysis by handwriting notes and initial reflective thoughts to create preliminary codes. Multiple team members then reread the original field notes and the focus group/interview data to refine the preliminary codes and develop additional codes. Next, relationships between codes were identified and used to develop key themes. We triangulated the data from observations and from the interview/focus group sessions, comparing what we inferred from observation with what team members verbalized. The developed themes were discussed as a group to ensure consistency of major findings.
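
As a rough illustration of the coding-to-themes step, the sketch below tallies preliminary codes into candidate themes. The code-to-theme mapping is invented for illustration and is not the authors’ actual codebook or analysis software.

    from collections import Counter

    # Hypothetical mapping from preliminary codes to candidate themes.
    CODE_TO_THEME = {
        "bounce ideas": "diagnosis is a social phenomenon",
        "hierarchy": "diagnosis is a social phenomenon",
        "late results": "data are fragmented",
        "EMR workaround": "data are fragmented",
        "pager interruption": "distractions undermine diagnosis",
        "skipped conference": "time pressures interfere",
    }

    def theme_counts(coded_notes):
        """Count how often each theme is supported across coded notes."""
        counts = Counter()
        for codes in coded_notes:
            for code in codes:
                theme = CODE_TO_THEME.get(code)
                if theme:
                    counts[theme] += 1
        return counts

    # Each inner list holds the codes assigned to one field note.
    print(theme_counts([["bounce ideas", "pager interruption"], ["late results"]]))

In practice the mapping itself emerges iteratively from rereading the notes, as described above, rather than being fixed in advance.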

Ethical and Regulatory Oversight

This study was reviewed and approved by the Institutional Review Boards at the University of Michigan Health System (HUM-00106657) and the VA Ann Arbor Healthcare System (1-2016-010040).

RESULTS

Four teaching teams (4 attendings, 4 senior residents, 9 interns, and 14 medical students) were observed over 33 distinct shifts and 168 hours. Observations included morning rounds (96 h), postround call days (52 h), and postround noncall days (20 h). Morning rounds lasted an average of 127 min (range: 48-232 min) and included an average of 9 patients (range: 4-16 patients).
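
A trivial arithmetic check, using only the totals reported above, confirms that the observation hours are internally consistent:

    # Hours reported for each observation type should sum to the 168-hour total.
    rounds_h, call_days_h, noncall_days_h = 96, 52, 20
    total_h = rounds_h + call_days_h + noncall_days_h
    assert total_h == 168  # matches the reported total

    shifts = 33
    print(f"Mean observation time per shift: {total_h / shifts:.1f} h")  # ~5.1 h
    print(f"Mean rounds duration: {127 / 60:.1f} h")                     # ~2.1 h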

Themes Regarding the Diagnostic Process

We identified the following 4 primary themes related to the diagnostic process in teaching hospitals: (1) diagnosis is a social phenomenon; (2) data necessary to make diagnoses are fragmented; (3) distractions undermine the diagnostic process; and (4) time pressures interfere with diagnostic decision making (Appendix Table 1).

(1) Diagnosis is a Social Phenomenon.

Team members viewed the process of diagnosis as a social exchange of facts, findings, and strategies within a defined structure. The opportunity to discuss impressions with others was valued as a means to share, test, and process assumptions.

“Rounds are the most important part of the process. That is where we make most decisions in a collective, collaborative way with the attending present. We bounce ideas off each other.” (Intern)

Typical of social processes, variations based on time of day and schedule were observed. For instance, during call days, learners gathered data and formed working diagnoses and treatment plans with minimal attending interaction. This separation of roles and responsibilities introduced a hierarchy within diagnosis:

“The interns would not call me first; they would talk to the senior resident and then if the senior thought he should chat with me, then they would call. But for the most part, they gather information and come up with the plan.” (Attending)

The work system was well suited to facilitating social interactions. For instance, designated rooms (with team members informally assigned to a computer) provided physical proximity of the resident to interns and medical students. In this space, numerous informal discussions between team members (eg, “What do you think about this test?” “I’m not sure what to do about this finding.” “Should I call a [consult] on this patient?”) were observed. Although proximity to each other was viewed as beneficial, dangers to the social nature of diagnosis in the form of anchoring (ie, a cognitive bias in which emphasis is placed on the first piece of data)16 were also mentioned. Similarly, the paradox associated with social proof (ie, the pressure to conform within a group) was evident: disagreement between team members and attendings rarely occurred during observations.

“I mean, they’re the attending, right? It’s hard to argue with them when they want a test or something done. When I do push back, it’s rare that others will support me–so it’s usually me and the attending.” (Resident)

“I would push back if I think it’s really bad for the patient or could cause harm–but the truth is, it doesn’t happen much.” (Intern)

(2) Data Necessary to Make Diagnoses are Fragmented

Team members universally cited fragmentation in data delivery, retrieval, and processing as a barrier to diagnosis. Team members indicated that test results might not be looked at or acted upon in a timely manner, and participants pointed to the electronic medical record as a source of this challenge.

“Before I knew about [the app for Epic], I would literally sit on the computer to get all the information we would need on rounds. Its key to making decisions. We often say we will do something, only to find the test result doesn’t support it–and then we’re back to square 1.” (Intern)

Information used by teams came from myriad sources (eg, patients, family members, electronic records) and from various settings (eg, emergency department, patient rooms, discussions with consultants). Additionally, test results often appeared without warning. Thus, availability of information was poorly aligned with clinical duties.

 

 

“They (the lab) will call us when a blood culture is positive or something is off. That is very helpful but it often comes later in the day, when we’re done with rounds.” (Resident)

The work system was highlighted as a key contributor to data fragmentation. Peculiarities of our electronic medical record (EMR), including how data were collected, stored, and presented, were described as “frustrating” and “unsafe” by team members. Correspondingly, we frequently observed interns asking for assistance with tasks such as ordering tests or finding information despite having been “trained” to use the EMR.

“People have to learn how to filter, how to recognize the most important points and link data streams together in terms of causality. But we assume they know where to find that information. It’s actually a very hard thing to do, for both the house staff and me.” (Attending)

(3) Distractions Undermine the Diagnostic Process

Distractions often created cognitive difficulties. For example, ambient noise and interruptions from neighbors working on other teams were cited as barriers to diagnosis. In addition, we observed several team members using headphones to drown out ambient noise while working on the computer.

“I know I shouldn’t do it (wear headphones), but I have no other way of turning down the noise so I can concentrate.” (Intern)

Similarly, the unpredictable nature and the volume of pages often interrupted thinking about diagnosis.

“Sometimes the pager just goes off all the time and (after making sure its not an urgent issue), I will just ignore it for a bit, especially if I am in the middle of something. It would be great if I could finish my thought process knowing I would not be interrupted.” (Resident)

To mitigate this problem, 1 attending described how he would proactively seek out nurses caring for his patients to “head off” questions (eg, “I will renew the restraints and medications this morning,” and “Is there anything you need in terms of orders for this patient that I can take care of now?”) that might lead to pages. Another resident described his approach as follows:

“I make it a point to tell the nurses where I will be hanging out and where they can find me if they have any questions. I tell them to come talk to me rather than page me since that will be less distracting.” (Resident)

Most of the interns described documentation work such as writing admission and progress notes in negative terms (“an academic exercise,” “part of the billing activity”). However, in the context of interruptions, some described this as helpful.

“The most valuable part of the thinking process was writing the assessment and plan because that’s actually my schema for all problems. It literally is the only time where I can sit and collect my thoughts to formulate a diagnosis and plan.” (Intern)
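
To make the idea of protected, interruption-free periods concrete, here is a speculative sketch of how routine pages might be held and released in batches at defined times, in the spirit of the resident’s wish to "finish my thought process." It is purely illustrative; the Page and PageQueue names are our own, and no such system was described or observed in the study.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Page:
        priority: int                       # 0 = urgent; higher = more routine
        message: str = field(compare=False)

    class PageQueue:
        """Deliver urgent pages immediately; hold routine ones for batch release."""
        def __init__(self):
            self._held = []

        def receive(self, page: Page):
            if page.priority == 0:
                print(f"DELIVER NOW: {page.message}")
            else:
                heapq.heappush(self._held, page)

        def release_batch(self):
            """Called at a defined interval, eg, after rounds."""
            while self._held:
                print(f"Batched: {heapq.heappop(self._held).message}")

    q = PageQueue()
    q.receive(Page(0, "Patient in room 4 hypotensive"))
    q.receive(Page(2, "Pharmacy question about dosing"))
    q.release_batch()  # routine messages arrive together, after protected time

Any real implementation would need a safe, clinically validated triage rule for what counts as urgent; the point here is only the batching pattern.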

(4) Time Pressures Interfere With Diagnostic Decision Making

All team members spoke about the challenge of finding time for diagnosis during the workday. Often, they had to skip learning sessions for this purpose.

“They tell us we should go to morning report or noon conference but when I’m running around trying to get things done. I hate having to choose between my education and doing what’s best for the patient–but that’s often what it comes down to.” (Intern)

When asked whether setting aside dedicated time to review and formulate diagnoses would be valuable, respondents were uniformly enthusiastic. Team members described attentional conflicts as worst when “cross covering” other teams on call days, as their patient load effectively doubled during this time. Of note, cross-covering occurred when teams were also on call—and thus took them away from important diagnostic activities such as data gathering and synthesis for patients they were admitting.

“If you were to ever design a system where errors were likely–this is how you would design it: take a team with little supervision, double their patient load, keep them busy with new challenging cases and then ask questions about patients they know little about.” (Resident)

DISCUSSION

Although diagnostic errors have been called “the next frontier for patient safety,”17 little is known about the process, barriers, and facilitators to diagnosis in teaching hospitals. In this focused ethnography conducted at 2 academic medical centers, we identified multiple cognitive and system-level challenges and potential strategies to improve diagnosis from trainees engaged in this activity. Key themes identified by those we observed included the social nature of diagnosis, fragmented information delivery, constant distractions and interruptions, and time pressures. In turn, these insights allow us to generate strategies that can be applied to improve the diagnostic process in teaching hospitals.

 

 

Our study underscores the importance of social interactions in diagnosis. In contrast, most interventions to prevent diagnostic errors target individual providers through practices such as metacognition and “thinking about thinking.”18-20 These interventions are based on Daniel Kahneman’s work on dual-process thinking. Type 1 processes are fast, subconscious, reflexive, largely intuitive, and more vulnerable to error; Type 2 processes are slower, deliberate, analytic, and less prone to error.21 Because an individual’s Type 2 capacity is limited, a major goal of cognitive interventions is to encourage Type 2 over Type 1 thinking, an approach termed “de-biasing.”22-24 Unfortunately, interventions testing such approaches have produced mixed results, perhaps because they neglect the collective wisdom or group thinking that our findings suggest is key to diagnosis.9,25 In this sense, morning rounds were a social gathering used to strategize and develop care plans, but with limited time to think about diagnosis.26 Introducing defined periods for individuals to engage in diagnostic activities such as de-biasing (ie, asking “what else could this be?”)27 before or after rounds may provide an opportunity for reflection and improve diagnosis. In addition, embedding tools such as diagnosis expanders and checklists within these defined time slots28,29 may prove useful for reflecting on diagnoses and preventing diagnostic errors.
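
As an illustration of how a checklist could be embedded in such a defined time slot, the sketch below prints a brief “diagnostic time-out” prompt. The first question echoes the de-biasing question cited above; the other prompts and all function and variable names are our own invention, not a validated instrument.

    # Hypothetical prompts for a brief post-rounds diagnostic time-out.
    TIME_OUT_PROMPTS = [
        "What else could this be?",                     # de-biasing question
        "What data do not fit the working diagnosis?",  # invented example
        "Which pending results could change the plan?", # invented example
    ]

    def run_time_out(patient, working_diagnosis):
        """Print a short reflection checklist for one patient."""
        print(f"Diagnostic time-out for {patient} ({working_diagnosis}):")
        for prompt in TIME_OUT_PROMPTS:
            print(f" - {prompt}")

    run_time_out("Bed 12", "community-acquired pneumonia")

The value of such a tool would lie less in the software than in the protected minutes it structures; the prompts simply make the reflection explicit.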

An unexpected yet important finding from this study was the challenge posed by distractions and the physical environment. Potentially maladaptive workarounds to these interruptions included the use of headphones; more productive strategies included updating nurses with plans to avert pages and creating a list of activities to ensure that key tasks were not forgotten.30,31 Applying lessons from aviation, a focused effort to limit distractions during key portions of the day might be worth considering for diagnostic safety.32 Similarly, improving the environment in which diagnosis occurs—including creating spaces that are quiet, orderly, and optimized for thinking—may be valuable.33

Our study has limitations. First, our findings are limited to direct observations; we are thus unable to comment on how unobserved aspects of care (eg, cognitive processes) might have influenced them. Our observations of clinical care might also have introduced a Hawthorne effect; however, because we were closely integrated with the teams and conducted focus groups to corroborate our assessments, we believe any such effect was small. Second, we did not identify diagnostic errors or link the processes we observed to errors. Third, our study was limited to 2 teaching centers, which limits the generalizability of the findings. Relatedly, we were able to conduct observations only on weekdays; differences in weekend and night resources might affect our insights.

The cognitive and system-based barriers faced by clinicians in teaching hospitals suggest that new methods to improve diagnosis are needed. Future interventions, such as defined “time-outs” for diagnosis, strategies to limit distractions, and methods to improve communication between team members, are novel and have parallels in other industries. As challenges in quantifying diagnostic errors abound,34 improving cognitive and system-based factors through reflection, communication, concentration, and organization is necessary to improve medical decision making in academic medical centers.

Disclosures

None declared by any coauthor.

Funding

This project was supported by grant number P30HS024385 from the Agency for Healthcare Research and Quality (AHRQ). The funding source played no role in study design, data acquisition, analysis, or the decision to report these data. Dr. Chopra is supported by a career development award from the AHRQ (1-K08-HS022835-01). Dr. Krein is supported by a VA Health Services Research and Development Research Career Scientist Award (RCS 11-222). Dr. Singh is partially supported by the Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality or the Department of Veterans Affairs.

References

1. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press; 2015. https://doi.org/10.17226/21794. Accessed November 1, 2016.
2. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-1887. http://dx.doi.org/10.1001/archinternmed.2009.333. PubMed
3. Sonderegger-Iseli K, Burger S, Muntwyler J, Salomon F. Diagnostic errors in three medical eras: a necropsy study. Lancet. 2000;355(9220):2027-2031. http://dx.doi.org/10.1016/S0140-6736(00)02349-7. PubMed
4. Winters B, Custer J, Galvagno SM Jr, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf. 2012;21(11):894-902. http://dx.doi.org/10.1136/bmjqs-2012-000803. PubMed
5. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672-680. http://dx.doi.org/10.1136/bmjqs-2012-001550. PubMed
6. Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what’s the goal? Acad Med. 2002;77(10):981-992. http://dx.doi.org/10.1097/00001888-200210000-00009. PubMed
7. Gupta A, Snyder A, Kachalia A, Flanders S, Saint S, Chopra V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf. 2018;27(1):53-60. http://dx.doi.org/10.1136/bmjqs-2017-006774. PubMed
8. van Noord I, Eikens MP, Hamersma AM, de Bruijne MC. Application of root cause analysis on malpractice claim files related to diagnostic failures. Qual Saf Health Care. 2010;19(6):e21. http://dx.doi.org/10.1136/qshc.2008.029801. PubMed
9. Croskerry P, Petrie DA, Reilly JB, Tait G. Deciding about fast and slow decisions. Acad Med. 2014;89(2):197-200. http://dx.doi.org/10.1097/ACM.0000000000000121. PubMed
10. Higginbottom GM, Pillay JJ, Boadu NY. Guidance on performing focused ethnographies with an emphasis on healthcare research. Qual Rep. 2013;18(9):1-6. https://doi.org/10.7939/R35M6287P.
11. Savage J. Participative observation: standing in the shoes of others? Qual Health Res. 2000;10(3):324-339. http://dx.doi.org/10.1177/104973200129118471. PubMed
12. Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: SAGE Publications; 2002.
13. Harrod M, Weston LE, Robinson C, Tremblay A, Greenstone CL, Forman J. “It goes beyond good camaraderie”: a qualitative study of the process of becoming an interprofessional healthcare “teamlet.” J Interprof Care. 2016;30(3):295-300. http://dx.doi.org/10.3109/13561820.2015.1130028. PubMed
14. Houchens N, Harrod M, Moody S, Fowler KE, Saint S. Techniques and behaviors associated with exemplary inpatient general medicine teaching: an exploratory qualitative study. J Hosp Med. 2017;12(7):503-509. http://dx.doi.org/10.12788/jhm.2763. PubMed
15. Mulhall A. In the field: notes on observation in qualitative research. J Adv Nurs. 2003;41(3):306-313. http://dx.doi.org/10.1046/j.1365-2648.2003.02514.x. PubMed
16. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. 2017;26(2):104-110. http://dx.doi.org/10.1136/bmjqs-2015-005014. PubMed
17. Singh H, Graber ML. Improving diagnosis in health care—the next imperative for patient safety. N Engl J Med. 2015;373(26):2493-2495. http://dx.doi.org/10.1056/NEJMp1512241. PubMed
18. Croskerry P. From mindless to mindful practice—cognitive bias and clinical decision making. N Engl J Med. 2013;368(26):2445-2448. http://dx.doi.org/10.1056/NEJMp1303712. PubMed
19. van den Berge K, Mamede S. Cognitive diagnostic error in internal medicine. Eur J Intern Med. 2013;24(6):525-529. http://dx.doi.org/10.1016/j.ejim.2013.03.006. PubMed
20. Norman G, Sherbino J, Dore K, et al. The etiology of diagnostic errors: a controlled trial of system 1 versus system 2 reasoning. Acad Med. 2014;89(2):277-284. http://dx.doi.org/10.1097/ACM.0000000000000105. PubMed
21. Dhaliwal G. Premature closure? Not so fast. BMJ Qual Saf. 2017;26(2):87-89. http://dx.doi.org/10.1136/bmjqs-2016-005267. PubMed
22. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf. 2013;22(suppl 2):ii58-ii64. http://dx.doi.org/10.1136/bmjqs-2012-001712. PubMed
23. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 2: impediments to and strategies for change. BMJ Qual Saf. 2013;22(suppl 2):ii65-ii72. http://dx.doi.org/10.1136/bmjqs-2012-001713. PubMed
24. Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf. 2013;22(12):1044-1050. http://dx.doi.org/10.1136/bmjqs-2013-001987. PubMed
25. Schmidt HG, Mamede S, van den Berge K, van Gog T, van Saase JL, Rikers RM. Exposure to media information about a disease can cause doctors to misdiagnose similar-looking clinical cases. Acad Med. 2014;89(2):285-291. http://dx.doi.org/10.1097/ACM.0000000000000107. PubMed
26. Hess BJ, Lipner RS, Thompson V, Holmboe ES, Graber ML. Blink or think: can further reflection improve initial diagnostic impressions? Acad Med. 2015;90(1):112-118. http://dx.doi.org/10.1097/ACM.0000000000000550. PubMed
27. Lambe KA, O’Reilly G, Kelly BD, Curristan S. Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review. BMJ Qual Saf. 2016;25(10):808-820. http://dx.doi.org/10.1136/bmjqs-2015-004417. PubMed
28. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535-557. http://dx.doi.org/10.1136/bmjqs-2011-000149. PubMed
29. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med. 2013;158(5 Pt 2):381-389. http://dx.doi.org/10.7326/0003-4819-158-5-201303051-00004. PubMed
30. Wray CM, Chaudhry S, Pincavage A, et al. Resident shift handoff strategies in US internal medicine residency programs. JAMA. 2016;316(21):2273-2275. http://dx.doi.org/10.1001/jama.2016.17786. PubMed
31. Choo KJ, Arora VM, Barach P, Johnson JK, Farnan JM. How do supervising physicians decide to entrust residents with unsupervised tasks? A qualitative analysis. J Hosp Med. 2014;9(3):169-175. http://dx.doi.org/10.1002/jhm.2150. PubMed
32. Carayon P, Wood KE. Patient safety: the role of human factors and systems engineering. Stud Health Technol Inform. 2010;153:23-46.
33. Carayon P, Xie A, Kianfar S. Human factors and ergonomics as a patient safety practice. BMJ Qual Saf. 2014;23(3):196-205. http://dx.doi.org/10.1136/bmjqs-2013-001812. PubMed
34. McGlynn EA, McDonald KM, Cassel CK. Measurement is essential for improving diagnosis and reducing diagnostic error: a report from the Institute of Medicine. JAMA. 2015;314(23):2501-2502. http://dx.doi.org/10.1001/jama.2015.13453. PubMed