A Taxonomy of Requests by Patients (TORP): A New System for Understanding Clinical Negotiation in Office Practice


BACKGROUND: The goal of our investigation was to facilitate research on clinical negotiation between patients and physicians by developing a reliable and valid classification system for patients’ requests in office practice.

METHODS: We developed the Taxonomy of Requests by Patients (TORP) using input from researchers, clinicians, and patient focus groups. To assess the system’s reliability and validity, we applied TORP to audiotaped encounters between 139 patients and 6 northern California internists. Reliability was assessed with the κ statistic as a measure of interrater agreement. Face validity was assessed through expert and patient judgment of the coding system. Content validity was examined by monitoring the incidence of unclassifiable requests. Construct validity was evaluated by examining the relationship between patient requests and patient health status; patient request fulfillment and patient satisfaction; and patient requests and physician perceptions of the visit.

RESULTS: The 139 patients made 772 requests (619 requests for information and 153 requests for physician action). Average interrater agreement across a sample of 40 cases was 94% (κ = 0.93; P <.001). Patients with better health status made fewer requests (r = -0.17; P = .048). Having more chronic diseases was associated with more requests for physician action (r = 0.32; P = .0002). Patients with more unfulfilled requests had lower visit satisfaction (r = -0.32; P <.001). A greater number of patient requests was also associated with physician reports of longer visit times (P = .016) and increased visit demands (P = .006).

CONCLUSIONS: Our study provides evidence that TORP is a reliable and valid system for capturing and categorizing patients’ requests in adult primary care. Further research is needed to confirm the system’s validity, expand its applicability, and explore its usefulness as a tool for studying clinical negotiation.

Requests are the primary means of patient-initiated action in office practice. But these requests can be problematic because they consume time and resources. In particular, patients’ requests for diagnostic tests, medications, and referrals can be costly to capitated practices and may cause physician-patient discord if not handled appropriately. Patients who participate actively in their own care, however, often achieve better outcomes than those who do not.1 Managing the negotiation triggered by these requests is a fundamental clinical skill. Unfortunately, few empiric data are available to help physicians select effective negotiation strategies. One barrier to necessary research is the lack of a reliable, valid, and comprehensive system for describing and classifying patients’ requests.

Uhlmann and colleagues2 defined patient requests as “desires explicitly communicated [to the physician] through either verbal or written language.” In their formulation, desires are defined as wishes regarding medical care. Requests in turn are defined as desires that the patient communicates to the physician.

The definition of patient requests proposed by Uhlmann and coworkers is operationally explicit. However, few studies of patient requests have adhered to this definition. For Lazare and colleagues3 requests were “what patients wish or hope will occur”; for DelVecchio and coworkers4 they were ways patients indicate to the research assistant how the “clinic can help you at this time”; for Uhlmann and colleagues,5 “health problems you feel should be dealt with today”; for Like and Zyzanski,6 the “types of help [patients] would like to receive at that day’s visit”; for Eisenthal and coworkers,7 responses to the question, “How do you hope the doctor (or clinic) can be of help to you today?”; and for Valori and colleagues,8 requests were defined as previsit desires for “explanation and reassurance, for emotional support, and for investigation and treatment.”

A common feature of most of this literature is the blending of “requests” (what patients ask for) with “desires” (what patients want) and “expectancies” (what patients think their physicians will do). Previsit patient surveys can only elicit desires and expectancies, while requests are more readily assessed by postvisit patient or physician reports or by direct observation. The operational distinction between desires and requests is important if we are to focus on how patients influence the content of their visits by asking questions or making statements that affect physician behavior. Some desires (eg, diagnostic imaging) may be more frequently converted into explicit requests than other desires (eg, therapeutic listening).

As a method for studying patients’ requests, direct observation using audiorecording or videorecording has several advantages over other approaches, such as patient or physician reports. First, patients’ requests and physicians’ responses can be captured precisely by recording them. Second, tapes (or transcripts) can be preserved and used for reliability checking and post-hoc analyses. Third, behavioral observation is the only method that can capture the interactional dynamics of clinical negotiations. Although these advantages are countered by a potential Hawthorne effect, this bias is manageable.9 Existing systems for the analysis of interactions were not specifically designed to describe the content of clinical negotiation. Therefore, we developed a new system called Taxonomy of Requests by Patients (TORP) for classifying patient requests and physician responses in office practice. The main features of TORP are that it relies on direct observation, focuses on request content, can be applied in real time, and is designed for use in general medical settings.


Our goal was to produce a classification system for patients’ requests that would be useful in understanding the links between patients’ unarticulated desires and expectations, patients’ articulated requests, physicians’ provision of health care services, and patients’ and physicians’ perceptions of the visit and of each other. We hypothesized that the characteristics, needs, and attitudes of patients and physicians would influence clinical negotiation (Figure 1). Clinical negotiation, in turn, was posited to affect patient well-being and physician perceptions of the visit. In this schema, the negotiation is central. Patients are more than the passive recipients of doctors’ actions; they influence the clinical encounter through use of their own linguistic resources.

Methods

Development of the Taxonomy

On the basis of clinical experience and preliminary discussions, our research group defined patient requests as:

… an expression of hope or desire that the physician provide information or perform action. Requests may be expressed as questions, commands, statements, or conjecture. Most questions are requests, except rhetorical questions (“Who do you think I am?”), exclamations (“You’re kidding, aren’t you?”), questions related to the mechanics of the physical examination (“Where should I sit?”), and chatting on topics unrelated to health or medicine (“It’s sure been hot, hasn’t it?”)

Following this definition, our group generated an initial set of categories that included requests for examinations, tests, prescriptions, referrals, social or psychological help, and information. These categories were then reviewed in general terms by colleagues and by 2 patient focus groups. The focus groups consisted of adult patients who were receiving care from one academic general medicine clinic and one group model health maintenance organization. The sessions were 90 minutes long, and the patients were asked to describe what they wanted from their physicians, relate any recent experiences with physicians that fell short of expectations, and comment on the sorts of things they might ask of their physician. Using this input, the original set of categories was revised and applied to a set of audiotapes obtained from a convenience sample of 20 adult general medicine outpatients visiting a small single-specialty group practice. Following review of these tapes, additional categories were added, and others were amended or deleted. There seemed to be a natural division between requests for information and requests for action.

The final taxonomy (TORP) is shown in Table 1. There are 11 categories of patient requests for information and 8 categories of patient requests for action. In addition, physician responses to patient requests are coded as 1 of 8 mutually exclusive categories that are modified from Roter and colleagues:10 (1) ignores; (2) acknowledges only; (3) fulfills (performs action or provides requested information); (4) partially fulfills; (5) negotiates, with fulfillment; (6) negotiates, with partial fulfillment; (7) negotiates, with denial; or (8) denies.
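For readers who want to represent TORP codes programmatically, the 8 mutually exclusive response categories map naturally onto an enumeration. The sketch below (in Python; the study itself used manual coding rather than software, and the record format shown is purely illustrative) pairs a request category with a response code:

```python
from enum import IntEnum

class PhysicianResponse(IntEnum):
    """The 8 mutually exclusive physician response codes in TORP,
    numbered as in the text (modified from Roter and colleagues)."""
    IGNORES = 1
    ACKNOWLEDGES_ONLY = 2
    FULFILLS = 3                           # performs action or provides information
    PARTIALLY_FULFILLS = 4
    NEGOTIATES_WITH_FULFILLMENT = 5
    NEGOTIATES_WITH_PARTIAL_FULFILLMENT = 6
    NEGOTIATES_WITH_DENIAL = 7
    DENIES = 8

# A coded request could then pair a request category with a response code.
# The category label here is a hypothetical example, not a TORP code name.
request = {"category": "information: medications or treatments",
           "response": PhysicianResponse.FULFILLS}
```

An integer enumeration preserves the ordering of the printed list while keeping the codes self-documenting in analysis scripts.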

Evaluation of the Taxonomy

Data collection. To assess the reliability and validity of TORP in office practice, we applied it to 139 physician-patient encounters selected at random from 318 studied as part of a larger project on patients’ expectations for care. Details of that study are described elsewhere.11 To summarize, data were collected in 1994 from a community-based university-affiliated 6-physician general internal medicine practice in northern California. Patients were eligible for enrollment if they were at least 18 years of age, could speak and understand English, had a telephone, and had scheduled an office visit at least 1 day in advance.

Using patient appointment lists obtained the day before the scheduled visit, we contacted 503 eligible individuals; 396 (79%) agreed to participate. Seventy-eight patients failed to attend their appointment, arrived late, withdrew consent, or could not be successfully audiotaped, leaving complete data for 318 patients. Of those, we randomly selected 139 patients for inclusion in our study. The mean age of patients in this sample was 52 years (standard deviation [SD] = 16); 49% were men; 72% were white. Thirty-five percent had a college degree, and the median family income range was $40,000 to $49,000. There were no meaningful differences in age, sex, race, education, or income between the 139 randomly selected individuals and the 179 remaining patients.

Just before the visit, all patients were asked about demographic characteristics and health status. All encounters were audiotaped using unobtrusive equipment. After the visit, patients completed postvisit questionnaires that included questions about visit satisfaction, and physicians reported on the type of visit, medical diagnoses, interventions requested (by the patients), interventions performed, and the extent to which they perceived the visit to be demanding.

Measures. Patients were asked about demographic characteristics (age, sex, education, income, and employment status) with straightforward questions. We evaluated health status in terms of the patients’ health perceptions (“In general, would you say your health is: excellent, very good, good, fair, poor?”); health worry (“How worried are you about your health?” and “How concerned are you that you might have a serious disease or condition today: extremely…not at all?” [α reliability for the 2-item scale = 0.79]); and a chronic disease count derived from a 12-item checklist completed by the treating physician. Patient satisfaction with the visit was assessed using the Ware and Hays12 5-item visit-specific scale (α = 0.90).


We obtained physicians’ perceptions of how demanding the visit was by using a brief form with a single question and 5-point response scale (“Compared to your average patient visit, how demanding would you rate this visit in terms of the amount of effort required?” 1 = far more demanding than average; 5 = far less demanding).

Coding procedures. A research assistant reviewed all 139 audiotapes selected for this analysis. After identifying a patient request, she transcribed the request verbatim, assigned an appropriate request code and response code, and continued listening until the visit was over. A request-response exchange was coded as a “negotiation” when the physician’s initial demurral was met by a counter-request or demand from the patient. When a physician’s ultimate response to a patient request differed from the physician’s initial response, the lead coder recorded both an initial and final response code. Variables were created to reflect, at the patient level, the number of requests made, the number and proportion of requests not fulfilled, and the number of requests negotiated before ultimate fulfillment.
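The patient-level variables described above can be derived mechanically from the coded exchanges. The following is a minimal sketch, assuming a hypothetical record format with `final_response` and `negotiated` fields; the field names and label strings are illustrative, not the study’s actual data layout:

```python
# Response labels counted as fulfilled (simplified from the TORP response codes).
FULFILLED = {"fulfills", "partially fulfills",
             "negotiates, with fulfillment",
             "negotiates, with partial fulfillment"}

def summarize_patient(exchanges):
    """Aggregate request-level codes into patient-level variables:
    total requests, unfulfilled requests (count and proportion),
    and requests negotiated before ultimate fulfillment."""
    total = len(exchanges)
    unfulfilled = sum(1 for e in exchanges
                      if e["final_response"] not in FULFILLED)
    negotiated_fulfilled = sum(1 for e in exchanges
                               if e["negotiated"]
                               and e["final_response"] in FULFILLED)
    return {
        "n_requests": total,
        "n_unfulfilled": unfulfilled,
        "prop_unfulfilled": unfulfilled / total if total else 0.0,
        "n_negotiated_fulfilled": negotiated_fulfilled,
    }
```

Keeping both initial and final response codes, as the coding procedure specifies, is what makes the “negotiated, then fulfilled” count recoverable here.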

Assessment of reliability and validity. The first author reviewed all transcribed segments from the first 20 tapes and coded each segment independently. Interrater agreement was assessed using the κ statistic.12 To determine whether reliability degraded with time, the lead author also coded transcribed segments from the last 20 tapes. Face validity was assessed through frequent discussion among the coinvestigators and by obtaining feedback from practicing physicians and patient focus groups. Content validity was assessed by monitoring the number of unclassifiable requests. Construct validity was evaluated quantitatively on the basis of tests of the following hypotheses: (1) patients with worse health status will make a greater number of requests; (2) greater request fulfillment will be associated with greater patient satisfaction; and (3) more requests will be associated with longer visit times and more demanding visits as perceived by physicians. The relevant associations were assessed using Pearson product-moment correlation coefficients, t tests, chi-square tests, and analysis of variance, as appropriate, using Stata software, release 5.0 (Stata Corporation, College Station, Texas).13 Associations between patient requests and physicians’ perceptions of visit time, and those between patient requests and physicians’ perceptions of the visit’s demands were assessed using multiple linear regression, with Huber-White adjustment of standard errors to account for clustering of patients by physician.14 Power to identify bivariable correlations of moderate size (r >0.30) exceeded 0.90 for all inferential tests of significance. Two-tailed P values less than .05 were considered statistically significant. Explicit corrections for multiple statistical comparisons were not made.
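For readers unfamiliar with the κ statistic, it corrects raw percentage agreement for the agreement expected by chance given each rater’s marginal code frequencies. A generic sketch (not the study’s analysis code, which used Stata):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters:
    kappa = (P_observed - P_expected) / (1 - P_expected),
    where P_expected sums the products of the raters'
    marginal proportions over all categories."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n)
                for c in set(freq_a) | set(freq_b))
    return (p_obs - p_exp) / (1 - p_exp)
```

With only two broad code families (information vs. action), chance agreement is high, which is why a κ of 0.93 alongside 94% raw agreement indicates excellent reliability.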

Results

Interrater Agreement

On review of the first 20 cases, the lead coder identified and transcribed a total of 147 requests. Overall agreement between the lead and secondary coder was 94% (κ = 0.93; P <.001), indicating excellent agreement beyond chance. Of the 9 coding disagreements, 2 were “major” (one coder classified a request as an “action request” and the other as an “information request”). There was no degradation of interrater reliability over time (agreement for the last 20 cases = 95%; κ = 0.94; P <.001).

Prevalence of Patient Requests

Table 2 shows that the 139 patients made 772 requests (mean = 5.6; range = 0 to 32). Of these, 619 were requests for information (mean = 4.5 requests per patient) and 153 were requests for action (mean = 1.1). For any given patient, the numbers of information and action requests were only weakly correlated (r = 0.18; P = .04; data not shown in table). The most common information requests involved questions about medications or treatments (191 requests) and about symptoms, problems, or diseases (178 requests). The most prevalent action request was for medications or treatments (Table 2). Among the 772 requests, only 33 (4.3%) were not classifiable into 1 of the 17 standing categories and had to be coded as “other requests for information” or “other requests for action.”

Patient Requests in Relation to Health Status

In assessing the construct validity of TORP, we hypothesized that patients with worse health perceptions, greater health worry, and more chronic diseases would make more requests of their physicians. As shown in Table 3, patients who rated their general health more positively made fewer total requests (r = -0.17; P = .048). The inverse relationship between health perceptions and requests was stronger for action requests (r = -0.25; P = .004) than for information requests (r = -0.11; P = .19). Greater health worry or concern was marginally associated with making more information requests. Having more chronic diseases was associated with more action requests (r = 0.32; P = .0002). Taken together, these results suggest that greater illness burden (as reflected by general health perceptions and number of chronic conditions) is associated with more health care resource needs, while greater health-related anxiety is associated with more informational needs.


Patient Request Fulfillment and Visit Satisfaction

Our second hypothesis was that patients whose requests were more frequently fulfilled would report greater visit satisfaction. We created 2 indicators of request fulfillment (or nonfulfillment) at the patient level according to the coder’s judgment: the number of unfulfilled requests (mean = 0.55; SD = 1.3; median = 0; range = 0-9) and the proportion of unfulfilled requests (mean = 7.5%; median = 0; range = 0%-60%). Mean patient satisfaction with the visit was 4.48 (SD = 0.65) on a 1-to-5 scale (5 = excellent). Patient satisfaction was significantly and inversely correlated with the total number of unfulfilled requests (r = -0.32; P <.001). This relationship appeared to be driven more by action requests (r = -0.39; P <.001) than information requests (r = -0.21; P = .015). There were no significant associations between satisfaction and the proportion of unfulfilled requests. Compared with patients without any unfulfilled action requests (n = 112), those with one or more unfulfilled requests (n = 23) had lower mean satisfaction (4.21 vs 4.54, P = .03).

In a subsidiary analysis, we compared the 22 visits in which patients and physicians negotiated a request with the 117 visits in which no negotiation occurred. There were no significant differences in patient-reported satisfaction with these 2 types of visits (mean = 4.3 vs 4.5, P = .18), suggesting that the quality of the negotiation process may be more important in influencing patient evaluations than the presence or absence of negotiation.

Patient Requests and Physician Perceptions of the Visit

As a final test of TORP, we hypothesized that visits involving many patient requests would take more time and would be perceived by physicians as more demanding. Using linear regression with adjustment for clustering by physician, more information requests (but not action requests) were associated with increased physician-reported visit duration (P = .017, data not shown). Visits in which patients made more requests were rated by physicians as more demanding (r = 0.40 for total requests; r = 0.35 for information requests; and r = 0.29 for action requests; all P values <.001). Using multiple regression (with adjustment for clustering) to control for patients’ general health perceptions, the number of chronic diseases, physician-reported visit length (in minutes), and visit type (new, follow-up, or urgent care), total requests remained significantly associated with the perceived demands of the visit (regression coefficient = 0.05; P = .006; data not shown).
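The Huber-White adjustment used above replaces the usual OLS covariance with a “sandwich” estimator that sums score contributions within each physician’s cluster, so that correlated errors among one physician’s patients do not understate the standard errors. A minimal NumPy sketch on simulated data (the coefficient values, cluster counts, and variable names are illustrative only, not the study’s estimates):

```python
import numpy as np

def ols_cluster_robust(y, X, clusters):
    """OLS point estimates with Huber-White cluster-robust standard errors.
    Sandwich: (X'X)^-1 [ sum_g (X_g'e_g)(X_g'e_g)' ] (X'X)^-1."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]  # cluster score
        meat += np.outer(score, score)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))

# Simulated example: 139 patients clustered within 6 physicians.
rng = np.random.default_rng(0)
n = 139
clusters = rng.integers(0, 6, n)
requests = rng.poisson(5.6, n).astype(float)
demand = 2.5 + 0.05 * requests + rng.normal(0.0, 0.3, n)  # invented values
X = np.column_stack([np.ones(n), requests])
beta, se = ols_cluster_robust(demand, X, clusters)
```

With only 6 clusters, such standard errors are themselves imprecisely estimated, which is one reason the study reports the adjustment explicitly rather than relying on conventional OLS errors.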

Discussion

TORP fills an important methodologic void for researchers interested in understanding how patient requests and physician responses influence clinical effectiveness. Our investigation demonstrates that TORP is capable of capturing and categorizing patients’ requests in adult primary care medicine. This coding system exhibits excellent reliability in the hands of trained coders and is relatively easy to apply in real time. TORP also measures meaningful phenomena as demonstrated by the significant associations between patient requests and patient health status, request fulfillment and visit satisfaction, and patients’ request behavior and physicians’ perceptions of the demands of the visit.

To our knowledge, TORP is the first direct-observation system designed to identify, classify, and enumerate patients’ requests and physicians’ responses in office practice. TORP may be usefully compared with 4 other popular coding schemes. The Roter Interactional Analysis System (RIAS) is a major refinement of previous work by Bales.15 It is a reliable and valid system that has been used with success in several studies16-19 evaluating the relationship between a clinician’s communication style and health care outcomes. The unit of analysis is the utterance (smallest meaningful unit of speech); the emphasis is on process rather than content; and the raw data consist of audiotapes or videotapes. Unlike TORP, RIAS does not code the content of patients’ requests for information, and it has a single “request for services” code that is used when the patient makes “a direct appeal to the physician’s authority.”

The Davis Observation Code (DOC) is an analysis system designed specifically for primary care.20 The unit of analysis is time (10-second blocks); the emphasis is on content (eg, the proportion of time spent discussing prevention); and data may be acquired either from videotapes or real-time observation. As with RIAS, there is no specific mechanism within the DOC system for extracting and classifying patient requests. RIAS and DOC are validated systems, but neither was specifically intended to examine patients’ requests.

In contrast to RIAS and DOC, the systems developed by Like and Zyzanski6 and by Eisenthal and coworkers21 provide for a detailed categorization of patients’ wishes. Like and Zyzanski’s Patient Request for Services Scale identified 5 clusters of desires: medical information, psychosocial assistance, therapeutic listening, general health advice, and biomedical treatment. Eisenthal and colleagues distinguished between the desired focus or objective and the desired form or method. For example, a patient might want pain relief (focus) achieved through the prescription of a narcotic analgesic (method). Both systems stress assessment of patients’ self-reported desires. TORP shares the same general objectives as these 2 systems but brings a detailed taxonomy to actual clinical behavior.


TORP was developed with input from clinicians, researchers, and patients. It is organized to reflect the major categories of patient-initiated interaction in primary care settings. The system relies on real-time coding during observation or from audiotapes (rather than transcripts) because we believe some requests are difficult to identify without hearing the requester’s intonation. Although there were relatively few uncodable requests, future versions of TORP will need to incorporate several new request categories.

Although we believe TORP is a useful system that could be productively applied to analysis of physician-patient interactions in a variety of settings, several opportunities for improvement remain. First, procedures for ensuring unitizing reliability (the ability of 2 raters to agree that a given segment of speech represents a request) should be developed and evaluated. Some types of requests may be easier to identify than others. Second, the rapidly changing health care environment virtually guarantees that any system for coding patient and physician behavior will require periodic updating. For example, as newer managed care models become dominant, request and response categories will be needed that account for the complex relationships among employers, insurers, medical groups, and patients. Third, codes are needed to acknowledge the involvement of family caregivers, especially in pediatric and geriatric settings. Fourth, greater attention to physician responses (including how clinicians promote effective negotiation) is needed. Fifth, TORP places a major emphasis on content; a more refined system that acknowledges form and emotionality may be needed when TORP is used for some research issues. One way to address this limitation would be to use TORP with an existing analysis system, such as RIAS.

More fundamentally, additional research is required to help researchers decide when direct observation is needed to understand critical elements of visit dynamics and when other data sources (such as patient or physician self-report, chart review, or administrative data) will suffice.22,23 Although audio-recording of visits can be intrusive and the coding of tapes is time consuming, direct observation is sometimes necessary because available evidence does not inspire optimism about the reliability of patient and physician reports of visit content.24-26 It is unlikely that reliance on self-report data alone can adequately support research on the give-and-take of clinical interactions.

Conclusions

TORP represents a new approach for understanding patients’ requests and physicians’ responses in office practice. This analysis system will provide new insights into a fundamental aspect of the physician-patient relationship that cannot be assessed by other means. By highlighting problematic requests and identifying successful and unsuccessful strategies for clinical negotiation, TORP may ultimately help clinicians to better meet patients’ needs in an increasingly demanding health care environment.

Acknowledgments

Data for this project were collected while Dr Kravitz was a Picker-Commonwealth Faculty Scholar. Analysis was performed with support from the Agency for Health Care Policy and Research (R03 HS09812-01). The authors thank Deirdre Antonius for coordinating the data collection effort, Shannon Quinlan for coding the audiotapes, and Charles E. Lewis, PhD, for providing mentorship and guidance.

References

 

1. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med 1985;102:520-8.

2. Uhlmann RF, Inui TS, Carter WB. Patient requests and expectations: definitions and clinical applications. Medical Care 1984;22:681-5.

3. Lazare A, Eisenthal S, Wasserman L, Harford TC. Patient requests in a walk-in clinic. Compr Psychiatry 1975;16:467-77.

4. DelVecchio Good MJ, Good BJ, Nassi AJ. Patient requests in primary health care settings: development and validation of a research instrument. J Behavioral Med 1983;6:151-68.

5. Uhlmann RF, Carter WB, Inui TS. Fulfillment of patient requests in a general medicine clinic. Am J Public Health 1984;74:257-8.

6. Like R, Zyzanski SJ. Patient requests in family practice: a focal point for clinical negotiation. Fam Pract 1986;3:216-28.

7. Eisenthal S, Koopman C, Stoeckle JD. The nature of patients’ requests for physicians’ help. Acad Med 1990;65:401-5.

8. Valori R, Woloshynowych M, Bellenger N, Aluvihare V, Salmon P. The Patient Requests Form: a way of measuring what patients want from their general practitioner. J Psychosom Res 1996;40:87-94.

9. Arborelius E, Timpka T. In what way may videotapes be used to get significant information about the patient-physician relationship? Med Teacher 1990;12:197-208.

10. Roter D. The Roter method of interaction process analysis. Internal document, Johns Hopkins University; 1990.

11. Kravitz RL, Callahan EJ, Azari R, Antonius D, Lewis CE. Assessing patients’ expectations in ambulatory practice: does the measurement approach make a difference? J Gen Intern Med 1997;12:67-72.

12. Ware JE, Hays RD. Methods for measuring patient satisfaction with specific medical encounters. Med Care 1988;26:393-402.

13. Stata Corporation. Stata Statistical Software: release 5.0. College Station, Tex: Stata Corporation; 1997.

14. Huber PJ. The behavior of maximum likelihood estimates under nonstandard conditions. In: Proceedings of the fifth Berkeley symposium in mathematical statistics and probability. Berkeley, Calif: University of California, Berkeley Press; 1967.

15. Roter D, Hall JA. Doctors talking with patients/patients talking with doctors: improving communication in medical visits. Westport, Conn: Auburn House; 1992.

16. Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication: the relationship with malpractice claims among primary care physicians and surgeons. JAMA 1997;277:553-9.

17. Roter DL, Stewart M, Putnam SM, Lipkin M, Jr, Stiles W, Inui TS. Communication patterns of primary care physicians. JAMA 1997;277:350-6.

18. Hall JA, Irish JT, Roter DL, Ehrlich CM, Miller LH. Satisfaction, gender, and communication in medical visits. Med Care 1994;32:1216-31.

19. Wissow LS, Roter D, Bauman LJ, et al. Patient-provider communication during the emergency department care of children with asthma: the National Cooperative Inner-City Asthma Study, National Institute of Allergy and Infectious Diseases, NIH, Bethesda, Md. Med Care 1998;36:1439-50.

20. Callahan EJ, Bertakis KD. Development and validation of the Davis Observation Code. Fam Med 1991;23:19-24.

21. Eisenthal S, Koopman C, Stoeckle JD. The nature of patients’ requests for physicians’ help. Acad Med 1990;65:401-5.

22. Stange KC, Zyzanski SJ, Smith TF, et al. How valid are medical records and patient questionnaires for physician profiling and health services research? A comparison with direct observation of patient visits. Med Care 1998;36:851-67.

23. Stange KC, Zyzanski SJ, Jaen CR. Illuminating the black box: a description of 4454 patient visits to 138 family physicians. J Fam Pract 1998;46:377-89.

24. Gerbert B, Stone G, Stulbarg M, Gullion DS, Greenfield S. Agreement among physician assessment methods: searching for the truth among fallible methods. Med Care 1988;26:519-35.

25. Scheitel SM, Boland BJ, Wollan PC, Silverstein MD. Patient-physician agreement about medical diagnoses and cardiovascular risk factors in the ambulatory general medical examination. Mayo Clin Proc 1996;71:1131-7.

26. Temple W, Toews J, Fidler H, Lockyer JM, Taenzer P, Parboosingh EJ. Concordance in communication between surgeon and patient. Can J Surg 1998;41:439-45.

Author and Disclosure Information

 

Richard L. Kravitz, MD, MSPH
Robert A. Bell, PhD
Carol E. Franz, PhD
Davis, California
Submitted, revised, August 30, 1999.
From the departments of Internal Medicine (R.L.K.), Communication (R.A.B.), and Psychiatry (C.E.F.) and the Center for Health Services Research in Primary Care (R.L.K., R.A.B.), University of California, Davis. Reprint requests should be addressed to Richard L. Kravitz, MD, MSPH, UCD Center for Health Services Research in Primary Care, 4150 V Street, Suite 2500 PSSB, Sacramento, CA 95817. E-mail: [email protected].

Issue
The Journal of Family Practice - 48(11)
Page Number
872-878
Keywords
Physician-patient relations; patient satisfaction; office visits. (J Fam Pract 1999; 48:872-878)
Sections

BACKGROUND: The goal of our investigation was to facilitate research on clinical negotiation between patients and physicians by developing a reliable and valid classification system for patients’ requests in office practice.

METHODS: We developed the Taxonomy of Requests by Patients (TORP) using input from researchers, clinicians, and patient focus groups. To assess the system’s reliability and validity, we applied TORP to audiotaped encounters between 139 patients and 6 northern California internists. Reliability was assessed with the κ (kappa) statistic as a measure of interrater agreement. Face validity was assessed through expert and patient judgment of the coding system. Content validity was examined by monitoring the incidence of unclassifiable requests. Construct validity was evaluated by examining the relationship between patient requests and patient health status; patient request fulfillment and patient satisfaction; and patient requests and physician perceptions of the visit.

RESULTS: The 139 patients made 772 requests (619 requests for information and 153 requests for physician action). Average interrater agreement across a sample of 40 cases was 94% (κ = 0.93; P <.001). Patients with better health status made fewer requests (r = -0.17; P = .048). Having more chronic diseases was associated with more requests for physician action (r = 0.32; P = .0002). Patients with more unfulfilled requests had lower visit satisfaction (r = -0.32; P <.001). More patient requests were also associated with physician reports of longer visit times (P = .016) and increased visit demands (P = .006).

CONCLUSIONS: Our study provides evidence that TORP is a reliable and valid system for capturing and categorizing patients’ requests in adult primary care. Further research is needed to confirm the system’s validity, expand its applicability, and explore its usefulness as a tool for studying clinical negotiation.

Requests are the primary means of patient-initiated action in office practice. But these requests can be problematic because they consume time and resources. In particular, patients’ requests for diagnostic tests, medications, and referrals can be costly to capitated practices and may cause physician-patient discord if not handled appropriately. Patients who participate actively in their own care, however, often achieve better outcomes than those who do not.1 Managing the negotiation triggered by these requests is a fundamental clinical skill. Unfortunately, few empiric data are available to help physicians select effective negotiation strategies. One barrier to necessary research is the lack of a reliable, valid, and comprehensive system for describing and classifying patients’ requests.

Uhlmann and colleagues2 defined patient requests as “desires explicitly communicated [to the physician] through either verbal or written language.” In their formulation, desires are defined as wishes regarding medical care. Requests in turn are defined as desires that the patient communicates to the physician.

The definition of patient requests proposed by Uhlmann and coworkers is operationally explicit. However, few studies of patient requests have adhered to this definition. For Lazare and colleagues3 requests were “what patients wish or hope will occur”; for DelVecchio and coworkers4 they were ways patients indicate to the research assistant how the “clinic can help you at this time”; for Uhlmann and colleagues,5 “health problems you feel should be dealt with today”; for Like and Zyzanski,6 the “types of help [patients] would like to receive at that day’s visit”; for Eisenthal and coworkers,7 responses to the question, “How do you hope the doctor (or clinic) can be of help to you today?”; and for Valori and colleagues,8 requests were defined as previsit desires for “explanation and reassurance, for emotional support, and for investigation and treatment.”

A common feature of most of this literature is the blending of “requests” (what patients ask for) with “desires” (what patients want) and “expectancies” (what patients think their physicians will do). Previsit patient surveys can only elicit desires and expectancies, while requests are more readily assessed by postvisit patient or physician reports or by direct observation. The operational distinction between desires and requests is important if we are to focus on how patients influence the content of their visits by asking questions or making statements that affect physician behavior. Some desires (eg, diagnostic imaging) may be more frequently converted into explicit requests than other desires (eg, therapeutic listening).

As a method for studying patients’ requests, direct observation using audiorecording or videorecording has several advantages over other approaches, such as patient or physician reports. First, patients’ requests and physicians’ responses can be captured precisely by recording them. Second, tapes (or transcripts) can be preserved and used for reliability checking and post-hoc analyses. Third, behavioral observation is the only method that can capture the interactional dynamics of clinical negotiations. Although these advantages are countered by a potential Hawthorne effect, this bias is manageable.9 Existing systems for the analysis of interactions were not specifically designed to describe the content of clinical negotiation. Therefore, we developed a new system called Taxonomy of Requests by Patients (TORP) for classifying patient requests and physician responses in office practice. The main features of TORP are that it relies on direct observation, focuses on request content, can be applied in real time, and is designed for use in general medical settings.

 

 

Our goal was to produce a classification system for patients’ requests that would be useful in understanding the links between patients’ unarticulated desires and expectations, patients’ articulated requests, physicians’ provision of health care services, and patients’ and physicians’ perceptions of the visit and of each other. We hypothesized that the characteristics, needs, and attitudes of patients and physicians would influence clinical negotiation (Figure 1). Clinical negotiation, in turn, was posited to affect patient well-being and physician perceptions of the visit. In this schema, the negotiation is central. Patients are more than the passive recipients of doctors’ actions; they influence the clinical encounter through use of their own linguistic resources.

Methods

Development of the Taxonomy

On the basis of clinical experience and preliminary discussions, our research group defined patient requests as:

… an expression of hope or desire that the physician provide information or perform action. Requests may be expressed as questions, commands, statements, or conjecture. Most questions are requests, except rhetorical questions (“Who do you think I am?”), exclamations (“You’re kidding, aren’t you?”), questions related to the mechanics of the physical examination (“Where should I sit?”), and chatting on topics unrelated to health or medicine (“It’s sure been hot, hasn’t it?”)

Following this definition, our group generated an initial set of categories that included requests for examinations, tests, prescriptions, referrals, social or psychological help, and information. These categories were then reviewed in general terms by colleagues and by 2 patient focus groups. The focus groups consisted of adult patients who were receiving care from one academic general medicine clinic and one group model health maintenance organization. The sessions were 90 minutes long, and the patients were asked to describe what they wanted from their physicians, relate any recent experiences with physicians that fell short of expectations, and comment on the sorts of things they might ask of their physician. Using this input, the original set of categories was revised and applied to a set of audiotapes obtained from a convenience sample of 20 adult general medicine outpatients visiting a small single-specialty group practice. Following review of these tapes, additional categories were added, and others were amended or deleted. There seemed to be a natural division between requests for information and requests for action.

The final taxonomy (TORP) is shown in Table 1. There are 11 categories of patient requests for information and 8 categories of patient requests for action. In addition, physician responses to patient requests are coded as 1 of 8 mutually exclusive categories that are modified from Roter and colleagues:10 (1) ignores; (2) acknowledges only; (3) fulfills (performs action or provides requested information); (4) partially fulfills; (5) negotiates, with fulfillment; (6) negotiates, with partial fulfillment; (7) negotiates, with denial; or (8) denies.
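Because the 8 response categories are mutually exclusive, coded data lend themselves to a simple structured representation. The sketch below is purely illustrative (TORP itself is a paper-and-audiotape coding system; the identifier names and the record layout are our own assumptions, not part of the published instrument):

```python
from enum import Enum

class ResponseCode(Enum):
    """The 8 mutually exclusive physician-response categories
    (adapted from Roter and colleagues); names are illustrative."""
    IGNORES = 1
    ACKNOWLEDGES_ONLY = 2
    FULFILLS = 3
    PARTIALLY_FULFILLS = 4
    NEGOTIATES_WITH_FULFILLMENT = 5
    NEGOTIATES_WITH_PARTIAL_FULFILLMENT = 6
    NEGOTIATES_WITH_DENIAL = 7
    DENIES = 8

# A single coded request-response exchange could then be recorded as:
exchange = {
    "request_type": "information",      # "information" or "action"
    "request_category": "medications",  # one of the 19 request categories
    "response": ResponseCode.FULFILLS,
}
```

Recording both an initial and a final response code (as the coding procedure below specifies for negotiated exchanges) would simply add a second `ResponseCode` field to such a record.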

Evaluation of the Taxonomy

Data collection. To assess the reliability and validity of TORP in office practice, we applied it to 139 physician-patient encounters selected at random from 318 studied as part of a larger project on patients’ expectations for care. Details of that study are described elsewhere.11 To summarize, data were collected in 1994 from a community-based, university-affiliated, 6-physician general internal medicine practice in northern California. Patients were eligible for enrollment if they were at least 18 years of age, could speak and understand English, had a telephone, and had scheduled an office visit at least 1 day in advance.

Using patient appointment lists obtained the day before the scheduled visit, we contacted 503 eligible individuals; 396 (79%) agreed to participate. Seventy-eight patients failed to attend their appointment, arrived late, withdrew consent, or could not be successfully audiotaped, leaving complete data for 318 patients. Of those, we randomly selected 139 patients for inclusion in our study. The mean age of patients in this sample was 52 years (standard deviation [SD] = 16); 49% were men; 72% were white. Thirty-five percent had a college degree, and the median family income range was $40,000 to $49,000. There were no meaningful differences in age, sex, race, education, or income between the 139 randomly selected individuals and the 179 remaining patients.

Just before the visit, all patients were asked about demographic characteristics and health status. All encounters were audiotaped using unobtrusive equipment. After the visit, patients completed postvisit questionnaires that included questions about visit satisfaction, and physicians reported on the type of visit, medical diagnoses, interventions requested (by the patients), interventions performed, and the extent to which they perceived the visit to be demanding.

Measures. Patients were asked about demographic characteristics (age, sex, education, income, and employment status) with straightforward questions. We evaluated health status in terms of the patients’ health perceptions (“In general, would you say your health is: excellent, very good, good, fair, poor?”); health worry (“How worried are you about your health?” and “How concerned are you that you might have a serious disease or condition today: extremely…not at all?” [α reliability for the 2-item scale = 0.79]); and a chronic disease count derived from a 12-item checklist completed by the treating physician. Patient satisfaction with the visit was assessed using the Ware and Hays12 5-item visit-specific scale (α = 0.90).

 

 

We obtained physicians’ perceptions of how demanding the visit was by using a brief form with a single question and 5-point response scale (“Compared to your average patient visit, how demanding would you rate this visit in terms of the amount of effort required?” 1 = far more demanding than average; 5 = far less demanding).

Coding procedures. A research assistant reviewed all 139 audiotapes selected for this analysis. After identifying a patient request, she transcribed the request verbatim, assigned an appropriate request code and response code, and continued listening until the visit was over. A request-response exchange was coded as a “negotiation” when the physician’s initial demurral was met by a counter-request or demand from the patient. When a physician’s ultimate response to a patient request differed from the physician’s initial response, the lead coder recorded both an initial and final response code. Variables were created to reflect, at the patient level, the number of requests made, the number and proportion of requests not fulfilled, and the number of requests negotiated before ultimate fulfillment.

Assessment of reliability and validity. The first author reviewed all transcribed segments from the first 20 tapes and coded each segment independently. Interrater agreement was assessed using the κ statistic.12 To determine whether reliability degraded with time, the lead author also coded transcribed segments from the last 20 tapes. Face validity was assessed through frequent discussion among the coinvestigators and by obtaining feedback from practicing physicians and patient focus groups. Content validity was assessed by monitoring the number of unclassifiable requests. Construct validity was evaluated quantitatively on the basis of tests of the following hypotheses: (1) patients with worse health status will make a greater number of requests; (2) greater request fulfillment will be associated with greater patient satisfaction; and (3) more requests will be associated with longer visit times and more demanding visits as perceived by physicians. The relevant associations were assessed using Pearson product-moment correlation coefficients, t tests, chi-square tests, and analysis of variance, as appropriate, using Stata software, release 5.0 (Stata Corporation, College Station, Texas).13 Associations between patient requests and physicians’ perceptions of visit time, and those between patient requests and physicians’ perceptions of the visit’s demands, were assessed using multiple linear regression, with Huber-White adjustment of standard errors to account for clustering of patients by physician.14 Power to identify bivariate correlations of moderate size (r >0.30) exceeded 0.90 for all inferential tests of significance. Two-tailed P values less than .05 were considered statistically significant. Explicit corrections for multiple statistical comparisons were not made.

Results

Interrater Agreement

On review of the first 20 cases, the lead coder identified and transcribed a total of 147 requests. Overall agreement between the lead and secondary coder was 94% (κ = 0.93; P <.001), indicating excellent agreement beyond chance. Of the 9 coding disagreements, 2 were “major” (one coder classified a request as an “action request” and the other as an “information request”). There was no degradation of interrater reliability over time (agreement for the last 20 cases = 95%; κ = 0.94; P <.001).
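The κ statistic corrects raw percentage agreement for the agreement two coders would reach by chance alone, given their marginal labeling frequencies. A minimal sketch of the calculation for two coders follows; the labels shown are hypothetical, not data from the study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if each coder assigned labels independently,
    # in proportion to that coder's own marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for 10 transcribed request segments:
lead      = ["info", "info", "info", "action", "info",
             "action", "info", "info", "action", "info"]
secondary = ["info", "info", "info", "action", "info",
             "action", "info", "action", "action", "info"]
kappa = cohens_kappa(lead, secondary)  # 90% raw agreement, kappa ~ 0.78
```

Note how κ sits below the raw 90% agreement: with only two categories, a fair amount of agreement is expected by chance, which is why the paper reports both figures.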

Prevalence of Patient Requests

Table 2 shows that the 139 patients made 772 requests (mean = 5.6; range = 0 to 32). Of these, 619 were requests for information (mean = 4.5 requests per patient) and 153 were requests for action (mean = 1.1). For any given patient, the numbers of information and action requests were only weakly correlated (r = 0.18; P = .04; data not shown in table). The most common information requests involved questions about medications or treatments (191 requests) and about symptoms, problems, or diseases (178 requests). The most prevalent action request was for medications or treatments (Table 2). Among the 772 requests, only 33 (4.3%) were not classifiable into 1 of the 17 standing categories and had to be coded as “other requests for information” or “other requests for action.”

Patient Requests in Relation to Health Status

In assessing the construct validity of TORP, we hypothesized that patients with worse health perceptions, greater health worry, and more chronic diseases would make more requests of their physicians. As shown in Table 3, patients who rated their general health more positively made fewer total requests (r = -0.17; P = .048). The inverse relationship between health perceptions and requests was stronger for action requests (r = -0.25; P = .004) than for information requests (r = -0.11; P = .19). Greater health worry or concern was marginally associated with making more information requests. Having more chronic diseases was associated with more action requests (r = 0.32; P = .0002). Taken together, these results suggest that greater illness burden (as reflected by general health perceptions and number of chronic conditions) is associated with more health care resource needs, while greater health-related anxiety is associated with more informational needs.

 

 

Patient Request Fulfillment and Visit Satisfaction

Our second hypothesis was that patients whose requests were more frequently fulfilled would report greater visit satisfaction. We created 2 indicators of request fulfillment (or nonfulfillment) at the patient level according to the coder’s judgment: the number of unfulfilled requests (mean = 0.55; SD = 1.3; median = 0; range = 0-9) and the proportion of unfulfilled requests (mean = 7.5%; median = 0; range = 0%-60%). Mean patient satisfaction with the visit was 4.48 (SD = 0.65) on a 1-to-5 scale (5 = excellent). Patient satisfaction was significantly and inversely correlated with the total number of unfulfilled requests (r = -0.32; P <.001). This relationship appeared to be driven more by action requests (r = -0.39; P <.001) than information requests (r = -0.21; P = .015). There were no significant associations between satisfaction and the proportion of unfulfilled requests. Compared with patients without any unfulfilled action requests (n = 112), those with one or more unfulfilled requests (n = 23) had lower mean satisfaction (4.21 vs 4.54; P = .03).

In a subsidiary analysis, we compared the 22 visits in which patients and physicians negotiated a request with the 117 visits in which no negotiation occurred. There were no significant differences in patient-reported satisfaction with these 2 types of visits (mean = 4.3 vs 4.5; P = .18), suggesting that the quality of the negotiation process may be more important in influencing patient evaluations than the presence or absence of negotiation.

Patient Requests and Physician Perceptions of the Visit

As a final test of TORP, we hypothesized that visits involving many patient requests would take more time and would be perceived by physicians as more demanding. In linear regression analyses adjusted for clustering by physician, more information requests (but not action requests) were associated with increased physician-reported visit duration (P = .017, data not shown). Visits in which patients made more requests were rated by physicians as more demanding (r = 0.40 for total requests; r = 0.35 for information requests; and r = 0.29 for action requests; all P values <.001). Using multiple regression (with adjustment for clustering) to control for patients’ general health perceptions, the number of chronic diseases, physician-reported visit length (in minutes), and visit type (new, follow-up, or urgent care), total requests remained significantly associated with the perceived demands of the visit (regression coefficient = 0.05; P = .006; data not shown).

Discussion

TORP fills an important methodologic void for researchers interested in understanding how patient requests and physician responses influence clinical effectiveness. Our investigation demonstrates that TORP is capable of capturing and categorizing patients’ requests in adult primary care medicine. This coding system exhibits excellent reliability in the hands of trained coders and is relatively easy to apply in real time. TORP also measures meaningful phenomena as demonstrated by the significant associations between patient requests and patient health status, request fulfillment and visit satisfaction, and patients’ request behavior and physicians’ perceptions of the demands of the visit.

To our knowledge, TORP is the first direct-observation system designed to identify, classify, and enumerate patients’ requests and physicians’ responses in office practice. TORP may be usefully compared with 4 other popular coding schemes. The Roter Interactional Analysis System (RIAS) is a major refinement of previous work by Bales.15 It is a reliable and valid system that has been used with success in several studies16-19 evaluating the relationship between a clinician’s communication style and health care outcomes. The unit of analysis is the utterance (smallest meaningful unit of speech); the emphasis is on process rather than content; and the raw data consist of audiotapes or videotapes. Unlike TORP, RIAS does not code the content of patients’ requests for information, and it has a single “request for services” code that is used when the patient makes “a direct appeal to the physician’s authority.”

The Davis Observation Code (DOC) is an analysis system designed specifically for primary care.20 The unit of analysis is time (10-second blocks); the emphasis is on content (eg, the proportion of time spent discussing prevention); and data may be acquired either from videotapes or real-time observation. As with RIAS, there is no specific mechanism within the DOC system for extracting and classifying patient requests. RIAS and DOC are validated systems, but neither was specifically intended to examine patients’ requests.

In contrast to RIAS and DOC, the systems developed by Like and Zyzanski6 and by Eisenthal and coworkers21 provide for a detailed categorization of patients’ wishes. Like and Zyzanski’s Patient Request for Services Scale identified 5 clusters of desires: medical information, psychosocial assistance, therapeutic listening, general health advice, and biomedical treatment. Eisenthal and colleagues distinguished between the desired focus or objective and the desired form or method. For example, a patient might want pain relief (focus) achieved through the prescription of a narcotic analgesic (method). Both systems stress assessment of patients’ self-reported desires. TORP shares the same general objectives as these 2 systems but brings a detailed taxonomy to actual clinical behavior.

 

 

TORP was developed with input from clinicians, researchers, and patients. It is organized to reflect the major categories of patient-initiated interaction in primary care settings. The system relies on real-time coding during observation or from audiotapes (rather than transcripts) because we believe some requests are difficult to identify without hearing the requester’s intonation. Although there were relatively few uncodable requests, future versions of TORP will need to incorporate several new request categories.

Although we believe TORP is a useful system that could be productively applied to analysis of physician-patient interactions in a variety of settings, several opportunities for improvement remain. First, procedures for ensuring unitizing reliability (the ability of 2 raters to agree that a given segment of speech represents a request) should be developed and evaluated. Some types of requests may be easier to identify than others. Second, the rapidly changing health care environment virtually guarantees that any system for coding patient and physician behavior will require periodic updating. For example, as newer managed care models become dominant, request and response categories will be needed that account for the complex relationships among employers, insurers, medical groups, and patients. Third, codes are needed to acknowledge the involvement of family caregivers, especially in pediatric and geriatric settings. Fourth, greater attention to physician responses (including how clinicians promote effective negotiation) is needed. Fifth, TORP places a major emphasis on content; a more refined system that acknowledges form and emotionality may be needed for some research questions. One way to address this limitation would be to use TORP with an existing analysis system, such as RIAS.

More fundamentally, additional research is required to help researchers decide when direct observation is needed to understand critical elements of visit dynamics and when other data sources (such as patient or physician self-report, chart review, or administrative data) will suffice.22,23 Although audio-recording of visits can be intrusive and the coding of tapes is time consuming, direct observation is sometimes necessary because available evidence does not inspire optimism about the reliability of patient and physician reports of visit content.24-26 It is unlikely that reliance on self-report data alone can adequately support research on the give-and-take of clinical interactions.

Conclusions

TORP represents a new approach for understanding patients’ requests and physicians’ responses in office practice. This analysis system will provide new insights into a fundamental aspect of the physician-patient relationship that cannot be assessed by other means. By highlighting problematic requests and identifying successful and unsuccessful strategies for clinical negotiation, TORP may ultimately help clinicians to better meet patients’ needs in an increasingly demanding health care environment.

Acknowledgments

Data for this project were collected while Dr Kravitz was a Picker-Commonwealth Faculty Scholar. Analysis was performed with support from the Agency for Health Care Policy and Research (R03 HS09812-01). The authors thank Deirdre Antonius for coordinating the data collection effort, Shannon Quinlan for coding the audiotapes, and Charles E. Lewis, PhD, for providing mentorship and guidance.

 

BACKGROUND: The goal of our investigation was to facilitate research on clinical negotiation between patients and physicians by developing a reliable and valid classification system for patients’ requests in office practice.

METHODS: We developed the Taxonomy of Requests by Patients (TORP) using input from researchers, clinicians, and patient focus groups. To assess the system’s reliability and validity, we applied TORP to audiotaped encounters between 139 patients and 6 northern California internists. Reliability was assessed with the k statistic as a measure of interrater agreement. Face validity was assessed through expert and patient judgment of the coding system. Content validity was examined by monitoring the incidence of unclassifiable requests. Construct valdity was evaluated by examining the relationship between patient requests and patient health status; patient request fulfillment and patient satisfaction; and patient requests and physician perceptions of the visit.

RESULTS: The 139 patients made 772 requests (619 requests for information and 153 requests for physician action). Average interrater agreement across a sample of 40 cases was 94% (k = 0.93; P <.001). Patients with better health status made fewer requests (r = -0.17; P = .048). Having more chronic diseases was associated with more requests for physician action (r = 0.32; P = .0002). Patients with more unfulfilled requests had lower visit satisfaction (r = -0.32; P <.001). More patient requests was also associated with physician reports of longer visit times (P = .016) and increased visit demands (P = .006).

CONCLUSIONS: Our study provides evidence that TORP is a reliable and valid system for capturing and categorizing patients’ requests in adult primary care. Further research is needed to confirm the system’s validity, expand its applicability, and explore its usefulness as a tool for studying clinical negotiation.

Requests are the primary means of patient-initiated action in office practice. But these requests can be problematic because they consume time and resources. In particular, patients’ requests for diagnostic tests, medications, and referrals can be costly to capitated practices and may cause physician-patient discord if not handled appropriately. Patients who participate actively in their own care, however, often achieve better outcomes than those who do not.1 Managing the negotiation triggered by these requests is a fundamental clinical skill. Unfortunately, few empiric data are available to help physicians select effective negotiation strategies. One barrier to necessary research is the lack of a reliable, valid, and comprehensive system for describing and classifying patients’ requests.

Uhlmann and colleagues2 defined patient requests as “desires explicitly communicated [to the physician] through either verbal or written language.” In their formulation, desires are defined as wishes regarding medical care. Requests in turn are defined as desires that the patient communicates to the physician.

The definition of patient requests proposed by Uhlmann and coworkers is operationally explicit. However, few studies of patient requests have adhered to this definition. For Lazare and colleagues3 requests were “what patients wish or hope will occur”; for DelVecchio and coworkers4 they were ways patients indicate to the research assistant how the “clinic can help you at this time”; for Uhlmann and colleagues,5 “health problems you feel should be dealt with today”; for Like and Zyzanski,6 the “types of help [patients] would like to receive at that day’s visit”; for Eisenthal and coworkers,7 responses to the question, “How do you hope the doctor (or clinic) can be of help to you today?”; and for Valori and colleagues,8 requests were defined as previsit desires for “explanation and reassurance, for emotional support, and for investigation and treatment.”

A common feature of most of this literature is the blending of “requests” (what patients ask for) with “desires” (what patients want) and “expectancies” (what patients think their physicians will do). Previsit patient surveys can only elicit desires and expectancies, while requests are more readily assessed by postvisit patient or physician reports or by direct observation. The operational distinction between desires and requests is important if we are to focus on how patients influence the content of their visits by asking questions or making statements that affect physician behavior. Some desires (eg, diagnostic imaging) may be more frequently converted into explicit requests than other desires (eg, therapeutic listening).

As a method for studying patients’ requests, direct observation using audiorecording or videorecording has several advantages over other approaches, such as patient or physician reports. First, patients’ requests and physicians’ responses can be captured precisely by recording them. Second, tapes (or transcripts) can be preserved and used for reliability checking and post-hoc analyses. Third, behavioral observation is the only method that can capture the interactional dynamics of clinical negotiations. Although these advantages are countered by a potential Hawthorne effect, this bias is manageable.9 Existing systems for the analysis of interactions were not specifically designed to describe the content of clinical negotiation. Therefore, we developed a new system called Taxonomy of Requests by Patients (TORP) for classifying patient requests and physician responses in office practice. The main features of TORP are that it relies on direct observation, focuses on request content, can be applied in real time, and is designed for use in general medical settings.

 

 

Our goal was to produce a classification system for patients’ requests that would be useful in understanding the links between patients’ unarticulated desires and expectations, patients’ articulated requests, physicians’ provision of health care services, and patients’ and physicians’ perceptions of the visit and of each other. We hypothesized that the characteristics, needs, and attitudes of patients and physicians would influence clinical negotiation Figure 1. Clinical negotiation, in turn, was posited to affect patient well-being and physician perceptions of the visit. In this schema, the negotiation is central. Patients are more than the passive recipients of doctors’ actions; they influence the clinical encounter through use of their own linguistic resources.

Methods

Development of the Taxonomy

On the basis of clinical experience and preliminary discussions, our research group defined patient requests as:

… an expression of hope or desire that the physician provide information or perform action. Requests may be expressed as questions, commands, statements, or conjecture. Most questions are requests, except rhetorical questions (“Who do you think I am?”), exclamations (“You’re kidding, aren’t you?”), questions related to the mechanics of the physical examination (“Where should I sit?”), and chatting on topics unrelated to health or medicine (“It’s sure been hot, hasn’t it?”).

Following this definition, our group generated an initial set of categories that included requests for examinations, tests, prescriptions, referrals, social or psychological help, and information. These categories were then reviewed in general terms by colleagues and by 2 patient focus groups. The focus groups consisted of adult patients receiving care from an academic general medicine clinic and a group-model health maintenance organization. The sessions were 90 minutes long, and the patients were asked to describe what they wanted from their physicians, relate any recent experiences with physicians that fell short of expectations, and comment on the sorts of things they might ask of their physician. Using this input, we revised the original set of categories and applied it to a set of audiotapes obtained from a convenience sample of 20 adult general medicine outpatients visiting a small single-specialty group practice. Following review of these tapes, additional categories were added, and others were amended or deleted. There seemed to be a natural division between requests for information and requests for action.

The final taxonomy (TORP) is shown in Table 1. There are 11 categories of patient requests for information and 8 categories of patient requests for action. In addition, physician responses to patient requests are coded as 1 of 8 mutually exclusive categories that are modified from Roter and colleagues:10 (1) ignores; (2) acknowledges only; (3) fulfills (performs action or provides requested information); (4) partially fulfills; (5) negotiates, with fulfillment; (6) negotiates, with partial fulfillment; (7) negotiates, with denial; or (8) denies.
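
For readers building coding software around TORP, the 8 mutually exclusive physician response codes listed above map naturally onto an enumeration. The sketch below is a hypothetical Python illustration; the request label string is invented for the example and is not an official TORP category name.

```python
from enum import Enum

class Response(Enum):
    """The 8 mutually exclusive physician response codes in TORP."""
    IGNORES = 1
    ACKNOWLEDGES_ONLY = 2
    FULFILLS = 3
    PARTIALLY_FULFILLS = 4
    NEGOTIATES_WITH_FULFILLMENT = 5
    NEGOTIATES_WITH_PARTIAL_FULFILLMENT = 6
    NEGOTIATES_WITH_DENIAL = 7
    DENIES = 8

# A coded request pairs a verbatim transcription with its codes
# (the request label here is a made-up placeholder)
coded = {"text": "Can I get a refill?",
         "request": "action: medication/treatment",
         "response": Response.FULFILLS}
print(coded["response"].value)  # → 3
```

Representing the response codes as a closed enumeration, rather than free text, enforces the mutual exclusivity the taxonomy requires.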

Evaluation of the Taxonomy

Data collection. To assess the reliability and validity of TORP in office practice, we applied it to 139 physician-patient encounters selected at random from 318 studied as part of a larger project on patients’ expectations for care. Details of that study are described elsewhere.11 To summarize, data were collected in 1994 from a community-based university-affiliated 6-physician general internal medicine practice in northern California. Patients were eligible for enrollment if they were at least 18 years of age, could speak and understand English, had a telephone, and had scheduled an office visit at least 1 day in advance.

Using patient appointment lists obtained the day before the scheduled visit, we contacted 503 eligible individuals; 396 (79%) agreed to participate. Seventy-eight patients failed to attend their appointment, arrived late, withdrew consent, or could not be successfully audiotaped, leaving complete data for 318 patients. Of those, we randomly selected 139 patients for inclusion in our study. The mean age of patients in this sample was 52 years (standard deviation [SD] = 16); 49% were men; 72% were white. Thirty-five percent had a college degree, and the median family income range was $40,000 to $49,000. There were no meaningful differences in age, sex, race, education, or income between the 139 randomly selected individuals and the 179 remaining patients.

Just before the visit, all patients were asked about demographic characteristics and health status. All encounters were audiotaped using unobtrusive equipment. After the visit, patients completed postvisit questionnaires that included questions about visit satisfaction, and physicians reported on the type of visit, medical diagnoses, interventions requested (by the patients), interventions performed, and the extent to which they perceived the visit to be demanding.

Measures. Patients were asked about demographic characteristics (age, sex, education, income, and employment status) with straightforward questions. We evaluated health status in terms of the patients’ health perceptions (“In general, would you say your health is: excellent, very good, good, fair, poor?”); health worry (“How worried are you about your health?” and “How concerned are you that you might have a serious disease or condition today: extremely…not at all?” [α reliability for the 2-item scale = 0.79]); and a chronic disease count derived from a 12-item checklist completed by the treating physician. Patient satisfaction with the visit was assessed using the Ware and Hays12 5-item visit-specific scale (α = 0.90).

We obtained physicians’ perceptions of how demanding the visit was by using a brief form with a single question and 5-point response scale (“Compared to your average patient visit, how demanding would you rate this visit in terms of the amount of effort required?” 1 = far more demanding than average; 5 = far less demanding).

Coding procedures. A research assistant reviewed all 139 audiotapes selected for this analysis. After identifying a patient request, she transcribed the request verbatim, assigned an appropriate request code and response code, and continued listening until the visit was over. A request-response exchange was coded as a “negotiation” when the physician’s initial demurral was met by a counter-request or demand from the patient. When a physician’s ultimate response to a patient request differed from the physician’s initial response, the lead coder recorded both an initial and final response code. Variables were created to reflect, at the patient level, the number of requests made, the number and proportion of requests not fulfilled, and the number of requests negotiated before ultimate fulfillment.

Assessment of reliability and validity. The first author reviewed all transcribed segments from the first 20 tapes and coded each segment independently. Interrater agreement was assessed using the κ statistic.12 To determine whether reliability degraded with time, the lead author also coded transcribed segments from the last 20 tapes. Face validity was assessed through frequent discussion among the coinvestigators and by obtaining feedback from practicing physicians and patient focus groups. Content validity was assessed by monitoring the number of unclassifiable requests. Construct validity was evaluated quantitatively on the basis of tests of the following hypotheses: (1) patients with worse health status will make a greater number of requests; (2) greater request fulfillment will be associated with greater patient satisfaction; and (3) more requests will be associated with longer visit times and more demanding visits as perceived by physicians. The relevant associations were assessed using Pearson product-moment correlation coefficients, t tests, chi-square tests, and analysis of variance, as appropriate, using Stata software, release 5.0 (Stata Corporation, College Station, Texas).13 Associations between patient requests and physicians’ perceptions of visit time, and those between patient requests and physicians’ perceptions of the visit’s demands, were assessed using multiple linear regression, with Huber-White adjustment of standard errors to account for clustering of patients by physician.14 Power to identify bivariable correlations of moderate size (r >0.30) exceeded 0.90 for all inferential tests of significance. Two-tailed P values less than .05 were considered statistically significant. Explicit corrections for multiple statistical comparisons were not made.
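
The chance-corrected agreement statistic used here (Cohen’s κ) can be reproduced from two coders’ parallel label sequences: observed agreement is compared with the agreement expected from each coder’s marginal code frequencies. This is a minimal sketch with invented codes, not study data.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Interrater agreement corrected for chance (Cohen's kappa)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement from each rater's marginal code frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical request codes assigned by two coders (not study data)
a = ["info", "info", "action", "info", "action", "info"]
b = ["info", "info", "action", "action", "action", "info"]
print(round(cohen_kappa(a, b), 2))  # → 0.67
```

κ near 1 indicates near-perfect agreement beyond chance; the 0.93 reported below is well above the 0.67 of this toy example.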

Results

Interrater Agreement

On review of the first 20 cases, the lead coder identified and transcribed a total of 147 requests. Overall agreement between the lead and secondary coder was 94% (κ = 0.93; P <.001), indicating excellent agreement beyond chance. Of the 9 coding disagreements, 2 were “major” (one coder classified a request as an “action request” and the other as an “information request”). There was no degradation of interrater reliability over time (agreement for the last 20 cases = 95%; κ = 0.94; P <.001).

Prevalence of Patient Requests

Table 2 shows that the 139 patients made 772 requests (mean = 5.6; range = 0 to 32). Of these, 619 were requests for information (mean = 4.5 requests per patient) and 153 were requests for action (mean = 1.1). For any given patient, the numbers of information and action requests were only weakly correlated (r = 0.18; P = .04; data not shown in table). The most common information requests involved questions about medications or treatments (191 requests) and about symptoms, problems, or diseases (178 requests). The most prevalent action request was for medications or treatments (Table 2). Among the 772 requests, only 33 (4.3%) were not classifiable into 1 of the 17 standing categories and had to be coded as “other requests for information” or “other requests for action.”

Patient Requests in Relation to Health Status

In assessing the construct validity of TORP, we hypothesized that patients with worse health perceptions, greater health worry, and more chronic diseases would make more requests of their physicians. As shown in Table 3, patients who rated their general health more positively made fewer total requests (r = -0.17; P = .048). The inverse relationship between health perceptions and requests was stronger for action requests (r = -0.25; P = .004) than for information requests (r = -0.11; P = .19). Greater health worry or concern was marginally associated with making more information requests. Having more chronic diseases was associated with more action requests (r = 0.32; P = .0002). Taken together, these results suggest that greater illness burden (as reflected by general health perceptions and number of chronic conditions) is associated with more health care resource needs, while greater health-related anxiety is associated with more informational needs.

Patient Request Fulfillment and Visit Satisfaction

Our second hypothesis was that patients whose requests were more frequently fulfilled would report greater visit satisfaction. We created 2 indicators of request fulfillment (or nonfulfillment) at the patient level according to the coder’s judgment: the number of unfulfilled requests (mean = 0.55; SD = 1.3; median = 0; range = 0-9) and the proportion of unfulfilled requests (mean = 7.5%; median = 0; range = 0%-60%). Mean patient satisfaction with the visit was 4.48 (SD = 0.65) on a 1-to-5 scale (5 = excellent). Patient satisfaction was significantly and inversely correlated with the total number of unfulfilled requests (r = -0.32; P <.001). This relationship appeared to be driven more by action requests (r = -0.39; P <.001) than information requests (r = -0.21; P = .015). There were no significant associations between satisfaction and the proportion of unfulfilled requests. Compared with patients without any unfulfilled action requests (n = 112), those with one or more unfulfilled requests (n = 23) had lower mean satisfaction (4.21 vs 4.54; P = .03).
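
The correlations above are ordinary Pearson product-moment coefficients, which can be computed directly from paired patient-level values. The example below uses invented per-patient counts and satisfaction scores, not the study data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: per-patient unfulfilled-request counts vs.
# visit satisfaction on the 1-to-5 scale (not the study data)
unfulfilled = [0, 0, 1, 2, 0, 3, 1, 0]
satisfaction = [5, 5, 4, 3, 5, 2, 4, 4]
print(round(pearson_r(unfulfilled, satisfaction), 2))  # → -0.95
```

A strongly negative r, as in this toy example, mirrors the direction (though not the magnitude) of the r = -0.32 reported above.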

In a subsidiary analysis, we compared the 22 visits in which patients and physicians negotiated a request with the 117 visits in which no negotiation occurred. There were no significant differences in patient-reported satisfaction with these 2 types of visits (mean = 4.3 vs 4.5; P = .18), suggesting that the quality of the negotiation process may be more important in influencing patient evaluations than the presence or absence of negotiation.

Patient Requests and Physician Perceptions of the Visit

As a final test of TORP, we hypothesized that visits involving many patient requests would take more time and would be perceived by physicians as more demanding. In linear regression models adjusted for clustering by physician, more information requests (but not more action requests) were associated with increased physician-reported visit duration (P = .017; data not shown). Visits in which patients made more requests were rated by physicians as more demanding (r = 0.40 for total requests; r = 0.35 for information requests; and r = 0.29 for action requests; all P values <.001). Using multiple regression (with adjustment for clustering) to control for patients’ general health perceptions, the number of chronic diseases, physician-reported visit length (in minutes), and visit type (new, follow-up, or urgent care), total requests remained significantly associated with the perceived demands of the visit (regression coefficient = 0.05; P = .006; data not shown).
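
The Huber-White adjustment used in these models is a “sandwich” variance estimator that allows residuals to be correlated within clusters (here, patients sharing a physician). The sketch below is an illustrative implementation on simulated data; all numbers are invented and do not come from the study.

```python
import numpy as np

def ols_cluster_se(x, y, groups):
    """OLS coefficients with Huber-White (sandwich) standard errors
    that allow residuals to be correlated within clusters, as when
    patients are clustered by physician."""
    X = np.column_stack([np.ones(len(y)), x])   # add an intercept column
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                    # ordinary least squares
    resid = y - X @ beta
    # "Meat" of the sandwich: sum of per-cluster score outer products
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg, ug = X[groups == g], resid[groups == g]
        score = Xg.T @ ug
        meat += np.outer(score, score)
    V = XtX_inv @ meat @ XtX_inv                # sandwich variance
    return beta, np.sqrt(np.diag(V))

# Simulated example: perceived visit demand vs. number of requests,
# with 60 patients clustered under 3 physicians (values invented)
rng = np.random.default_rng(0)
requests = rng.integers(0, 10, size=60).astype(float)
physician = np.repeat([0, 1, 2], 20)
demand = 2 + 0.05 * requests + rng.normal(0, 0.3, size=60)
beta, se = ols_cluster_se(requests, demand, physician)
print(beta.shape, se.shape)
```

Ignoring the clustering would typically understate the standard errors, which is why the study applied this adjustment before interpreting the regression coefficients.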

Discussion

TORP fills an important methodologic void for researchers interested in understanding how patient requests and physician responses influence clinical effectiveness. Our investigation demonstrates that TORP is capable of capturing and categorizing patients’ requests in adult primary care medicine. This coding system exhibits excellent reliability in the hands of trained coders and is relatively easy to apply in real time. TORP also measures meaningful phenomena as demonstrated by the significant associations between patient requests and patient health status, request fulfillment and visit satisfaction, and patients’ request behavior and physicians’ perceptions of the demands of the visit.

To our knowledge, TORP is the first direct-observation system designed to identify, classify, and enumerate patients’ requests and physicians’ responses in office practice. TORP may be usefully compared with 4 other popular coding schemes. The Roter Interactional Analysis System (RIAS) is a major refinement of previous work by Bales.15 It is a reliable and valid system that has been used with success in several studies16-19 evaluating the relationship between a clinician’s communication style and health care outcomes. The unit of analysis is the utterance (smallest meaningful unit of speech); the emphasis is on process rather than content; and the raw data consist of audiotapes or videotapes. Unlike TORP, RIAS does not code the content of patients’ requests for information, and it has a single “request for services” code that is used when the patient makes “a direct appeal to the physician’s authority.”

The Davis Observation Code (DOC) is an analysis system designed specifically for primary care.20 The unit of analysis is time (10-second blocks); the emphasis is on content (eg, the proportion of time spent discussing prevention); and data may be acquired either from videotapes or real-time observation. As with RIAS, there is no specific mechanism within the DOC system for extracting and classifying patient requests. RIAS and DOC are validated systems, but neither was specifically intended to examine patients’ requests.

In contrast to RIAS and DOC, the systems developed by Like and Zyzanski6 and by Eisenthal and coworkers21 provide for a detailed categorization of patients’ wishes. Like and Zyzanski’s Patient Request for Services Scale identified 5 clusters of desires: medical information, psychosocial assistance, therapeutic listening, general health advice, and biomedical treatment. Eisenthal and colleagues distinguished between the desired focus or objective and the desired form or method. For example, a patient might want pain relief (focus) achieved through the prescription of a narcotic analgesic (method). Both systems stress assessment of patients’ self-reported desires. TORP shares the same general objectives as these 2 systems but brings a detailed taxonomy to actual clinical behavior.

TORP was developed with input from clinicians, researchers, and patients. It is organized to reflect the major categories of patient-initiated interaction in primary care settings. The system relies on real-time coding during observation or from audiotapes (rather than transcripts) because we believe some requests are difficult to identify without hearing the requester’s intonation. Although there were relatively few uncodable requests, future versions of TORP will need to incorporate several new request categories.

Although we believe TORP is a useful system that could be productively applied to analysis of physician-patient interactions in a variety of settings, several opportunities for improvement remain. First, procedures for ensuring unitizing reliability (the ability of 2 raters to agree that a given segment of speech represents a request) should be developed and evaluated. Some types of requests may be easier to identify than others. Second, the rapidly changing health care environment virtually guarantees that any system for coding patient and physician behavior will require periodic updating. For example, as newer managed care models become dominant, request and response categories will be needed that account for the complex relationships among employers, insurers, medical groups, and patients. Third, codes are needed to acknowledge the involvement of family caregivers, especially in pediatric and geriatric settings. Fourth, greater attention to physician responses (including how clinicians promote effective negotiation) is needed. Fifth, TORP places a major emphasis on content; a more refined system that acknowledges form and emotionality may be needed when TORP is used for some research issues. One way to address this limitation would be to use TORP with an existing analysis system, such as RIAS.

More fundamentally, additional research is required to help researchers decide when direct observation is needed to understand critical elements of visit dynamics and when other data sources (such as patient or physician self-report, chart review, or administrative data) will suffice.22,23 Although audio-recording of visits can be intrusive and the coding of tapes is time consuming, direct observation is sometimes necessary because available evidence does not inspire optimism about the reliability of patient and physician reports of visit content.24-26 It is unlikely that reliance on self-report data alone can adequately support research on the give-and-take of clinical interactions.

Conclusions

TORP represents a new approach for understanding patients’ requests and physicians’ responses in office practice. This analysis system will provide new insights into a fundamental aspect of the physician-patient relationship that cannot be assessed by other means. By highlighting problematic requests and identifying successful and unsuccessful strategies for clinical negotiation, TORP may ultimately help clinicians to better meet patients’ needs in an increasingly demanding health care environment.

Acknowledgments

Data for this project were collected while Dr Kravitz was a Picker-Commonwealth Faculty Scholar. Analysis was performed with support from the Agency for Health Care Policy and Research (R03 HS09812-01). The authors thank Deirdre Antonius for coordinating the data collection effort, Shannon Quinlan for coding the audiotapes, and Charles E. Lewis, PhD, for providing mentorship and guidance.

References

 

1. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med 1985;102:520-8.

2. Uhlmann RF, Inui TS, Carter WB. Patient requests and expectations: definitions and clinical applications. Med Care 1984;22:681-5.

3. Lazare A, Eisenthal S, Wasserman L, Harford TC. Patient requests in a walk-in clinic. Compr Psychiatry 1975;16:467-77.

4. DelVecchio Good MJ, Good BJ, Nassi AJ. Patient requests in primary health care settings: development and validation of a research instrument. J Behav Med 1983;6:151-68.

5. Uhlmann RF, Carter WB, Inui TS. Fulfillment of patient requests in a general medicine clinic. Am J Public Health 1984;74:257-8.

6. Like R, Zyzanski SJ. Patient requests in family practice: a focal point for clinical negotiation. Fam Pract 1986;3:216-28.

7. Eisenthal S, Koopman C, Stoeckle JD. The nature of patients’ requests for physicians’ help. Acad Med 1990;65:401-5.

8. Valori R, Woloshynowych M, Bellenger N, Aluvihare V, Salmon P. The Patient Requests Form: a way of measuring what patients want from their general practitioner. J Psychosom Res 1996;40:87-94.

9. Arborelius E, Timpka T. In what way may videotapes be used to get significant information about the patient-physician relationship? Med Teacher 1990;12:197-208.

10. Roter D. The Roter method of interaction process analysis. Internal document, Johns Hopkins University; 1990.

11. Kravitz RL, Callahan EJ, Azari R, Antonius D, Lewis CE. Assessing patients’ expectations in ambulatory practice: does the measurement approach make a difference? J Gen Intern Med 1997;12:67-72.

12. Ware JE, Hays RD. Methods for measuring patient satisfaction with specific medical encounters. Med Care 1988;26:393-402.

13. Stata Corporation. Stata Statistical Software: release 5.0. College Station, Tex: Stata Corporation; 1997.

14. Huber PJ. The behavior of maximum likelihood estimates under nonstandard conditions. In: Proceedings of the fifth Berkeley symposium in mathematical statistics and probability. Berkeley, Calif: University of California, Berkeley Press; 1967.

15. Roter D, Hall JA. Doctors talking with patients/patients talking with doctors: improving communication in medical visits. Westport, Conn: Auburn House; 1992.

16. Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication: the relationship with malpractice claims among primary care physicians and surgeons. JAMA 1997;277:553-9.

17. Roter DL, Stewart M, Putnam SM, Lipkin M, Jr, Stiles W, Inui TS. Communication patterns of primary care physicians. JAMA 1997;277:350-6.

18. Hall JA, Irish JT, Roter DL, Ehrlich CM, Miller LH. Satisfaction, gender, and communication in medical visits. Med Care 1994;32:1216-31.

19. Wissow LS, Roter D, Bauman LJ, et al. Patient-provider communication during the emergency department care of children with asthma: the National Cooperative Inner-City Asthma Study, National Institute of Allergy and Infectious Diseases, NIH, Bethesda, Md. Med Care 1998;36:1439-50.

20. Callahan EJ, Bertakis KD. Development and validation of the Davis Observation Code. Fam Med 1991;23:19-24.

21. Eisenthal S, Koopman C, Stoeckle JD. The nature of patients’ requests for physicians’ help. Acad Med 1990;65:401-5.

22. Stange KC, Zyzanski SJ, Smith TF, et al. How valid are medical records and patient questionnaires for physician profiling and health services research? A comparison with direct observation of patient visits. Med Care 1998;36:851-67.

23. Stange KC, Zyzanski SJ, Jaen CR. Illuminating the black box: a description of 4454 patient visits to 138 family physicians. J Fam Pract 1998;46:377-89.

24. Gerbert B, Stone G, Stulbarg M, Gullion DS, Greenfield S. Agreement among physician assessment methods: searching for the truth among fallible methods. Med Care 1988;26:519-35.

25. Scheitel SM, Boland BJ, Wollan PC, Silverstein MD. Patient-physician agreement about medical diagnoses and cardiovascular risk factors in the ambulatory general medical examination. Mayo Clin Proc 1996;71:1131-7.

26. Temple W, Toews J, Fidler H, Lockyer JM, Taenzer P, Parboosingh EJ. Concordance in communication between surgeon and patient. Can J Surg 1998;41:439-45.

Issue
The Journal of Family Practice - 48(11)
Page Number
872-878