OBJECTIVE: We investigated the relationship between continuity of care and the quality of care received by patients with type 2 diabetes mellitus.
STUDY DESIGN: We used a cross-sectional patient survey and medical record review.
POPULATION: Consecutive patients with an established diagnosis of type 2 diabetes mellitus presenting to 1 of 6 clinics within the Residency Research Network of South Texas, a network of 6 family practice residencies affiliated with the University of Texas Health Science Center at San Antonio.
OUTCOMES MEASURED: Continuity was measured as the proportion of visits within the past year to the patient’s usual primary care provider. A quality of care score was computed based on the American Diabetes Association’s Provider Recognition Program criteria from data collected through medical record review and patient surveys. Each patient was awarded points based on the presence or absence of each criterion.
RESULTS: The continuity score was associated significantly with the quality of care score in the anticipated direction (r = .15, P = .04). Patients who had seen their usual providers within the past year were significantly more likely to have had an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis.
CONCLUSIONS: Continuity of care is associated with the quality of care received by patients with type 2 diabetes mellitus. Continuity of care may influence provider and patient behaviors in ways that improve quality. Further research on how continuity contributes to improved quality is needed.
- For patients with diabetes, continuity of care is associated with the quality of care: as continuity improves, so does the quality of care.
- Patients with diabetes who report that they have seen their usual primary care provider in the past year are more likely to have received an eye examination, a foot examination, 2 blood pressure measurements, and a lipid level analysis.
Studies of the care of adult diabetic patients in the primary care setting continue to document poor adherence to current guidelines for managing diabetes.1,2 One study of quality of care among diabetic patients in outpatient primary care offices found that Medicare patients often did not achieve recommended targets for blood glucose and lipid levels or blood pressure control and that glycosylated hemoglobin levels and cholesterol were not monitored at recommended intervals.3 As Blonde and colleagues pointed out, these variations in quality have no clear rationale or basis in scientific fact.4 Therefore, other explanations must be explored.
Berwick and others pointed out that quality of health care is determined most often by systems or processes rather than by individual behavior.5 One health care process that is important to primary care is continuity of care, or the development of a sustained relationship with a provider.6 Continuity of care is associated with favorable outcomes of care,7 including recognition of behavioral problems,8 patient adherence to physicians’ advice,9 being up to date on immunizations,10 effective communication between physician and patient, and the accumulated knowledge of the physician with regard to the patient’s history.11
In a previous study of continuity among patients with type 2 diabetes mellitus, patients with regular health care providers had improved glucose control and were more likely to have had a cholesterol measurement and influenza vaccination in the preceding year.12 These findings suggest that an understanding of the relation between continuity and quality might provide useful insights into improving the care diabetic patients receive. The purpose of this study was to examine the relation between continuity of care and the quality of care received by adult patients with type 2 diabetes mellitus.
Methods
Setting
The study was conducted at 6 clinics in 5 communities across south Texas. These clinics comprise the Residency Research Network of South Texas (RRNeST) and are in San Antonio, Corpus Christi, McAllen, Harlingen, and Laredo. The 174 family physicians at these sites serve a population that is predominantly Mexican American. A more detailed description of RRNeST has been published elsewhere.13
Participants
Patients at each site were eligible for the study if they reported an established diagnosis of type 2 diabetes of at least 1 year's duration. Patients were excluded if they were younger than 18 years or pregnant. To provide adequate opportunity for continuity, patients were also excluded if they had been attending the clinic for less than 1 year. We also excluded patients who were seeing residents in their first year of training, because these patients had experienced a change in their primary care provider within the past year when they were reassigned to a first-year resident.
Data collection and measures
A patient survey, offered in English or Spanish, included questions on demographics and on patient satisfaction with diabetes care adapted from the Provider Recognition Program survey, as described below. It also included questions on ambulatory health care use within the past year, using items from the Components of Primary Care Instrument.14 Consecutive patients who met the inclusion and exclusion criteria were asked by the office staff or their physicians to complete this survey. Patients returned the survey to staff or to a survey collection box, and results were kept confidential from their physicians. Patient recruitment occurred over a 6-month period from October 1998 to March 1999.
Quality of care measurement
Quality of care measures are traditionally classified into 3 domains: structure, process, and outcomes.15 Structural measures consider whether the components of the health care delivery system are accessible and of high quality. Process indicators answer the question: Was the right thing done at the right time in the right place to the right person? An outcome measure of quality considers whether health care improves or declines as a result of the care given and includes death, disability, disease, discomfort, and dissatisfaction.16
The American Diabetes Association’s Provider Recognition Program (http://www.ncqa.org/dprp), cosponsored by the National Committee for Quality Assurance, assesses key measures that have been carefully defined and tested for their relation to improved care for people with diabetes (Table 1).17 Provider Recognition Program measures are consistent with the Diabetes Quality Improvement Project measures (see www.dqip.org) but go beyond them by applying performance criteria to each measure. The Provider Recognition Program consists primarily of process measures (eg, was an eye examination performed in the past year?) and 2 outcome measures (glycosylated hemoglobin and diastolic blood pressure). In addition, the Provider Recognition Program includes survey measures of patient satisfaction, which many consider the fourth domain of quality.18
Individual items from the Provider Recognition Program were obtained through a medical record abstraction for each patient who returned a completed survey. The chart abstractions were completed at each site by nurses or physicians but not by the primary care physician of the patient. A standard chart abstraction form addressed each item of the Provider Recognition Program measures.
The Provider Recognition Program patient satisfaction items were administered in the patient survey portion of the data collection and combined with the medical record data. A quality score was derived for each patient by using the Provider Recognition Program established scoring criteria, as shown in Table 1.
TABLE 1
American Diabetes Association and National Committee for Quality Assurance Provider Recognition Program measures
Measure | Frequency/patient response | Data source | Score |
---|---|---|---|
HbA1c | Once/year | Chart | 10.0 |
HbA1c < 8% | | | 2.5 |
HbA1c < 10% | | | 2.5 |
Eye examination | Once/year | Chart | 10.0 |
Foot examination | Once/year | Chart | 10.0 |
BP frequency | Twice/year | Chart | 10.0 |
Diastolic BP < 90 mm Hg | | | 5.0 |
Urine protein/microalbumin | Once/year | Chart | 10.0 |
Lipid profile | Once/year | Chart | 10.0 |
Self-management education | Once/year | Survey | 10.0 |
Nutrition counseling | Once/year | Survey | 10.0 |
Self-monitor glucose | Yes or no | Survey | |
Not on insulin | | | 1.0 |
On insulin | | | 4.0 |
Tobacco-use status and counseling if needed | Yes or no | Chart | 10.0 |
Patient satisfaction | Excellent, very good, good, fair, or poor | Survey | |
Overall DM care | | | 1.0 |
Questions answered | | | 1.0 |
Access for emergencies | | | 1.0 |
Laboratory results explained | | | 1.0 |
Courtesy/personal manner of provider | | | 1.0 |
Total | | | 110.0 |
BP, blood pressure; DM, diabetes mellitus; Hb, hemoglobin.
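To make the Table 1 scoring rule concrete, the following is a minimal sketch, not the study's scoring code, of how a per-patient quality score could be tallied from abstracted chart and survey items. Every field name in the `record` dictionary is hypothetical, and the assumption that a satisfaction item earns its point only for a top rating is ours rather than stated in Table 1.

```python
# Minimal sketch (not the study's actual code) of the Table 1 scoring.
# All keys in `record` are hypothetical; point values follow Table 1.

def quality_score(record: dict) -> float:
    """Sum Provider Recognition Program points for one patient (maximum 110)."""
    score = 0.0

    # Chart-documented process measures: 10 points each if done in the past year.
    for item in ("hba1c_tested", "eye_exam", "foot_exam", "bp_measured_twice",
                 "urine_protein", "lipid_profile", "tobacco_status_and_counseling"):
        if record.get(item):
            score += 10.0

    # Outcome measures: partial credit for glycemic and blood pressure control.
    hba1c = record.get("hba1c_value")
    if hba1c is not None:
        if hba1c < 8:
            score += 2.5
        if hba1c < 10:
            score += 2.5
    dbp = record.get("diastolic_bp")
    if dbp is not None and dbp < 90:
        score += 5.0

    # Survey-reported education measures: 10 points each.
    for item in ("self_management_education", "nutrition_counseling"):
        if record.get(item):
            score += 10.0

    # Self-monitoring of blood glucose, weighted by insulin use.
    if record.get("self_monitors_glucose"):
        score += 4.0 if record.get("on_insulin") else 1.0

    # Satisfaction items: 1 point each (assumed to require a top rating).
    for item in ("sat_overall_dm_care", "sat_questions_answered",
                 "sat_emergency_access", "sat_lab_results_explained",
                 "sat_provider_courtesy"):
        if record.get(item):
            score += 1.0

    return score
```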
Continuity measurement
Patients were asked to record the number of ambulatory physician visits to their usual provider, to another provider in the same office, or to any physicians outside of the usual provider’s office for the past 12 months. These items were adapted from the Components of Primary Care Instrument, a validated instrument for measuring the various components of primary care, including continuity.14 The responses to these questions were used to calculate a visit-based continuity of care score, the Usual Provider Continuity score. This score is calculated by dividing the number of visits to the usual provider by the total number of ambulatory visits. The continuity score ranged from 0 to 1, with a higher value representing a higher level of continuity. The Usual Provider Continuity score has been used in previous studies of continuity.19,20
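As an illustration only, the Usual Provider Continuity calculation described above reduces to a single ratio. The sketch below uses made-up variable names rather than the wording of the survey items.

```python
# Minimal sketch of the Usual Provider Continuity (UPC) score: visits to the
# usual provider divided by all reported ambulatory visits in the past year.
from typing import Optional

def upc_score(visits_usual: int, visits_same_office_other: int,
              visits_outside_office: int) -> Optional[float]:
    """Return UPC in [0, 1], or None if no visits were reported."""
    total = visits_usual + visits_same_office_other + visits_outside_office
    if total == 0:
        return None  # continuity is undefined when there were no visits
    return visits_usual / total

# Example: 6 of 8 reported visits were to the usual provider -> UPC = 0.75.
print(upc_score(6, 1, 1))
```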
Analysis
A t-test compared mean quality of care scores between patients who had and had not seen their usual physician in the past year. A Pearson bivariate correlation assessed the relationship between the Usual Provider Continuity score and the quality of care score. A chi-square test evaluated the association between seeing one’s usual physician in the past year and each quality of care indicator, with odds ratios used to gauge the strength of each association. A 2-level regression model examined the relationship between the Usual Provider Continuity score and the quality of care score. In the first level of the model, we entered age, education, sex, total number of clinic visits, and self-rated health status. To adjust for clinic-level effects on quality, a dummy variable was created for each clinic site in the first level of the regression model, with the San Antonio Family Health Center as the reference category. We entered the continuity score in the second level of the model to assess its relationship to quality of care after adjusting for the above variables.
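For readers who want the analytic steps laid out, here is an illustrative Python sketch, not the authors' code. It builds a small synthetic DataFrame with hypothetical column names (quality, upc, saw_usual, eye_exam, age, education, sex, total_visits, health_status, site) purely so the example runs; the hierarchical model is approximated by fitting the covariate block first and then adding the continuity score.

```python
# Illustrative sketch of the analyses described above; data are random placeholders,
# not study data, and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "quality": rng.normal(72, 14, n),
    "upc": rng.uniform(0, 1, n),
    "saw_usual": rng.integers(0, 2, n),
    "eye_exam": rng.integers(0, 2, n),
    "age": rng.normal(56, 12, n),
    "education": rng.integers(0, 16, n),
    "sex": rng.integers(0, 2, n),
    "total_visits": rng.integers(1, 20, n),
    "health_status": rng.integers(1, 6, n),
    "site": rng.integers(0, 6, n),
})

# t-test: quality score by whether the usual provider was seen in the past year.
seen = df.loc[df["saw_usual"] == 1, "quality"]
not_seen = df.loc[df["saw_usual"] == 0, "quality"]
print(stats.ttest_ind(seen, not_seen))

# Pearson correlation between the continuity (UPC) score and the quality score.
print(stats.pearsonr(df["upc"], df["quality"]))

# Chi-square test and odds ratio for one indicator (eg, eye examination).
table = pd.crosstab(df["saw_usual"], df["eye_exam"])
chi2, p, dof, _ = stats.chi2_contingency(table)
odds_ratio = (table.iloc[1, 1] * table.iloc[0, 0]) / (table.iloc[1, 0] * table.iloc[0, 1])
print(chi2, p, odds_ratio)

# Two-level regression: covariates and site dummies first, then the continuity score.
block1 = smf.ols("quality ~ age + education + sex + total_visits + "
                 "health_status + C(site)", data=df).fit()
block2 = smf.ols("quality ~ age + education + sex + total_visits + "
                 "health_status + C(site) + upc", data=df).fit()
print(block2.summary())
print(block2.rsquared - block1.rsquared)  # incremental variance explained by continuity
```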
Results
A total of 397 patients completed surveys between November 1999 and April 2000. Each site returned an average of 66 surveys, with a range of 9 to 121. There were 76 physicians represented by these 397 patients, for an average of 5.22 patients per physician. At 1 site, only 9 surveys were returned due to a lack of adequate clinic staffing. Earlier patient surveys conducted within this network demonstrated a refusal rate of less than 20%. The mean number of physicians participating at each site was 18.3, with a range of 2 to 30; 35.6% of physicians were faculty (range by site, 0% to 100%).
Patient demographics are shown in Table 2 and are compared with the characteristics of the general adult patient population from a previous study (Sandra K. Burge, PhD, oral communication, December 2001). Most subjects were Hispanic, female, and married. Half of the sample had less than a high school education, and 36% had no health insurance. The mean Continuity and Quality of Care scores are also shown in Table 2. There were no significant differences in continuity scores across clinic sites, but 2 sites had significantly higher Quality of Care scores.
The first set of analyses compared quality of care between those who had (90.1%) and those who had not (9.9%) seen their usual providers in the past year. The overall quality of care score was significantly higher for patients who reported that they had seen their usual providers in the past year (73.0 vs 67.1, P = .038). The association between patients having seen their usual providers in the past year and each quality indicator is shown in Table 3. Patients who had seen their usual providers were significantly more likely to have had an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis in the past year.
The second set of analyses examined the relation between the continuity (Usual Provider Continuity) score and quality of care. A total of 214 subjects had complete chart and survey data that allowed calculation of both continuity and quality of care scores. The overall quality of care score was significantly associated with the Usual Provider Continuity score in the hypothesized direction (r = .148, P = .03). As continuity improved, so did quality of care. In the 2-level multiple regression model, after adjusting for age, sex, education, total number of clinic visits, self-rated general health status, and clinic site, the relation between the Usual Provider Continuity score and the quality of care score remained significant (P = .03; Table 4). Total number of visits was not associated with the quality of care score.
TABLE 2
Characteristics of sample
Characteristic | Diabetic subjects | Adult clinic population |
---|---|---|
Mean (SD) age, y | 56.15 (12.34) | 41.4 |
% Female | 68.2 | 74 |
% Hispanic | 80.5 | 80 |
% Preferred Spanish survey | 19.2 | 19 |
% Married | 54.1 | 57.0 |
% Subjects with less than high school education | 49.8 | 29 |
% Subjects without health insurance | 36.6 | 31 |
Mean (SD) Usual Provider Continuity score | 0.72 (0.31) | NA |
Mean (SD) total visits | 7.75 (6.32) | NA |
Mean (SD) quality of care score | 72.3 (14.3) | NA |
NA, not available; SD, standard deviation. |
TABLE 3
Association between individual quality indicators and a visit to usual provider in past year
Quality indicator | OR (CI) |
---|---|
HbA1c in past year? | 1.76 (0.81–3.84) |
Eye examination in past year? | 1.99 (1.01–4.04)* |
Foot examination in past year? | 2.62 (1.27–5.41)* |
Blood pressure reading twice in past year? | 2.51 (1.07–5.94)* |
Lipid test in past year? | 4.11 (2.02–8.38)* |
Urine protein in past year? | 1.52 (0.76–3.05) |
Self-management education in past year? | 1.60 (0.75–3.43) |
Diet education in past year? | 1.04 (0.45–2.37) |
Self-monitoring of glucose? | 1.15 (0.52–2.56) |
Tobacco status and counseling? | 0.97 (0.38–2.46) |
Very satisfied with | |
Diabetes care overall? | 1.23 (0.54–2.81) |
Diabetes questions answered? | 1.32 (0.61–2.84) |
Access during emergencies? | 1.58 (0.69–3.61) |
Explanation of laboratory results? | 1.4 (0.55–3.90) |
Courtesy/personal manner of provider? | 1.46 (0.72–2.97) |
*P < .05. | |
CI, 95% confidence interval; Hb, hemoglobin; OR, odds ratio. |
TABLE 4
Regression model: continuity score and quality of care
Variable | Standardized beta | t | P |
---|---|---|---|
Age | .13 | 1.63 | .10 |
Sex | .02 | .21 | .83 |
Education level | .11 | 1.37 | .17 |
General health status | -.01 | -.06 | .95 |
Site A | .04 | .52 | .60 |
Site B | .20 | 2.25 | .02 |
Site C | .18 | 2.01 | .05 |
Site D | .03 | .38 | .71 |
Site E | -.02 | -.20 | .84 |
Total visits | .08 | 1.05 | .30 |
Continuity score | .17 | 2.24 | .03 |
Discussion
Patients who reported that they saw their regular providers in the past year had higher Quality of Care scores. Further, continuity of care received by diabetic patients was directly related to their overall quality of care. In a closer examination of the quality indicators, patients who reported that they had seen their usual providers within the past year were more likely to have received an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis.
Why should continuity be associated with quality of care? Flocke and colleagues found that continuity was associated with accumulated knowledge of the patient by the physician as well as the coordination of care.14 These processes of care may have contributed to higher quality of care for patients with type 2 diabetes: the usual provider may have recognized the need for eye examinations and lipid measurements and coordinated these referrals. In another study, continuity was significantly related to patient adherence to advice about behavioral risk factors.10 In a similar fashion, continuity may have encouraged patient adherence to recommended screening tests, such as keeping referrals for eye examinations or returning for a fasting lipid measurement.
The lack of a relationship between the patients’ reports of seeing their usual providers within the past year and the other quality of care indicators is also of interest. Systems may have been established in those clinics to ensure delivery of those services regardless of whether patients are seen by their usual providers. For example, referral for diet education and self-monitoring of blood glucose may have been delegated to clinic staff. Some indicators, such as glycosylated hemoglobin testing, may be performed at such high rates that there is too little variation in the measure to detect any relation to continuity; on chart review, approximately 95% of our sample had a glycosylated hemoglobin measured within the past year.
Although the relationship between continuity and quality of care was significant, it was also fairly weak (r = .148). Other barriers may have been more important than continuity in determining the quality of care provided to patients with type 2 diabetes. For example, to improve quality of care, clinicians must keep track of multiple indicators over long periods. Many current medical record systems offer inadequate support for this function. Because this structure may vary by clinic, we included clinic sites as dummy variables in the multiple regression model. Even after adjusting for clinic site, continuity was significantly associated with quality. However, 2 clinic sites had significantly higher mean quality of care scores than did the other sites. Upon closer examination, 1 clinic site had an electronic medical record with prompts for preventive services.
Several limitations of this study must be mentioned. Recall bias is a possibility; the continuity data were based on patient recall of physician office visits over a 12-month period. This is a nonrandom sample; we enrolled a consecutive sample of consenting patients from the clinic population. Thus, the sample may have been weighted heavily toward frequent attenders. Patients who were visually impaired, had low literacy skills, or had very poor health status may have declined participation in the study. We were able to collect performance data only from the primary care providers’ charts; if a patient had a blood pressure measurement or a glycosylated hemoglobin measurement recorded at another physician’s office, the primary care chart might not adequately document the overall quality of care received by the patient over the past 12 months. Another limitation is the predominant use of process indicators rather than outcome indicators, such as quality of life, morbidity, or mortality, as measures of quality of care.
The cross-sectional design of the study and the limitations of data collected create the possibility that an unmeasured confounder caused the relation between continuity and quality. It is possible that patients who were more aggressive about seeking care from their usual providers were also more likely to keep appointments for eye and foot examinations. It is also possible that patients who did not see their usual providers sought care only for acute illnesses and were willing to see any available provider. If so, the competing demands of patient care during the acute care visit may have prevented the provider from obtaining the necessary laboratory tests or referrals needed to improve the quality of diabetes care.21 The setting of the study, ie, residency clinics, might have limited the generalizability of these findings to other community family physician practices. With the help of their supervising physicians, residents might have overcome competing demands of practice to attend to preventive measures, leading us to underestimate the strength of the relation between continuity and quality.
Current changes in the financing and organization of health care create significant threats to a sustained relationship between a provider and a patient.22 In a recent report from the Community Tracking Survey, 1 of 6 consumers changed insurance plans in a 1-year period. Of those, 23% also changed their usual source of care.23 Understanding how the physician–patient relationship might influence quality of care and patient outcomes may facilitate successful organizational interventions within a health care delivery system. If continuity promotes improvements in quality of care, as suggested by the results of this study, policies that promote continuity should be considered in an effort to improve the overall quality of care delivered to adult patients with diabetes.
1. Peters AL, Legorreta AP, Ossorio RC, Davidson MB. Quality of out-patient care provided to diabetic patients. Diabetes Care 1996;19:601-6.
2. Ho M, Marger M, Beart J, et al. Is the quality of diabetes care better in a diabetes clinic or a general medicine clinic? Diabetes Care 1997;20:472-5.
3. Kell SH, Drass J, Barker BR, et al. Measures of disease control in Medicare beneficiaries with diabetes mellitus. J Am Geriatr Soc 1999;47:417-22.
4. Blonde L, Dey J, Testa MA, Guthrie D. Defining and measuring quality of diabetes care. Prim Care 1999;26:841-55.
5. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med 1989;320:53-6.
6. Starfield BH, Simborg DW, Horn SD, Yourtree SA. Continuity and coordination in primary care: their achievement and utility. Med Care 1976;14:625-36.
7. Dietrich AJ, Marton KI. Does continuous care from a physician make a difference? J Fam Pract 1982;15:929-37.
8. Becker M, Drachman R, Kirscht J. Continuity of pediatrician: new support for an old shibboleth. J Pediatr 1974;84:599-605.
9. Safran DG, Taira DA, Rogers WH, et al. Linking primary care performance to outcomes of care. J Fam Pract 1998;47:213-20.
10. Flocke SA, Stange KC, Zyzanski SJ. The association of the attributes of primary care with the delivery of clinical preventive services. Med Care 1998;36:AS21-30.
11. Safran DG, Kosinski M, Tarlov AR, et al. The primary care assessment survey: tests of data quality and measurement performance. Med Care 1998;36:728-38.
12. O’Connor PJ, Desai J, Rush WA, et al. Is having a regular provider of diabetes care related to the intensity of care and glycemic control? J Fam Pract 1998;47:290-7.
13. Albright TL, Parchman M, Burge SK. Predictors of self-care behavior in adults with type 2 diabetes: an RRNeST study. Fam Med 2001;33:354-60.
14. Flocke SA. Measuring the attributes of primary care: development of a new instrument. J Fam Pract 1997;45:64-74.
15. Donabedian A. Explorations in Quality Assessment and Monitoring. Vol 1. The Definition of Quality and Approaches to Its Assessment. Ann Arbor, MI: Health Administration Press; 1980.
16. Lohr KN. Outcome measurement: concepts and questions. Inquiry 1988;25:37-50.
17. American Diabetes Association. Standards of medical care for patients with diabetes mellitus. Diabetes Care 1997;20(suppl 1):S5-13.
18. Brook RH, McGlynn EA, Cleary PD. Measuring quality of care. N Engl J Med 1996;335:966-70.
19. Starfield B. Primary Care: Concept, Evaluation and Policy. New York: Oxford University Press; 1992.
20. Mainous AG, Gill JM. The importance of continuity of care in the likelihood of future hospitalization: is site of care equivalent to a primary clinician? Am J Public Health 1998;88:1539-41.
21. Jaen CR, Stange KC, Nutting PA. Competing demands of primary care: a model for the delivery of clinical preventive services. J Fam Pract 1994;38:166-71.
22. Emanuel EJ, Dubler NN. Preserving the physician-patient relationship in the era of managed care. JAMA 1995;273:323-9.
23. Cunningham PJ, Kohn LT. Who is likely to switch health plans? Data Bulletin Number 18. Washington, DC: Center for Studying Health System Change; July 2000. Available at: http://www.hschange.com/CONTENT/263/.
OBJECTIVE: We investigated the relationship between continuity of care and the quality of care received by patients with type 2 diabetes mellitus.
STUDY DESIGN: We used a cross-sectional patient survey and medical record review.
POPULATION: Consecutive patients with an established diagnosis of type 2 diabetes mellitus presented to 1 of 6 clinics within the Residency Research Network of South Texas, a network of 6 family practice residencies affiliated with the University of Texas Health Science Center at San Antonio.
OUTCOMES MEASURED: Continuity was measured as the proportion of visits within the past year to the patient’s usual primary care provider. A quality of care score was computed based on the American Diabetes Association’s Provider Recognition Program criteria from data collected through medical record review and patient surveys. Each patient was awarded points based on the presence or absence of each criterion.
RESULTS: The continuity score was associated significantly with the quality of care score in the anticipated direction (r = .15, P = .04). Patients who had seen their usual providers within the past year were significantly more likely to have had an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis.
CONCLUSIONS: Continuity of care is associated with the quality of care received by patients with type 2 diabetes mellitus. Continuity of care may influence provider and patient behaviors in ways that improve quality. Further research on how continuity contributes to improved quality is needed.
- For patients with diabetes, continuity of care is associated with the quality of care: as continuity improves, so does the quality of care.
- Patients with diabetes who report that they have seen their usual primary care provider in the past year are more likely to have received an eye examination, a foot examination, 2 blood pressure measurements, and a lipid level analysis.
Studies of the care of adult diabetic patients in the primary care setting continue to document poor adherence to current guidelines for managing diabetes.1,2 One study of quality of care among diabetic patients in outpatient primary care offices found that Medicare patients often did not achieve recommended targets for blood glucose and lipid levels or blood pressure control and that glycosylated hemoglobin levels and cholesterol were not monitored at recommended intervals.3 As Blonde and colleagues pointed out, these variations in quality have no clear rationale or basis in scientific fact.4 Therefore, other explanations must be explored.
Berwick and others pointed out that quality of health care is determined most often by systems or processes rather than by individual behavior.5 One health care process that is important to primary care is continuity of care, or the development of a sustained relationship with a provider.6 Continuity of care is associated with favorable outcomes of care,7 including recognition of behavioral problems,8 patient adherence to physicians’ advice,9 being up to date on immunizations,10 effective communication between physician and patient, and the accumulated knowledge of the physician with regard to the patient’s history.11
In a previous study of continuity among patients with type 2 diabetes mellitus, patients with regular health care providers had improved glucose control and were more likely to have had a cholesterol measurement and influenza vaccination in the preceding year.12 These findings suggest that an understanding of the relation between continuity and quality might provide useful insights into improving the care diabetic patients receive. The purpose of this study was to examine the relation between continuity of care and the quality of care received by adult patients with type 2 diabetes mellitus.
Methods
Setting
The study was conducted at 6 clinics in 5 communities across south Texas. These clinics comprise the Residency Research Network of South Texas (RRNeST) and are in San Antonio, Corpus Christi, McAllen, Harlingen, and Laredo. The 174 family physicians at these sites serve a population that is predominantly Mexican American. A more detailed description of RRNeST has been published elsewhere.13
Participants
Patients at each site were eligible for the study if they said that they had an established diagnosis of type 2 diabetes for at least 1 year. Patients were excluded if they were younger than 18 years or pregnant. To provide adequate opportunity for continuity, patients also were excluded if they had been attending the clinic for less than 1 year. We also excluded patients who were seeing residents in their first year of training because these patients had experienced a change in their primary care provider within the past year when they were reassigned to a first-year resident.
Data collection and measures
A patient survey, offered in English or Spanish, included questions on demographics and patient satisfaction with diabetes care adapted from the Physician Recognition Program Survey, as described below. It also included questions on ambulatory health care use within the past year with the use of items from the Components of Primary Care Instrument.14 Consecutive patients who met the inclusion and exclusion criteria were asked by the office staff or their physicians to complete this survey. Patients returned the survey to staff or a survey collection box, and results were kept confidential from their physicians. Patient recruitment occurred over a 6-month period from October 1998 to March 1999.
Quality of care measurement
Quality of care measures are traditionally classified into 3 domains: structure, process, and outcomes.15 Structural measures consider whether the components of the health care delivery system are accessible and of high quality. Process indicators answer the question: Was the right thing done at the right time in the right place to the right person? An outcome measure of quality considers whether health care improves or declines as a result of the care given and includes death, disability, disease, discomfort, and dissatisfaction.16
The American Diabetes Association’s Provider Recognition Program (http://www.ncqa.org/dprp[mp1]), cosponsored by the National Committee for Quality Assurance, assessed key measures that were carefully defined and tested for their relation to improved care for people with diabetes (Table 1).17 Provider Recognition Program measures are consistent with the Diabetes Quality Improvement Project measures (see www.dqip.org), but go beyond the Diabetes Quality Improvement Project by applying performance criteria to each measure. The Provider Recognition Program includes primarily process measures (was an eye examination performed in the past year?) and 2 outcome measures (glycosylated hemoglobin and diastolic blood pressure). In addition, the Provider Recognition Program includes survey measures of patient satisfaction, which many consider the fourth domain of quality.18
Individual items from the Provider Recognition Program were obtained through a medical record abstraction for each patient who returned a completed survey. The chart abstractions were completed at each site by nurses or physicians but not by the primary care physician of the patient. A standard chart abstraction form addressed each item of the Provider Recognition Program measures.
The Provider Recognition Program patient satisfaction items were administered in the patient survey portion of the data collection and combined with the medical record data. A quality score was derived for each patient by using the Provider Recognition Program established scoring criteria, as shown in Table 1.
TABLE 1
American Diabetes Association and National Committee for Quality Assurance Provider Recognition Program measures
Measure | Frequency/patient response | Data source | Score |
---|---|---|---|
HbA1c | Once/year | Chart | 10.0 |
HbA1c < 8% | 2.5 | ||
HbA1c <10% | 2.5 | ||
Eye examination | Once/year | Chart | 10.0 |
Foot examination | Once/year | Chart | 10.0 |
BP frequency | Twice/year | Chart | 10.0 |
Diastolic 90 mm Hg | 5.0 | ||
Urine protein/microalbumin | Once/year | Chart | 10.0 |
Lipid profile | Once/year | Chart | 10.0 |
Self-management education | Once/year | Survey | 10.0 |
Nutrition counseling | Once/year | Survey | 10.0 |
Self-monitor glucose | Yes or no | Survey | |
Not on insulin | 1.0 | ||
On insulin | 4.0 | ||
Tobacco-use status and counseling if needed | Yes or no | Chart | 10.0 |
Patient satisfaction | Excellent, very good, good, fair, or poor | Survey | |
Overall DM care | 1.0 | ||
Questions answered | 1.0 | ||
Access for emergencies | 1.0 | ||
Laboratory results explained | 1.0 | ||
Courtesy/personal manner of provider | 1.0 | ||
Total | 110.0 | ||
BP, blood pressure; DM, diabetes mellitus; Hb, hemoglobin. |
Continuity measurement
Patients were asked to record the number of ambulatory physician visits to their usual provider, to another provider in the same office, or to any physicians outside of the usual provider’s office for the past 12 months. These items were adapted from the Components of Primary Care Instrument, a validated instrument for measuring the various components of primary care, including continuity.14 The responses to these questions were used to calculate a visit-based continuity of care score, the Usual Provider Continuity score. This score is calculated by dividing the number of visits to the usual provider by the total number of ambulatory visits. The continuity score ranged from 0 to 1, with a higher value representing a higher level of continuity. The Usual Provider Continuity score has been used in previous studies of continuity.19,20
Analysis
A t-test compared the quality of care mean scores between those who had and those who had not seen their usual physician in the past year. A Pearson bivariate correlation assessed the relationship between the Usual Provider Continuity score and the quality of care score. A chi-square test with odds ratios to determine the strength of the relationship evaluated the association between seeing one’s usual physician in the past year and each quality of care indicator. A 2-level regression model determined the relationship between the Usual Provider Continuity score and the quality of care score. In the first level of the model, we entered age, education, sex, total number of clinic visits, and self-rated health status. To adjust for clinic level effects on quality, a dummy variable was created for each clinic site in the first level of the regression model, with the San Antonio Family Health Center set as the default value. We entered the continuity score in the second level of the model to assess its relationship to quality of care, after adjusting for the above variables.
Results
A total of 397 patients completed surveys between November 1999 and April 2000. Each site returned an average of 66 surveys, with a range of 9 to 121. There were 76 physicians represented by these 397 patients, for an average of 5.22 patients per physician. At 1 site, only 9 surveys were returned due to a lack of adequate clinic staffing. Earlier patient surveys conducted within this network demonstrated a refusal rate of less than 20%. The mean number of physicians participating at each site was 18.3, with a range of 2 to 30; 35.6% of physicians were faculty (range by site, 0% to 100%).
Patient demographics are shown in Table 2 and are compared with the characteristics of the general adult patient population from a previous study (Sandra K. Burge, PhD, oral communication, December 2001). Most subjects were Hispanic, female, and married. Half of the sample had less than a high school education, and 36% had no health insurance. The mean Continuity and Quality of Care scores are also shown in Table 2. There were no significant differences in continuity scores across clinic sites, but 2 sites had significantly higher Quality of Care scores.
The first set of analyses compared quality of care between those who had (90.1%) and those who had not (9.9%) seen their usual providers in the past year. The overall quality of care score was significantly higher for patients who reported that they had seen their usual providers in the past year (73.0 vs 67.1, P = .038). The association between patients having seen their usual providers in the past year and each quality indicator is shown in Table 3. Patients who had seen their usual providers were significantly more likely to have had an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis in the past year.
The second set of analyses examined the relation between the continuity or Usual Provider Continuity score and quality of care. A total of 214 subjects had complete chart and survey data that allowed for calculation of Continuity and quality of care scores. The overall quality of care score was associated significantly with the Usual Provider Continuity score in the hypothesized direction (r = .148, P = .03). As continuity improved, so did quality of care. In the 2-level multiple regression model, after adjusting for age, sex, education, total number of clinic visits, self-rated general health status, and clinic site, the relations between the Usual Provider Continuity score and the quality of care score remained significant (P = .03; Table 4). Total number of visits was not associated with the quality of care score.
TABLE 2
Characteristics of sample
Characteristic | Diabetic subjects | Adult clinic population |
---|---|---|
Mean (SD) age, y | 56.15 (12.34) | 41.4 |
% Female | 68.2 | 74 |
% Hispanic | 80.5 | 80 |
% Preferred Spanish survey | 19.2 | 19 |
% Married | 54.1 | 57.0 |
% Subjects with less than high school education | 49.8 | 29 |
% Subjects without health insurance | 36.6 | 31 |
Mean (SD) Usual Provider Continuity score | 0.72 (0.31) | NA |
Mean (SD) total visits | 7.75 (6.32) | NA |
Mean (SD) quality of care score | 72.3 (14.3) | NA |
NA, not available; SD, standard deviation. |
TABLE 3
Association between individual quality indicators and a visit to usual provider in past year
OR (CI) | |
---|---|
HbA1c in past year? | 1.76 (0.81–3.84) |
Eye examination in past year? | 1.99 (1.01–4.04)* |
Foot examination in past year? | 2.62 (1.27–5.41)* |
Blood pressure reading twice in past year? | 2.51 (1.07–5.94)* |
Lipid test in past year? | 4.11 (2.02–8.38)* |
Urine protein in past year? | 1.52 (0.76–3.05) |
Self-management education in past year? | 1.60 (0.75–3.43) |
Diet education in past year? | 1.04 (0.45–2.37) |
Self-monitoring of glucose? | 1.15 (0.52–2.56) |
Tobacco status and counseling? | 0.97 (0.38–2.46) |
Very satisfied with | |
Diabetes care overall? | 1.23 (0.54–2.81) |
Diabetes questions answered? | 1.32 (0.61–2.84) |
Access during emergencies? | 1.58 (0.69–3.61) |
Explanation of laboratory results? | 1.4 (0.55–3.90) |
Courtesy/personal manner of provider? | 1.46 (0.72–2.97) |
*P < .05. | |
CI, 95% confidence interval; Hb, hemoglobin; OR, odds ratio. |
TABLE 4
Regression model: continuity score and quality of care
Variable | Standardized beta | t | P |
---|---|---|---|
Age | .13 | 1.63 | .10 |
Sex | .02 | .21 | .83 |
Education level | .11 | 1.37 | .17 |
General health status | -.01 | -.06 | .95 |
Site A | .04 | .52 | .60 |
Site B | .20 | 2.25 | .02 |
Site C | .18 | 2.01 | .05 |
Site D | .03 | .38 | .71 |
Site E | -.02 | -.20 | .84 |
Total visits | .08 | 1.05 | .30 |
Continuity score | .17 | 2.24 | .03 |
Discussion
Patients who reported that they saw their regular providers in the past year had higher Quality of Care scores. Further, continuity of care received by diabetic patients was directly related to their overall quality of care. In a closer examination of the quality indicators, patients who reported that they had seen their usual providers within the past year were more likely to have received an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis.
Why should continuity be associated with quality of care? Flocke and colleagues found that continuity was associated with accumulated knowledge of the patient by the physician as well as the coordination of care.14 These processes of care may have contributed to higher quality of care for patients with type 2 diabetes. The usual provider recognized the need for eye examinations and lipid measurements and coordinated these referrals. In another study, continuity was significantly related to patient adherence to advice about behavioral risk factors.10 In a similar fashion, continuity may have encouraged patient adherence to recommended screening tests such as referrals for eye examinations or returning for a fasting lipid measurement.
The lack of a relationship between the patients’ reports of seeing their usual providers within the past year and the other quality of care indicators is also of interest. Systems may have been established in those clinics to ensure delivery of those services regardless of whether or not patients are seen by their usual providers. For example, referral for diet education and self-monitoring of blood glucose may have been delegated to clinic staff. Some indicators, such as glycosylated hemoglobin, may be implemented at such high levels and with such low variability that there is not enough variation in the measure to detect any relation to continuity. Approximately 95% of our sample had a glycosylated hemoglobin measured within the past year on chart review.
Although the relationship between continuity and quality of care was significant, it was also fairly weak (r = .148). Other barriers may have been more important than continuity in determining the quality of care provided to patients with type 2 diabetes. For example, to improve quality of care, clinicians must keep track of multiple indicators over long periods. Many current medical record systems offer inadequate support for this function. Because this structure may vary by clinic, we included clinic sites as dummy variables in the multiple regression model. Even after adjusting for clinic site, continuity was significantly associated with quality. However, 2 clinic sites had significantly higher mean quality of care scores than did the other sites. Upon closer examination, 1 clinic site had an electronic medical record with prompts for preventive services.
Several limitations to this study must be mentioned. Recall bias is a possibility; the continuity data were based on patient recall of physician office visits over a 12-month period. This is a nonrandom sample; we enrolled a consecutive sample of consenting patients from the clinic population. Thus, this sample may have been heavily weighted with frequent attenders. Patients who were visually impaired, had low literacy skills, or had very poor health status may have declined participation in the study. We were able to collect only performance data from the primary care providers’ charts. If a patient had a blood pressure measurement or a glycosylated hemoglobin measurement recorded at another physician’s office, then the primary care chart might not be adequate to document the overall quality of care received by the patient over the past 12 months. Another limitation is the predominant use of process indicators rather than outcome indicators, such as quality of life, morbidity, or mortality, as measures of quality of care.
The cross-sectional design of the study and the limitations of data collected create the possibility that an unmeasured confounder caused the relation between continuity and quality. It is possible that patients who were more aggressive about seeking care from their usual providers were also more likely to keep appointments for eye and foot examinations. It is also possible that patients who did not see their usual providers sought care only for acute illnesses and were willing to see any available provider. If so, the competing demands of patient care during the acute care visit may have prevented the provider from obtaining the necessary laboratory tests or referrals needed to improve the quality of diabetes care.21 The setting of the study, ie, residency clinics, might have limited the generalizability of these findings to other community family physician practices. With the help of their supervising physicians, residents might have overcome competing demands of practice to attend to preventive measures, leading us to underestimate the strength of the relation between continuity and quality.
Current changes in the financing and organization of health care create significant threats to a sustained relationship between a provider and a patient.22 In a recent report from the Community Tracking Survey, 1 of 6 consumers changed insurance plans in a 1-year period. Of those, 23% also changed their usual source of care.23 Understanding how the physician–patient relationship might influence quality of care and patient outcomes may facilitate successful organizational interventions within a health care delivery system. If continuity promotes improvements in quality of care, as suggested by the results of this study, policies that promote continuity should be considered in an effort to improve the overall quality of care delivered to adult patients with diabetes.
OBJECTIVE: We investigated the relationship between continuity of care and the quality of care received by patients with type 2 diabetes mellitus.
STUDY DESIGN: We used a cross-sectional patient survey and medical record review.
POPULATION: Consecutive patients with an established diagnosis of type 2 diabetes mellitus presented to 1 of 6 clinics within the Residency Research Network of South Texas, a network of 6 family practice residencies affiliated with the University of Texas Health Science Center at San Antonio.
OUTCOMES MEASURED: Continuity was measured as the proportion of visits within the past year to the patient’s usual primary care provider. A quality of care score was computed based on the American Diabetes Association’s Provider Recognition Program criteria from data collected through medical record review and patient surveys. Each patient was awarded points based on the presence or absence of each criterion.
RESULTS: The continuity score was associated significantly with the quality of care score in the anticipated direction (r = .15, P = .04). Patients who had seen their usual providers within the past year were significantly more likely to have had an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis.
CONCLUSIONS: Continuity of care is associated with the quality of care received by patients with type 2 diabetes mellitus. Continuity of care may influence provider and patient behaviors in ways that improve quality. Further research on how continuity contributes to improved quality is needed.
- For patients with diabetes, continuity of care is associated with the quality of care: as continuity improves, so does the quality of care.
- Patients with diabetes who report that they have seen their usual primary care provider in the past year are more likely to have received an eye examination, a foot examination, 2 blood pressure measurements, and a lipid level analysis.
Studies of the care of adult diabetic patients in the primary care setting continue to document poor adherence to current guidelines for managing diabetes.1,2 One study of quality of care among diabetic patients in outpatient primary care offices found that Medicare patients often did not achieve recommended targets for blood glucose and lipid levels or blood pressure control and that glycosylated hemoglobin levels and cholesterol were not monitored at recommended intervals.3 As Blonde and colleagues pointed out, these variations in quality have no clear rationale or basis in scientific fact.4 Therefore, other explanations must be explored.
Berwick and others pointed out that quality of health care is determined most often by systems or processes rather than by individual behavior.5 One health care process that is important to primary care is continuity of care, or the development of a sustained relationship with a provider.6 Continuity of care is associated with favorable outcomes of care,7 including recognition of behavioral problems,8 patient adherence to physicians’ advice,9 being up to date on immunizations,10 effective communication between physician and patient, and the accumulated knowledge of the physician with regard to the patient’s history.11
In a previous study of continuity among patients with type 2 diabetes mellitus, patients with regular health care providers had improved glucose control and were more likely to have had a cholesterol measurement and influenza vaccination in the preceding year.12 These findings suggest that an understanding of the relation between continuity and quality might provide useful insights into improving the care diabetic patients receive. The purpose of this study was to examine the relation between continuity of care and the quality of care received by adult patients with type 2 diabetes mellitus.
Methods
Setting
The study was conducted at 6 clinics in 5 communities across south Texas. These clinics comprise the Residency Research Network of South Texas (RRNeST) and are in San Antonio, Corpus Christi, McAllen, Harlingen, and Laredo. The 174 family physicians at these sites serve a population that is predominantly Mexican American. A more detailed description of RRNeST has been published elsewhere.13
Participants
Patients at each site were eligible for the study if they said that they had an established diagnosis of type 2 diabetes for at least 1 year. Patients were excluded if they were younger than 18 years or pregnant. To provide adequate opportunity for continuity, patients also were excluded if they had been attending the clinic for less than 1 year. We also excluded patients who were seeing residents in their first year of training because these patients had experienced a change in their primary care provider within the past year when they were reassigned to a first-year resident.
Data collection and measures
A patient survey, offered in English or Spanish, included questions on demographics and patient satisfaction with diabetes care adapted from the Physician Recognition Program Survey, as described below. It also included questions on ambulatory health care use within the past year with the use of items from the Components of Primary Care Instrument.14 Consecutive patients who met the inclusion and exclusion criteria were asked by the office staff or their physicians to complete this survey. Patients returned the survey to staff or a survey collection box, and results were kept confidential from their physicians. Patient recruitment occurred over a 6-month period from October 1998 to March 1999.
Quality of care measurement
Quality of care measures are traditionally classified into 3 domains: structure, process, and outcomes.15 Structural measures consider whether the components of the health care delivery system are accessible and of high quality. Process indicators answer the question: Was the right thing done at the right time in the right place to the right person? An outcome measure of quality considers whether health care improves or declines as a result of the care given and includes death, disability, disease, discomfort, and dissatisfaction.16
The American Diabetes Association’s Provider Recognition Program (http://www.ncqa.org/dprp[mp1]), cosponsored by the National Committee for Quality Assurance, assessed key measures that were carefully defined and tested for their relation to improved care for people with diabetes (Table 1).17 Provider Recognition Program measures are consistent with the Diabetes Quality Improvement Project measures (see www.dqip.org), but go beyond the Diabetes Quality Improvement Project by applying performance criteria to each measure. The Provider Recognition Program includes primarily process measures (was an eye examination performed in the past year?) and 2 outcome measures (glycosylated hemoglobin and diastolic blood pressure). In addition, the Provider Recognition Program includes survey measures of patient satisfaction, which many consider the fourth domain of quality.18
Individual items from the Provider Recognition Program were obtained through a medical record abstraction for each patient who returned a completed survey. The chart abstractions were completed at each site by nurses or physicians but not by the primary care physician of the patient. A standard chart abstraction form addressed each item of the Provider Recognition Program measures.
The Provider Recognition Program patient satisfaction items were administered in the patient survey portion of the data collection and combined with the medical record data. A quality score was derived for each patient by using the Provider Recognition Program established scoring criteria, as shown in Table 1.
TABLE 1
American Diabetes Association and National Committee for Quality Assurance Provider Recognition Program measures
Measure | Frequency/patient response | Data source | Score |
---|---|---|---|
HbA1c | Once/year | Chart | 10.0 |
HbA1c < 8% | 2.5 | ||
HbA1c <10% | 2.5 | ||
Eye examination | Once/year | Chart | 10.0 |
Foot examination | Once/year | Chart | 10.0 |
BP frequency | Twice/year | Chart | 10.0 |
Diastolic 90 mm Hg | 5.0 | ||
Urine protein/microalbumin | Once/year | Chart | 10.0 |
Lipid profile | Once/year | Chart | 10.0 |
Self-management education | Once/year | Survey | 10.0 |
Nutrition counseling | Once/year | Survey | 10.0 |
Self-monitor glucose | Yes or no | Survey | |
Not on insulin | 1.0 | ||
On insulin | 4.0 | ||
Tobacco-use status and counseling if needed | Yes or no | Chart | 10.0 |
Patient satisfaction | Excellent, very good, good, fair, or poor | Survey | |
Overall DM care | 1.0 | ||
Questions answered | 1.0 | ||
Access for emergencies | 1.0 | ||
Laboratory results explained | 1.0 | ||
Courtesy/personal manner of provider | 1.0 | ||
Total | 110.0 | ||
BP, blood pressure; DM, diabetes mellitus; Hb, hemoglobin. |
Continuity measurement
Patients were asked to record the number of ambulatory physician visits to their usual provider, to another provider in the same office, or to any physicians outside of the usual provider’s office for the past 12 months. These items were adapted from the Components of Primary Care Instrument, a validated instrument for measuring the various components of primary care, including continuity.14 The responses to these questions were used to calculate a visit-based continuity of care score, the Usual Provider Continuity score. This score is calculated by dividing the number of visits to the usual provider by the total number of ambulatory visits. The continuity score ranged from 0 to 1, with a higher value representing a higher level of continuity. The Usual Provider Continuity score has been used in previous studies of continuity.19,20
Analysis
A t-test compared the quality of care mean scores between those who had and those who had not seen their usual physician in the past year. A Pearson bivariate correlation assessed the relationship between the Usual Provider Continuity score and the quality of care score. A chi-square test with odds ratios to determine the strength of the relationship evaluated the association between seeing one’s usual physician in the past year and each quality of care indicator. A 2-level regression model determined the relationship between the Usual Provider Continuity score and the quality of care score. In the first level of the model, we entered age, education, sex, total number of clinic visits, and self-rated health status. To adjust for clinic level effects on quality, a dummy variable was created for each clinic site in the first level of the regression model, with the San Antonio Family Health Center set as the default value. We entered the continuity score in the second level of the model to assess its relationship to quality of care, after adjusting for the above variables.
Results
A total of 397 patients completed surveys between November 1999 and April 2000. Each site returned an average of 66 surveys, with a range of 9 to 121. There were 76 physicians represented by these 397 patients, for an average of 5.22 patients per physician. At 1 site, only 9 surveys were returned due to a lack of adequate clinic staffing. Earlier patient surveys conducted within this network demonstrated a refusal rate of less than 20%. The mean number of physicians participating at each site was 18.3, with a range of 2 to 30; 35.6% of physicians were faculty (range by site, 0% to 100%).
Patient demographics are shown in Table 2 and are compared with the characteristics of the general adult patient population from a previous study (Sandra K. Burge, PhD, oral communication, December 2001). Most subjects were Hispanic, female, and married. Half of the sample had less than a high school education, and 36% had no health insurance. The mean Continuity and Quality of Care scores are also shown in Table 2. There were no significant differences in continuity scores across clinic sites, but 2 sites had significantly higher Quality of Care scores.
The first set of analyses compared quality of care between those who had (90.1%) and those who had not (9.9%) seen their usual providers in the past year. The overall quality of care score was significantly higher for patients who reported that they had seen their usual providers in the past year (73.0 vs 67.1, P = .038). The association between patients having seen their usual providers in the past year and each quality indicator is shown in Table 3. Patients who had seen their usual providers were significantly more likely to have had an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis in the past year.
The second set of analyses examined the relation between the continuity (Usual Provider Continuity) score and quality of care. A total of 214 subjects had complete chart and survey data that allowed calculation of both continuity and quality of care scores. The overall quality of care score was significantly associated with the Usual Provider Continuity score in the hypothesized direction (r = .148, P = .03): as continuity improved, so did quality of care. In the 2-level multiple regression model, after adjusting for age, sex, education, total number of clinic visits, self-rated general health status, and clinic site, the relation between the Usual Provider Continuity score and the quality of care score remained significant (P = .03; Table 4). Total number of visits was not associated with the quality of care score.
TABLE 2
Characteristics of sample
Characteristic | Diabetic subjects | Adult clinic population |
---|---|---|
Mean (SD) age, y | 56.15 (12.34) | 41.4 |
% Female | 68.2 | 74 |
% Hispanic | 80.5 | 80 |
% Preferred Spanish survey | 19.2 | 19 |
% Married | 54.1 | 57.0 |
% Subjects with less than high school education | 49.8 | 29 |
% Subjects without health insurance | 36.6 | 31 |
Mean (SD) Usual Provider Continuity score | 0.72 (0.31) | NA |
Mean (SD) total visits | 7.75 (6.32) | NA |
Mean (SD) quality of care score | 72.3 (14.3) | NA |
NA, not available; SD, standard deviation.
TABLE 3
Association between individual quality indicators and a visit to usual provider in past year
Quality indicator | OR (CI) |
---|---|
HbA1c in past year? | 1.76 (0.81–3.84) |
Eye examination in past year? | 1.99 (1.01–4.04)* |
Foot examination in past year? | 2.62 (1.27–5.41)* |
Blood pressure reading twice in past year? | 2.51 (1.07–5.94)* |
Lipid test in past year? | 4.11 (2.02–8.38)* |
Urine protein in past year? | 1.52 (0.76–3.05) |
Self-management education in past year? | 1.60 (0.75–3.43) |
Diet education in past year? | 1.04 (0.45–2.37) |
Self-monitoring of glucose? | 1.15 (0.52–2.56) |
Tobacco status and counseling? | 0.97 (0.38–2.46) |
Very satisfied with | |
Diabetes care overall? | 1.23 (0.54–2.81) |
Diabetes questions answered? | 1.32 (0.61–2.84) |
Access during emergencies? | 1.58 (0.69–3.61) |
Explanation of laboratory results? | 1.4 (0.55–3.90) |
Courtesy/personal manner of provider? | 1.46 (0.72–2.97) |
*P < .05.
CI, 95% confidence interval; Hb, hemoglobin; OR, odds ratio. |
TABLE 4
Regression model: continuity score and quality of care
Variable | Standardized beta | t | P |
---|---|---|---|
Age | .13 | 1.63 | .10 |
Sex | .02 | .21 | .83 |
Education level | .11 | 1.37 | .17 |
General health status | -.01 | -.06 | .95 |
Site A | .04 | .52 | .60 |
Site B | .20 | 2.25 | .02 |
Site C | .18 | 2.01 | .05 |
Site D | .03 | .38 | .71 |
Site E | -.02 | -.20 | .84 |
Total visits | .08 | 1.05 | .30 |
Continuity score | .17 | 2.24 | .03 |
Discussion
Patients who reported that they saw their regular providers in the past year had higher Quality of Care scores. Further, continuity of care received by diabetic patients was directly related to their overall quality of care. In a closer examination of the quality indicators, patients who reported that they had seen their usual providers within the past year were more likely to have received an eye examination, a foot examination, 2 blood pressure measurements, and a lipid analysis.
Why should continuity be associated with quality of care? Flocke and colleagues found that continuity was associated with accumulated knowledge of the patient by the physician as well as with the coordination of care.14 These processes of care may have contributed to higher quality of care for patients with type 2 diabetes: the usual provider may have recognized the need for eye examinations and lipid measurements and coordinated the appropriate referrals. In another study, continuity was significantly related to patient adherence to advice about behavioral risk factors.10 In a similar fashion, continuity may have encouraged patient adherence to recommended screening, such as keeping referral appointments for eye examinations or returning for a fasting lipid measurement.
The lack of a relationship between patients’ reports of seeing their usual providers within the past year and the other quality of care indicators is also of interest. Systems may have been established in those clinics to ensure delivery of those services regardless of whether patients were seen by their usual providers. For example, referral for diet education and self-monitoring of blood glucose may have been delegated to clinic staff. Some indicators, such as glycosylated hemoglobin testing, may be performed at such uniformly high rates that there is too little variation in the measure to detect any relation to continuity: on chart review, approximately 95% of our sample had a glycosylated hemoglobin measured within the past year.
Although the relationship between continuity and quality of care was statistically significant, it was also fairly weak (r = .148). Other factors may be more important than continuity in determining the quality of care provided to patients with type 2 diabetes. For example, to improve quality of care, clinicians must track multiple indicators over long periods, and many current medical record systems offer inadequate support for this function. Because such organizational support may vary by clinic, we included clinic site as dummy variables in the multiple regression model. Even after adjusting for clinic site, continuity remained significantly associated with quality. However, 2 clinic sites had significantly higher mean quality of care scores than did the other sites; upon closer examination, 1 clinic site had an electronic medical record with prompts for preventive services.
Several limitations of this study must be mentioned. Recall bias is a possibility because the continuity data were based on patient recall of physician office visits over a 12-month period. This was a nonrandom sample: we enrolled consecutive consenting patients from the clinic population, so the sample may have been weighted heavily toward frequent attenders. Patients who were visually impaired, had low literacy skills, or had very poor health status may have declined participation in the study. We were able to collect performance data only from the primary care providers’ charts; if a patient had a blood pressure or glycosylated hemoglobin measurement recorded at another physician’s office, the primary care chart might not adequately document the overall quality of care received over the past 12 months. Another limitation is the predominant use of process indicators, rather than outcome indicators such as quality of life, morbidity, or mortality, as measures of quality of care.
The cross-sectional design of the study and the limitations of the data collected create the possibility that an unmeasured confounder accounts for the relation between continuity and quality. It is possible that patients who were more aggressive about seeking care from their usual providers were also more likely to keep appointments for eye and foot examinations. It is also possible that patients who did not see their usual providers sought care only for acute illnesses and were willing to see any available provider. If so, the competing demands of patient care during the acute care visit may have prevented the provider from obtaining the laboratory tests or referrals needed to improve the quality of diabetes care.21 The setting of the study, ie, residency clinics, might limit the generalizability of these findings to other community family physician practices. With the help of their supervising physicians, residents might have overcome the competing demands of practice to attend to preventive measures, leading us to underestimate the strength of the relation between continuity and quality.
Current changes in the financing and organization of health care create significant threats to a sustained relationship between a provider and a patient.22 In a recent report from the Community Tracking Survey, 1 in 6 consumers changed insurance plans within a 1-year period; of those, 23% also changed their usual source of care.23 Understanding how the physician–patient relationship might influence quality of care and patient outcomes may facilitate successful organizational interventions within a health care delivery system. If continuity promotes improvements in quality of care, as the results of this study suggest, policies that promote continuity should be considered in an effort to improve the overall quality of care delivered to adult patients with diabetes.
1. Peters AL, Legorreta AP, Ossorio RC, Davidson MB. Quality of out-patient care provided to diabetic patients. Diabetes Care 1996;19:601-6.
2. Ho M, Marger M, Beart J, et al. Is the quality of diabetes care better in a diabetes clinic or a general medicine clinic? Diabetes Care 1997;20:472-5.
3. Kell SH, Drass J, Barker BR, et al. Measures of disease control in Medicare beneficiaries with diabetes mellitus. J Am Geriatr Soc 1999;47:417-22.
4. Blonde L, Dey J, Testa MA, Guthrie D. Defining and measuring quality of diabetes care. Prim Care 1999;26:841-55.
5. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med 1989;320:53-6.
6. Starfield BH, Simborg DW, Horn SD, Yourtee SA. Continuity and coordination in primary care: their achievement and utility. Med Care 1976;14:625-36.
7. Dietrich AJ, Marton KI. Does continuous care from a physician make a difference? J Fam Pract 1982;15:929-37.
8. Becker M, Drachman R, Kirscht J. Continuity of pediatrician: new support for an old shibboleth. J Pediatr 1974;84:599-605.
9. Safran DG, Taira DA, Rogers WH, et al. Linking primary care performance to outcomes of care. J Fam Pract 1998;47:213-20.
10. Flocke SA, Stange KC, Zyzanski SJ. The association of the attributes of primary care with the delivery of clinical preventive services. Med Care 1998;36:AS21-30.
11. Safran DG, Kosinski M, Tarlov AR, et al. The primary care assessment survey: tests of data quality and measurement performance. Med Care 1998;36:728-38.
12. O’Connor PJ, Desai J, Rush WA, et al. Is having a regular provider of diabetes care related to the intensity of care and glycemic control? J Fam Pract 1998;47:290-7.
13. Albright TL, Parchman M, Burge SK. Predictors of self-care behavior in adults with type 2 diabetes: an RRNeST study. Fam Med 2001;33:354-60.
14. Flocke SA. Measuring the attributes of primary care: development of a new instrument. J Fam Pract 1997;45:64-74.
15. Donabedian A. Explorations in Quality Assessment and Monitoring. Vol 1. The Definition of Quality and Approaches to Its Assessment. Ann Arbor, MI: Health Administration Press; 1980.
16. Lohr KN. Outcome measurement: concepts and questions. Inquiry 1988;25:37-50.
17. American Diabetes Association. Standards of medical care for patients with diabetes mellitus. Diabetes Care 1997;20(suppl 1):S5-13.
18. Brook RH, McGlynn EA, Cleary PD. Measuring quality of care. N Engl J Med 1996;335:966-70.
19. Starfield B. Primary Care: Concept, Evaluation and Policy. New York: Oxford University Press; 1992.
20. Mainous AG, Gill JM. The importance of continuity of care in the likelihood of future hospitalization: is site of care equivalent to a primary clinician? Am J Public Health 1998;88:1539-41.
21. Jaen CR, Stange KC, Nutting PA. Competing demands of primary care: a model for the delivery of clinical preventive services. J Fam Pract 1994;38:166-71.
22. Emanuel EJ, Dubler NN. Preserving the physician-patient relationship in the era of managed care. JAMA 1995;273:323-9.
23. Cunningham PJ, Kohn LT. Who is likely to switch health plans? Data Bulletin Number 18. Washington, DC: Center for Studying Health System Change; July 2000. Available at: http://www.hschange.com/CONTENT/263/.