Evaluation of Clerkship Structure
The third‐year pediatric clerkship at the University of Utah School of Medicine has a distinctive inpatient service, the Glasgow Service, which consists of an academic attending, a third‐year pediatric resident, and 4 third‐year medical students, but no interns. (This service was named in honor of Lowell Glasgow, chair of pediatrics, 1972‐82.) The structure was introduced in 1992 by the chair of pediatrics, Michael Simmons; the residency program director, Richard Molteni; and the clerkship director, Karen Hansen. They sought to improve students' inpatient experience by giving students greater responsibility for patient care. An additional motive was to increase the total number of patients followed by house staff without increasing the size of the residency program.
This inpatient service is a part of a 6‐week pediatric clerkship. All students perform the 3‐week inpatient portion of their clerkship at Primary Children's Medical Center, a tertiary‐care, freestanding children's hospital. (The students also spend 1 week each in a newborn nursery, an outpatient clinic, and a subspecialty setting). The academic attendings include generalists, hospitalists, and specialists who concurrently have other clinical responsibilities. The students take in‐house call every fourth night, supervised by senior residents who are not necessarily members of their service. All students share the same formal teaching activities, including morning report, a noon conference, and a student conference.
Patients are assigned to the ward services by a senior admitting resident. The admitting resident distributes patients among the services based on the complexity and acuity of the patients' conditions as well as the census on the various services. The senior resident supervising a particular service then assigns patients among the members of that service. Each third‐year medical student is expected to care for 2 or 3 patients at a time.
In addition to the intervention service, students also rotate on 2 similar traditional services. These services are traditional in the sense that each is composed of an academic attending, a community attending, a third‐year pediatric resident, 4 interns, and up to 2 fourth‐year and 2 third‐year medical students. Faculty preferences regarding service assignments were accommodated when possible; therefore, some faculty attended on only one type of service, intervention or traditional, and others attended on both types. Because they have more members and because interns are capable of caring for more patients than are medical students, the traditional services cared for more patients than the intervention service. Although identical in composition, the 2 traditional services differ from each other in several ways. One service typically admits children 3 years old and younger, whereas the other admits children between 3 and 12 years old. The service that admits older children also admits most of the hematology‐oncology patients.
Although other authors have described similar inpatient clerkship structures, to our knowledge, none have evaluated them through a prospective randomized controlled trial.1, 2 The recent literature on ambulatory experiences during third‐year clerkships provided a methodological framework for this study. Collectively, such studies have evaluated outcomes with a variety of measures, including patient logs,3–5 evaluations,3, 4, 6, 7 examinations,3–7 surveys,3, 5, 7, 8 and career choices.4, 6–8 Additional outcomes, such as the effect of educational interventions on patient care, have been emphasized.9
In light of this research, we conducted a prospective, randomized controlled trial to compare outcomes on the intervention service with those on the traditional services. We hypothesized that, compared with the traditional services, the intervention service would show:
improved process measures in terms of increased number of patients admitted, number of key diagnoses encountered in the patients cared for, and range of ages of the patients admitted;
similar or improved student performance, as measured by faculty and resident evaluations and a National Board of Medical Examiners (NBME) subject examination;
increased student satisfaction, as assessed by an end‐of‐rotation questionnaire;
increased interest in pediatric and, more broadly, primary care careers, as measured by subinternship and internship selections; and
comparable or improved resource utilization in terms of length of stay and total charges.
METHODS
All students enrolled in the third‐year pediatric rotation during the 2001‐2003 academic years were individually randomized by the clerkship assistant to the intervention service or 1 of the 2 traditional services, without respect to career preference. A 5:3 randomization ratio (intervention to traditional) was used to fulfill the requirement that 4 students be assigned to the intervention service during every 3‐week block, which permitted the students on that service to take in‐house call every fourth night.
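The paper does not report the exact block sizes or allocation mechanics, so the following is only a minimal sketch of how such a constrained, roughly 5:3 randomization could be implemented; the block of 7 students, the function name, and the alternation between the 2 traditional services are assumptions made purely for illustration.

```python
import random

def randomize_block(students, seed=None):
    """Illustrative only: assign one 3-week block of students so that exactly
    4 go to the intervention service and the remainder are split between the
    2 traditional services (roughly a 5:3 intervention:traditional ratio when
    blocks contain 6-7 students)."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    assignments = {}
    for i, student in enumerate(shuffled):
        if i < 4:
            assignments[student] = "intervention"
        else:
            # alternate the remaining students between the 2 traditional services
            assignments[student] = f"traditional {(i - 4) % 2 + 1}"
    return assignments

# Example: a hypothetical block of 7 students
print(randomize_block(["S1", "S2", "S3", "S4", "S5", "S6", "S7"], seed=1))
```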
To evaluate the adequacy of the randomization process, we obtained baseline student characteristics (age, sex, and United States Medical Licensing Examination [USMLE] Step 1 score) from the Dean of Student Affairs. The dean also reported the discipline in which each student enrolled for the required fourth‐year subinternship(s) and the discipline in which each student matched for internship. These data were reported anonymously and linked to the service to which the student was assigned. In this study, pediatrics, internal medicine, and family practice were all considered primary care; preliminary and transitional internships were not.
Process Measures
Students were required to submit logs at the end of their rotations, recording patients' names, ages, diagnoses, and admission dates. The accuracy and completeness of these logs were not independently verified.
As there was no authoritative list of key diagnoses third‐year medical students should encounter in the patients they care for during their inpatient rotations, we relied on expert opinion at our institution. The Council on Medical Student Education in Pediatrics' curriculum was not used because it did not differentiate between inpatient and ambulatory contexts. A preliminary list of 93 diagnoses was developed from the table of contents of Pediatric Hospital Medicine.10 This list was distributed to the 26 clinical faculty members in the Divisions of Pediatric Inpatient Medicine and General Pediatrics who were asked to select the 10 most important diagnoses. Surveys were numerically coded to permit 1 reminder.
The survey had a response rate of 92.3% (24 of 26 surveys). One survey was excluded because the respondent deviated significantly from the instructions. The 10 key diagnoses, with the percentage of respondents who selected each, were: asthma (100%), febrile infant (95.6%), diarrhea and dehydration (91.3%), bronchiolitis (78.2%), diabetes mellitus and diabetic ketoacidosis (60.9%), failure to thrive (56.5%), urinary tract infections (52.1%), pneumonia (47.8%), upper airway infections such as croup (43.5%), and seizures and status epilepticus (43.5%).
Two of the authors independently coded the diagnoses on the students' patient logs in terms of these 93 diagnoses. The authors were blinded to the students' service assignment. As many students reported more than 1 diagnosis, the authors prioritized primary, secondary, and tertiary diagnoses to simplify the evaluation. The most likely cause of admission was listed as the primary diagnosis. If the authors could not reconcile divergent views, a third party was consulted.
Student Performance
Students were evaluated by both the attending physician(s) and senior resident(s) using a standardized evaluation form, available from the corresponding author. The evaluation contained 18 items in 7 categories: data gathering, data recording/reporting, knowledge, data interpretation, clinical performance, professional attitudes, and professional demeanor. On each item the student was rated exceptional, above expectations, meets expectations, below expectations, unacceptable, or not observed, and a short narrative description illustrated each rating. The ratings were converted to a 5‐point scale, with exceptional scored as 5. If the evaluator marked the line between 2 ratings, the score was recorded as the intervening half point. When multiple attendings or residents evaluated a student, the scores for a given item were collapsed into an average score.
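The scoring procedure itself is not published; the sketch below illustrates how the conversion and averaging described above could be carried out. The rating labels come from the form described in the text, while the function name, the tuple convention for a mark between 2 ratings, and the decision to skip "not observed" ratings are assumptions for illustration.

```python
# Minimal sketch: convert categorical ratings to the 5-point scale described
# above and average across evaluators, skipping "not observed" ratings.
RATING_SCALE = {
    "unacceptable": 1,
    "below expectations": 2,
    "meets expectations": 3,
    "above expectations": 4,
    "exceptional": 5,
}

def item_score(ratings):
    """Average one item's ratings across evaluators; a mark between 2 adjacent
    ratings (given here as a tuple) is scored as the intervening half point."""
    scores = []
    for rating in ratings:
        if rating == "not observed":
            continue  # assumption: "not observed" does not enter the average
        if isinstance(rating, tuple):  # e.g., ("meets expectations", "above expectations")
            scores.append(sum(RATING_SCALE[r] for r in rating) / 2)
        else:
            scores.append(RATING_SCALE[rating])
    return sum(scores) / len(scores) if scores else None

# Example: two evaluators, one marking between "above expectations" and "exceptional"
print(item_score(["above expectations", ("above expectations", "exceptional")]))  # 4.25
```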
Students also completed an NBME pediatric subject examination on the last day of their rotation.
Additionally, students were asked to complete a questionnaire during the final week of the clerkship. The items on the questionnaire were meant to assess students' perceptions of the quality of their attendings' and residents' teaching, a potentially confounding variable. The survey was piloted on a group of similar subjects. Informed consent was obtained for survey completion. The survey was anonymous and required approximately 7 minutes to complete.
Resource Utilization
Last, resource utilization data (length of stay and total charges) for the 4 most common primary diagnoses were compared between the intervention and the traditional services. The 4 most common primary diagnoses, with the percentage of total diagnoses (n = 2047) each represents, were bronchiolitis, 13%; febrile infant, 8.6%; pneumonia, 7.1%; and asthma, 6.5% (the diagnosis "other" accounted for 12% of the total diagnoses). Unique patient identifiers were used to obtain length of stay and total charges from the hospital's database. All‐Patient‐Refined Diagnosis‐Related Group Severity of Illness (APR‐DRG‐SOI) classifications were also obtained and used to construct multivariate models. Patients admitted to the pediatric intensive care unit (PICU) were excluded from the analysis.
Statistical Analysis
Statistical analyses, including frequencies and percentages, were conducted using Stata SE version 8.0 (StataCorp, College Station, TX). For all interval‐ and ratio‐scaled variables, distributions were tested for normality using the Shapiro‐Wilk test to determine whether to use parametric or nonparametric statistical tests. For distributions meeting the normality assumption, the unpaired t test was used to compare the intervention service with the traditional services; where the normality assumption was not met, the Mann‐Whitney test was used. Categorically scaled data were compared using Pearson's chi‐square test. Standardized mean differences, reported as d values, were calculated to determine effect sizes; small, medium, and large effect sizes were defined as d values of 0.20, 0.50, and 0.80, respectively.11 Teaching quality, an effect modifier, was entered as a covariate into a linear regression model. Analyses of length of stay and total charges were conducted using multivariate linear regression controlling for patient age and severity of illness.
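Although the analyses were run in Stata, the decision rule described above (Shapiro‐Wilk to choose between the unpaired t test and the Mann‐Whitney test, plus a standardized mean difference for the effect size) can be summarized in a short sketch. This is illustrative only; the function name and the simulated data are not the study's.

```python
# Illustrative sketch of the test-selection logic described above: Shapiro-Wilk
# chooses parametric vs. nonparametric, and Cohen's d is computed from the
# pooled standard deviation.
import numpy as np
from scipy import stats

def compare_groups(intervention, traditional, alpha=0.05):
    normal = (stats.shapiro(intervention)[1] > alpha and
              stats.shapiro(traditional)[1] > alpha)
    if normal:
        test_name, p_value = "unpaired t test", stats.ttest_ind(intervention, traditional)[1]
    else:
        test_name, p_value = "Mann-Whitney", stats.mannwhitneyu(intervention, traditional)[1]
    # Cohen's d: difference in means divided by the pooled standard deviation
    n1, n2 = len(intervention), len(traditional)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(intervention, ddof=1) +
                         (n2 - 1) * np.var(traditional, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(intervention) - np.mean(traditional)) / pooled_sd
    return test_name, p_value, d

# Hypothetical example with simulated scores for the two services
rng = np.random.default_rng(0)
print(compare_groups(rng.normal(73, 8, 125), rng.normal(72, 8, 71)))
```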
This study was approved by the University of Utah and Primary Children's Medical Center's Institutional Review Board.
RESULTS
Two hundred and three students enrolled in the third‐year pediatric clerkship during the study period, and all students completed the clerkship on their assigned services. One hundred and twenty‐eight were randomized to the intervention service and 75 to the traditional services. There were no statistically significant differences in median age, percentage of male students, or mean USMLE Step 1 score between the students randomized to the intervention service and those randomized to the traditional services (Table 1).
| | Intervention service | Traditional services | P value |
|---|---|---|---|
| Age (median) | 28 | 28 | .76* |
| Sex (% male) | 58.6 | 62.7 | .57 |
| USMLE Step 1 score (mean) | 217 | 217 | .94 |
Process Measures
Overall, 96.6% of students (196 of 203) submitted patient logs; 97.7% of students (125 of 128) on the intervention service and 94.7% of students (71 of 75) on the traditional services. The students on the intervention service admitted a median of 10 patients, whereas the students on the traditional services admitted a median of 11 patients (d = 0.45, P < .01). Age data were recorded on 137 patient logs (69.9% of submitted logs, 72.0% of students on the intervention service vs. 66.2% of students on the traditional services). The percentage of students who saw at least 1 newborn (birth‐23 months), child (2‐12 years), and adolescent (12‐18 years) was 34.8% on the intervention service and 33.3% on the traditional services (P = .87) (Table 2).
| | Intervention service | Traditional services | d | P value |
|---|---|---|---|---|
| Median number of patients | 10 | 11 | 0.45 | < .01* |
| Percent of students who saw at least 1 newborn, child, and adolescent | 34.8% | 33.3% | 0.03 | .87 |
| Top 10 diagnoses cared for (n) | 4.4 | 3.6 | 0.48 | < .01 |
| Percent of patients cared for whose diagnoses were in top 10 | 59.3% | 46.8% | 0.62# | < .01 |
| Percent of unique diagnoses (median) | 80.0% | 80.0% | 0.02 | .62 |
Students on the intervention service encountered, on average, a larger number of the 10 key diagnoses (4.4 vs. 3.6, d = 0.48, P < .01), and a higher percentage of their patients had clinical conditions among the key diagnoses (59.3% vs. 46.8%, d = 0.62, P < .01). To determine if this higher percentage was the result of admitting multiple patients with the same diagnosis, we examined the percentage of unique primary diagnoses (the number of different primary diagnoses divided by the total number of patients) and found no differences (Table 2).
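As a concrete illustration of this metric (not taken from the study data), a hypothetical log of 5 patients with 4 distinct primary diagnoses yields the 80% figure reported as the median for both services:

```python
# Hypothetical patient log, for illustration only: 5 patients, 4 distinct primary diagnoses.
primary_diagnoses = ["asthma", "asthma", "bronchiolitis", "febrile infant", "pneumonia"]
percent_unique = 100 * len(set(primary_diagnoses)) / len(primary_diagnoses)
print(f"{percent_unique:.0f}% unique primary diagnoses")  # prints "80% unique primary diagnoses"
```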
Student Performance
The faculty and resident evaluations of the students showed statistically significant differences between the intervention service and the traditional services on only 2 of the 18 items: analysis, in the data interpretation category (3.81 vs. 3.64, d = 0.35, P = .02), and patient interaction, in the professional demeanor category (3.89 vs. 3.76, d = 0.31, P < .05). Both differences favored the intervention service. There were no statistically significant differences by service in student performance on the NBME subject examination (73.2 vs. 72.3, P = .39).
Student Satisfaction
Overall, 87.2% of students (177 of 203) completed the survey: 87.5% of students (112 of 128) on the intervention service and 86.7% of students (65 of 75) on the traditional services. The students on the intervention service had a more positive overall attitude toward their rotation and were more likely to find it a satisfying educational experience. They also reported greater participation in patient care. Effect sizes ranged from small to medium (Table 3). The internal consistency of answers about participation in patient care was high (Pearson correlation coefficient r = 0.80).
| Item (scale) | Intervention service | Traditional services | d | P value |
|---|---|---|---|---|
| My overall attitude toward this rotation is: 1. highly negative to 5. highly positive | 4.48 | 4.26 | 0.26 | .02* |
| I found this rotation a satisfying educational experience: 1. strongly disagree to 5. strongly agree | 4.49 | 4.22 | 0.35 | < .01* |
| My role on this rotation was that of an: 1. observer, 3. participant, 5. director | 3.77 | 3.33 | 0.60# | < .01 |
| My supervising interns/residents were _____ teachers: 1. poor, 3. good, 5. exemplary | 3.91 | 3.75 | 0.17 | .26* |
| My input into patient care decisions was: 1. strongly discouraged to 5. strongly encouraged | 4.45 | 3.98 | 0.66# | < .01* |
| I was able to make a significant contribution to patient care: 1. strongly disagree to 5. strongly agree | 4.19 | 3.92 | 0.34 | .02* |
| I had direct responsibility for patient care: 1. strongly disagree to 5. strongly agree | 4.33 | 3.95 | 0.46 | .01* |
| My attendings were _____ teachers: 1. poor, 3. good, 5. exemplary | 4.09 | 3.75 | 0.40 | < .01* |
| I found the feedback I received during this rotation to be: 1. insufficient, 3. appropriate, 5. excessive | 2.84 | 2.65 | 0.22 | .17* |
| The following best describes the quality of my supervision during this rotation: 1. I was expected to do things beyond my competence unsupervised, 3. The degree of supervision was appropriate for my level of training, 5. I was excessively supervised on skills I had already demonstrated | 2.95 | 3.06 | 0.18 | .19 |
| During this rotation: 1. I was expected to see too many patients, 3. I was expected to see an appropriate number of patients, 5. I expected to see more patients | 3.46 | 3.31 | 0.18 | .33* |
| Before this rotation I _____ pediatrics as a career choice: 1. had rejected, 3. was considering, 5. had decided on | 2.37 | 2.14 | 0.22 | .11* |
| This rotation increased my interest in pursuing pediatrics as a career: 1. strongly disagree to 5. strongly agree | 3.74 | 3.60 | 0.14 | .32* |
Students on the intervention service rated the teaching of their attendings, but not of their residents, higher than did students on the traditional services. Controlling for the perceived quality of the attendings' teaching, 3 of 6 satisfaction outcomes remained statistically significant: role on rotation (P < .01), input into patient care decisions (P < .01), and direct responsibility for patient care (P = .04). Students on both services believed they were appropriately supervised (P = .19). Although students on the traditional services admitted more patients on average, there was no significant difference by service in students' ratings of patient load (P = .33).
Career Choice
The odds ratio (95% confidence interval) for students on the intervention service enrolling in a pediatric subinternship was 1.94 (0.83‐4.49), and for matching into a pediatric residency it was 2.52 (0.99‐6.37). There were no statistically significant differences by service in the percentage of students enrolling in primary care (pediatrics, internal medicine, or family practice) subinternships or residencies (Table 4).
| | Intervention service | Traditional services | Odds ratio (95% CI) |
|---|---|---|---|
| Pediatric subinternship | 19.5% | 11.1% | 1.94 (0.83‐4.49) |
| Primary care subinternship | 68.3% | 70.8% | 0.89 (0.47‐1.67) |
| Pediatric residency | 18.6% | 8.3% | 2.52 (0.99‐6.37) |
| Primary care residency | 40.7% | 31.9% | 1.46 (0.79‐2.70) |
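As a consistency check rather than part of the original analysis, the point estimates in Table 4 can be reproduced directly from the reported proportions; the confidence intervals would require the underlying counts, which are not given.

```python
# Reproduce the Table 4 odds ratios from the reported proportions.
def odds_ratio(p_intervention, p_traditional):
    return (p_intervention / (1 - p_intervention)) / (p_traditional / (1 - p_traditional))

print(round(odds_ratio(0.195, 0.111), 2))  # pediatric subinternship -> 1.94
print(round(odds_ratio(0.186, 0.083), 2))  # pediatric residency -> 2.52
```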
Resource Utilization
One hundred and thirty‐five patients were excluded from the resource utilization analysis because their unique identifiers could not be found or because they had been admitted to the PICU, leaving 594 patients. Univariate analysis demonstrated statistically significant differences favoring the intervention service for patients with asthma, but not for patients with bronchiolitis, febrile infants, or patients with pneumonia. Patients with asthma admitted to the intervention service had a shorter length of stay (49.9 vs. 70.1 hours, P = .02) and lower total charges ($3600 vs. $4600, P = .02), as shown in Table 5. In the 4 multivariate models controlling for age and severity of illness, each with length of stay and total charges as the dependent variables, only the length of stay for patients with asthma was significantly shorter on the intervention service. Such patients were discharged an average of 23.3 hours earlier than patients with asthma admitted to the traditional services (P = .02).
| Diagnosis (total n) | n, intervention | n, traditional | Length of stay, intervention (hours) | Length of stay, traditional (hours) | P value | Total charges, intervention | Total charges, traditional | P value |
|---|---|---|---|---|---|---|---|---|
| Bronchiolitis (210) | 159 | 51 | 63.7 | 70.5 | .20* | $4300 | $4800 | .20* |
| Febrile infant (152) | 105 | 47 | 58.8 | 58.9 | .50* | $4800 | $4900 | .28* |
| Pneumonia (123) | 82 | 41 | 84.3 | 116.8 | .71* | $6300 | $9200 | .63* |
| Asthma (109) | 80 | 29 | 49.9 | 70.1 | .02* | $3600 | $4600 | .02* |
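The multivariable adjustment described above was performed by the authors in Stata; the sketch below shows the same kind of model (length of stay regressed on service, age, and APR‐DRG‐SOI) in Python. The toy data and column names are invented for illustration and are not the study data.

```python
# Minimal sketch of a length-of-stay model adjusted for age and severity of
# illness; synthetic data, with column names assumed for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "los_hours":   [48, 55, 60, 72, 66, 80, 90, 52, 70, 85],
    "service":     [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],   # 1 = intervention, 0 = traditional
    "age_years":   [3, 5, 2, 7, 4, 6, 3, 8, 5, 2],
    "apr_drg_soi": [1, 1, 2, 2, 1, 2, 3, 1, 2, 3],   # severity-of-illness category
})

model = smf.ols("los_hours ~ service + age_years + C(apr_drg_soi)", data=df).fit()
# The coefficient on `service` is the adjusted difference in hours between
# services (the study reports a 23.3-hour shorter stay for asthma patients).
print(model.params["service"], model.pvalues["service"])
```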
DISCUSSION
This study's objective was to evaluate, using multiple outcome measures, a third‐year pediatric clerkship structure that focuses on students. Using a randomized controlled design, this study demonstrated that the intervention service was more successful than the traditional services on several outcomes. Students assigned to the intervention service were more satisfied and more likely to select pediatrics as a career. These improvements were accomplished while maintaining process measures, student performance, and resource utilization similar to those of the traditional services.
Methods
The methods used in this study compare favorably with those of other evaluations of educational interventions. The present study incorporated a randomized controlled design.12 Although several studies of ambulatory clerkships used a randomized design, few randomized all eligible students;7, 8 the others used some form of selection prior to randomization. For example, in the Pangaro et al. study, students selected their clerkship site by lottery, and students who selected a certain site were then offered the opportunity to participate in the intervention.6 The present study had several additional strengths. Multiple outcomes, including effects on patient care, were evaluated. Moreover, this study had a relatively large intervention group and total sample size compared with those of other medical education studies. Finally, because the intervention service had been in place for several years prior to its evaluation, the confounding influence of implementation difficulties was minimized.
Results
Few studies of ambulatory experiences have demonstrated statistically significant, let alone clinically significant, results. Most showed no statistically significant differences in student evaluations or examination scores; an exception is Grum et al., who showed improvements on 3 of 5 examinations.4 A few studies have found improved student satisfaction.3 None of the randomized controlled trials demonstrated increases in students matching into internal medicine or primary care residencies.4, 6–8 In contrast, this study produced statistically or programmatically significant results in process measures, evaluations, satisfaction, and career choices.
Several of our specific findings deserve additional comment. Although the admitting residents were instructed to assign patients to the intervention service based on acuity and complexity, it is important to examine these residents' actual behavior. Several of our hypotheses were not confirmed. The students on the intervention service admitted fewer patients and were no more likely to see at least 1 patient in each age category. The admitting resident may have limited the number of patients admitted to the intervention service based on the workload of the supervising resident, not that of the students. The supervising resident on the intervention service must round on all of the patients, whereas on the traditional services the oversight of patients seen by students is shared with the interns. Having the attending on the intervention service share this supervising responsibility might improve this outcome.
Students on the intervention service had more positive attitudes toward the rotation. In addition, potentially negative attitudes were not manifest. For example, it might be argued that third‐year medical students are not prepared to bear this increased responsibility. However, there was not a significant difference in students' perception of the quality of supervision or the workload.
Although the goal of medical education is the production of competent physicians, it is important that the process not place undue burdens on patients and the health care system. Univariate analysis showed similar resource utilization. It might be contended that the admitting resident assigned the intervention service patients who were less acutely ill; therefore, we performed multivariate analyses using APR‐DRG‐SOI to control for severity of illness. Of 8 comparisons, the only statistically significant difference, the length of stay of patients with asthma, favored the intervention service.
Limitations
Although this study had numerous strengths, it also had several limitations. The primary limitations were lack of generalizability, difficulty in obtaining authentic assessments, the potential difference between statistical and educational significance, and inability to identify which components of the intervention service were responsible for the outcomes. This study's findings may not be generalizable to other institutions. For example, institutions without age- or organ system-based teams may not observe increases in the number of key diagnoses encountered in the patients cared for. Regarding the assessments, there may be better measures of clinical competence, such as an objective structured clinical examination (OSCE),13 than those used in this study. However, there were not sufficient resources to implement an OSCE at the end of the rotation.
Some might question whether the statistically significant differences have educational significance. Although that is an important concern, this study should be compared with other educational interventions that found few statistically significant, let alone educationally significant, differences. To address this concern, we calculated effect sizes. The differences in student satisfaction were small to moderate. Although the lower limit of the 95% confidence interval of the odds ratio for matching in a pediatric residency was 0.99, the magnitude was programmatically important.
Finally, this study was an evaluation of an existing program. The authors were unable to control some potential confounders, including patient allocation, average daily census, and quality of teaching. For example, Griffith and colleagues have shown that working with the best teachers improves student performance.14 We were not able to randomly assign the faculty among the services, and an unequal distribution of better teachers could have biased this study's outcomes. The students on the intervention service rated their attendings, but not their residents, higher than did the students on the other services. However, the linear regression model showed that the perceived quality of the attending did not account for all the differences in student satisfaction. It was not possible to control for this factor in comparing student performance or subinternship or residency selection because the survey, which included the students' ratings of their attendings' teaching, was anonymous and therefore could not be linked to the other data sets.
The perceived differences in the quality of teaching may not have been the result of differences in the attendings but instead of differences in the structure of the services. Accessibility is one of the characteristics of excellent clinical teachers.15 The intervention structure may permit faculty to spend more time with students, and this may increase the perceived quality of the teaching. However, it is not possible to resolve this issue with the available data.
CONCLUSIONS
The intervention service is a structure for the pediatric inpatient rotation of third‐year medical students that, instead of dividing the faculty's and supervising resident's attention between interns and students, focuses their attention on the students. Although it has generally been difficult to demonstrate improvements from educational interventions, we have shown several improvements in the evaluations of the students. Moreover, the pattern of increased student satisfaction and a tendency toward more students selecting careers in pediatrics are remarkable. This was accomplished with similar resource utilization. Therefore, this program merits being continued at our institution and possibly adopted at other medical schools. Further research is needed to determine which aspects of the intervention are responsible for its effects. Some components, such as focused time with students, may be applicable to traditional services.
Acknowledgements
The authors thank Ronald Bloom for encouraging us to conduct this study; Kathy Bailey, Alice Dowling, and Margie Thompson for their assistance in the data collection; and Elizabeth Allen, Ronald Bloom, Flory Nkoy, Louis Pangaro, Stephanie Richardson, and Rajendu Srivastava for manuscript review.
References
1. The role of the student ward in the medical clerkships. J Med Educ. 1985;60:524–529.
2. Changing the fourth‐year medicine clerkship structure: a successful model for a teaching service without housestaff. J Gen Intern Med. 1993;8:31–32.
3. A randomized, controlled pilot study of placing third‐year medical clerks in a continuity clinic. Acad Med. 1993;68:845–847.
4. Consequences of shifting medical‐student education to the outpatient setting: effects on performance and experiences. Acad Med. 1996;71(suppl 1):S99–S101.
5. Learning outcomes of an ambulatory care rotation in internal medicine for junior medical students. J Gen Intern Med. 1993;8:189–192.
6. A prospective, randomized trial of a six‐week ambulatory medicine rotation. Acad Med. 1995;70:537–541.
7. Ambulatory versus inpatient rotations in teaching third‐year students internal medicine. J Gen Intern Med. 1998;13:327–330.
8. The effect of an ambulatory internal medicine rotation on students' career choices. Acad Med. 1997;72:147–149.
9. Theme issue on medical education: call for papers. JAMA. 2005;293:742.
10. Perkin RM, Swift JD, Newton DA, eds. Pediatric Hospital Medicine: Textbook of Inpatient Management. Philadelphia: Lippincott Williams & Wilkins; 2003.
11. Call for greater emphasis on effect‐size measures in published articles in Teaching and Learning in Medicine. Teach Learn Med. 2002;14:206–210.
12. Educational research and randomised trials. Med Educ. 2002;36:1002–1003.
13. The objective structured clinical examination. Arch Pediatr Adolesc Med. 2000;154:736–741.
14. Relationship of how well attending physicians teach to their students' performances and residency choices. Acad Med. 1997;72(suppl 1):S118–S120.
15. Assessing quality and costs of education in the ambulatory setting: a review of the literature. Acad Med. 2002;77:621–680.
In light of this research, we conducted a prospective, randomized controlled trial to compare outcomes on the intervention service with those on the traditional services. We hypothesized that, compared with the traditional services, the intervention service would show:
improved process measures, in terms of an increased number of patients admitted, a greater number of key diagnoses encountered in the patients cared for, and a wider range of ages of the patients admitted;
similar or improved student performance, as measured by faculty and resident evaluations and a National Board of Medical Examiners (NBME) subject examination;
increased student satisfaction, as assessed by an end‐of‐rotation questionnaire;
increased interest in pediatric and, more broadly, primary care careers, as measured by subinternship and internship selections; and
comparable or improved resource utilization in terms of length of stay and total charges.
METHODS
All students enrolled in the third‐year pediatric rotation during the 2001‐2003 academic years were individually randomized by the clerkship assistant to the intervention service or 1 of the 2 traditional services without respect to career preference. A 5:3 student randomization ratio was used to fulfill the requirement that 4 students be assigned to the intervention service during every 3‐week block. This permitted the service to have call every fourth night.
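The allocation scheme can be illustrated with a short sketch of blocked randomization at a 5:3 ratio. This is not the clerkship's actual procedure (assignments were made by the clerkship assistant), and the block size of 8 and the function below are assumptions for illustration only.

```python
import random

def randomize_block(students, ratio=(5, 3), seed=None):
    """Randomly allocate one block of students at an intervention:traditional
    ratio of 5:3. A sketch only; the study's assignments were made manually."""
    n_int, n_trad = ratio
    if len(students) != n_int + n_trad:
        raise ValueError("block size must equal the sum of the ratio")
    labels = ["intervention"] * n_int + ["traditional"] * n_trad
    rng = random.Random(seed)
    rng.shuffle(labels)  # random order within the block preserves the 5:3 ratio
    return dict(zip(students, labels))

# Example: one hypothetical block of 8 students
block = [f"student_{i}" for i in range(1, 9)]
print(randomize_block(block, seed=2001))
```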
To evaluate the adequacy of the randomization process, we obtained baseline student characteristics (age, sex, and United States Medical Licensing Examination [USMLE] Step 1 score) from the Dean of Student Affairs. The dean also reported the discipline in which each student enrolled for the required fourth‐year subinternship(s) and the discipline in which each student matched for internship. These data were reported anonymously and linked to the service to which the student was assigned. In this study, pediatrics, internal medicine, and family practice were all considered primary care; preliminary and transitional internships were not.
Process Measures
Students were required to submit logs at the end of their rotations, recording patients' names, ages, diagnoses, and admission dates. The accuracy and completeness of these logs were not independently verified.
As there was no authoritative list of key diagnoses third‐year medical students should encounter in the patients they care for during their inpatient rotations, we relied on expert opinion at our institution. The Council on Medical Student Education in Pediatrics' curriculum was not used because it did not differentiate between inpatient and ambulatory contexts. A preliminary list of 93 diagnoses was developed from the table of contents of Pediatric Hospital Medicine.10 This list was distributed to the 26 clinical faculty members in the Divisions of Pediatric Inpatient Medicine and General Pediatrics who were asked to select the 10 most important diagnoses. Surveys were numerically coded to permit 1 reminder.
The survey had a response rate of 92.3% (24 of 26 surveys). One survey was excluded because the respondent significantly deviated from the instructions. The 10 key diagnoses and the percentages of respondents who selected each individual diagnosis are: asthma (100%), febrile infant (95.6%), diarrhea and dehydration (91.3%), bronchiolitis (78.2%), diabetes mellitus and diabetic ketoacidemia (60.9%), failure to thrive (56.5%), urinary tract infections (52.1%), pneumonia (47.8%), upper airway infections such as croup (43.5%), and seizures and status epilepticus (43.5%).
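The selection of the key diagnoses amounts to a simple vote tally: each respondent chooses 10 diagnoses from the 93-item list, and the 10 most frequently chosen become the key diagnoses. A minimal sketch of that tally, using made-up responses rather than the actual survey data:

```python
from collections import Counter

def key_diagnoses(responses, k=10):
    """Tally each respondent's chosen diagnoses and return the k most
    frequently selected, with the fraction of respondents choosing each."""
    votes = Counter(dx for chosen in responses for dx in chosen)
    n = len(responses)
    return [(dx, votes[dx] / n) for dx, _ in votes.most_common(k)]

# Toy example with 3 made-up respondents (real respondents chose 10 each)
responses = [
    {"asthma", "bronchiolitis", "febrile infant"},
    {"asthma", "pneumonia", "febrile infant"},
    {"asthma", "seizures", "diarrhea and dehydration"},
]
print(key_diagnoses(responses, k=3))
```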
Two of the authors independently coded the diagnoses on the students' patient logs in terms of these 93 diagnoses, blinded to the students' service assignments. Because many students reported more than 1 diagnosis, the authors designated primary, secondary, and tertiary diagnoses to simplify the evaluation; the most likely cause of admission was listed as the primary diagnosis. When the 2 coders could not reconcile divergent codings, a third party was consulted.
Student Performance
Students were evaluated by both the attending physician(s) and the senior resident(s) using a standardized evaluation form available from the corresponding author. The evaluation contained 18 items in 7 categories: data gathering, data recording/reporting, knowledge, data interpretation, clinical performance, professional attitudes, and professional demeanor. For each item, the student was rated exceptional, above expectations, meets expectations, below expectations, unacceptable, or not observed, with a short narrative description illustrating each rating. The ratings were converted to a 5‐point scale, with exceptional scored as 5; if the evaluator marked the line between 2 ratings, the intermediate half point was recorded. When multiple attendings or residents evaluated a student, the scores for a given item were averaged.
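The scoring rule can be illustrated with a small sketch. The mapping and function below are hypothetical, but they follow the conversion just described (exceptional = 5, half points for marks between two ratings, averaging across evaluators, and dropping "not observed").

```python
SCALE = {
    "unacceptable": 1,
    "below expectations": 2,
    "meets expectations": 3,
    "above expectations": 4,
    "exceptional": 5,
}

def item_score(ratings):
    """Average one evaluation item across evaluators.

    Ratings may be the named anchors above or half-point values (e.g., 3.5)
    when an evaluator marked the line between two ratings; 'not observed'
    ratings are dropped from the average.
    """
    numeric = [SCALE.get(r, r) for r in ratings if r != "not observed"]
    return sum(numeric) / len(numeric) if numeric else None

print(item_score(["above expectations", 3.5, "exceptional", "not observed"]))
# prints 4.166..., the averaged item score for this hypothetical student
```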
Students also completed an NBME pediatric subject examination on the last day of their rotation.
Additionally, students were requested to complete a questionnaire during the final week of the clerkship. The items on the questionnaire were designed to assess students' perceptions of the quality of their attendings' and residents' teaching, a potentially confounding variable. The survey was piloted on a group of similar subjects. Informed consent was obtained for survey completion. The survey was anonymous and required approximately 7 minutes to complete.
Resource Utilization
Last, resource utilization data (length of stay and total charges) for the 4 most common primary diagnoses were compared between the intervention and the traditional services. The 4 most common primary diagnoses, and the percentage of total diagnoses (n = 2047) that each represented, were bronchiolitis, 13%; febrile infant, 8.6%; pneumonia, 7.1%; and asthma, 6.5% (the diagnosis "other" accounted for 12% of the total diagnoses). Unique patient identifiers were used to obtain length of stay and total charges from the hospital's database. All‐Patient‐Refined Diagnosis‐Related Group Severity of Illness (APR‐DRG‐SOI) scores were also obtained and used to construct multivariate models. Patients who were admitted to the pediatric intensive care unit (PICU) were excluded from the analysis.
Statistical Analysis
Statistical analyses were conducted, and frequencies and percentages calculated, using Stata SE version 8.0 (StataCorp, College Station, TX). For all interval‐ and ratio‐scaled variables, distributions were tested for normality using the Shapiro‐Wilk test to determine whether to use parametric or nonparametric statistical tests. For distributions meeting the normality assumption, the unpaired t test was used to compare the intervention service with the traditional services; where the normality assumption was not met, the Mann‐Whitney test was used. Categorically scaled data were compared using Pearson's chi‐square test. Standardized mean differences, reported as d values, were calculated to determine effect sizes; small, medium, and large effect sizes were defined as d values of 0.20, 0.50, and 0.80, respectively.11 Teaching quality, a potential effect modifier, was entered as a covariate into a linear regression model. Analyses of length of stay and total charges were conducted using multivariate linear regression controlling for patient age and severity of illness.
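A minimal sketch of this decision rule and the effect-size calculation, using SciPy and NumPy in place of Stata (the study's analyses were run in Stata; the function names and the example data below are assumptions, not the study data):

```python
import numpy as np
from scipy import stats

def compare_groups(intervention, traditional, alpha=0.05):
    """Use the unpaired t test when both groups pass the Shapiro-Wilk
    normality test, otherwise the Mann-Whitney test; return name and P."""
    _, p_int = stats.shapiro(intervention)
    _, p_trad = stats.shapiro(traditional)
    if p_int > alpha and p_trad > alpha:
        _, p = stats.ttest_ind(intervention, traditional)
        return "unpaired t test", p
    _, p = stats.mannwhitneyu(intervention, traditional)
    return "Mann-Whitney", p

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical usage with simulated (not actual) NBME scores
rng = np.random.default_rng(0)
scores_int = rng.normal(73.2, 8, 128)
scores_trad = rng.normal(72.3, 8, 75)
print(compare_groups(scores_int, scores_trad), cohens_d(scores_int, scores_trad))
```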
This study was approved by the University of Utah and Primary Children's Medical Center's Institutional Review Board.
RESULTS
Two hundred and three students enrolled in the third‐year pediatric clerkship during the study period, and all students completed the clerkship on their assigned services. One hundred and twenty‐eight were randomized to the intervention service and 75 to the traditional services. There were no statistically significant differences in median age, percentage of male students, or mean USMLE Step 1 score between the students randomized to the intervention service and those randomized to the traditional services (Table 1).
Table 1. Baseline characteristics of students by service

| | Intervention service | Traditional services | P value |
|---|---|---|---|
| Age (median) | 28 | 28 | .76* |
| Sex (% male) | 58.6 | 62.7 | .57 |
| USMLE Step 1 score (mean) | 217 | 217 | .94 |
Process Measures
Overall, 96.6% of students (196 of 203) submitted patient logs; 97.7% of students (125 of 128) on the intervention service and 94.7% of students (71 of 75) on the traditional services. The students on the intervention service admitted a median of 10 patients, whereas the students on the traditional services admitted a median of 11 patients (d = 0.45, P < .01). Age data were recorded on 137 patient logs (69.9% of submitted logs, 72.0% of students on the intervention service vs. 66.2% of students on the traditional services). The percentage of students who saw at least 1 newborn (birth‐23 months), child (2‐12 years), and adolescent (12‐18 years) was 34.8% on the intervention service and 33.3% on the traditional services (P = .87) (Table 2).
Table 2. Process measures by service

| | Intervention service | Traditional services | d | P value |
|---|---|---|---|---|
| Median number of patients | 10 | 11 | 0.45 | < .01* |
| Percent of students who saw at least 1 newborn, child, and adolescent | 34.8% | 33.3% | 0.03 | .87 |
| Top 10 diagnoses cared for (n) | 4.4 | 3.6 | 0.48 | < .01 |
| Percent of patients cared for whose diagnoses were in top 10 | 59.3% | 46.8% | 0.62# | < .01 |
| Percent of unique diagnoses (median) | 80.0% | 80.0% | 0.02 | .62 |
Students on the intervention service encountered, on average, a larger number of the 10 key diagnoses (4.4 vs. 3.6, d = 0.48, P < .01), and a higher percentage of their patients had clinical conditions among the key diagnoses (59.3% vs. 46.8%, d = 0.62, P < .01). To determine whether this higher percentage was the result of admitting multiple patients with the same diagnosis, we examined the percentage of unique primary diagnoses (the number of different primary diagnoses divided by the total number of patients) and found no differences (Table 2).
Student Performance
The faculty and resident evaluations of the students showed statistically significant differences between the intervention service and the traditional services on only 2 of the 18 items: "analysis," in the data interpretation category (3.81 vs. 3.64, d = 0.35, P = .02), and "patient interaction," in the professional demeanor category (3.89 vs. 3.76, d = 0.31, P < .05). Both differences favored the intervention service. There were no statistical differences by service in student performance on the NBME subject examination (73.2 vs. 72.3, P = .39).
Student Satisfaction
Overall, 87.2% of students (177 of 203) completed the survey: 87.5% of students (112 of 128) on the intervention service and 86.7% of students (65 of 75) on the traditional services. The students on the intervention service both had a more positive overall attitude about their rotation and were more likely to find it a satisfying educational experience. Students on the intervention service also reported greater participation in patient care. Effect sizes ranged from small to medium (Table 3). The internal consistency of answers about participation in patient care was high (Pearson correlation coefficient r = 0.80).
Table 3. Student satisfaction survey responses by service

| Survey item (response anchors) | Intervention service | Traditional services | d | P value |
|---|---|---|---|---|
| My overall attitude toward this rotation is: 1. highly negative to 5. highly positive | 4.48 | 4.26 | 0.26 | .02* |
| I found this rotation a satisfying educational experience: 1. strongly disagree to 5. strongly agree | 4.49 | 4.22 | 0.35 | < .01* |
| My role on this rotation was that of an: 1. observer, 3. participant, 5. director | 3.77 | 3.33 | 0.60# | < .01 |
| My supervising interns/residents were _____ teachers: 1. poor, 3. good, 5. exemplary | 3.91 | 3.75 | 0.17 | .26* |
| My input into patient care decisions was: 1. strongly discouraged to 5. strongly encouraged | 4.45 | 3.98 | 0.66# | < .01* |
| I was able to make a significant contribution to patient care: 1. strongly disagree to 5. strongly agree | 4.19 | 3.92 | 0.34 | .02* |
| I had direct responsibility for patient care: 1. strongly disagree to 5. strongly agree | 4.33 | 3.95 | 0.46 | .01* |
| My attendings were _____ teachers: 1. poor, 3. good, 5. exemplary | 4.09 | 3.75 | 0.40 | < .01* |
| I found the feedback I received during this rotation to be: 1. insufficient, 3. appropriate, 5. excessive | 2.84 | 2.65 | 0.22 | .17* |
| The following best describes the quality of my supervision during this rotation: 1. I was expected to do things beyond my competence unsupervised; 3. the degree of supervision was appropriate for my level of training; 5. I was excessively supervised on skills I had already demonstrated | 2.95 | 3.06 | 0.18 | .19 |
| During this rotation: 1. I was expected to see too many patients; 3. I was expected to see an appropriate number of patients; 5. I expected to see more patients | 3.46 | 3.31 | 0.18 | .33* |
| Before this rotation I _____ pediatrics as a career choice: 1. had rejected, 3. was considering, 5. had decided on | 2.37 | 2.14 | 0.22 | .11* |
| This rotation increased my interest in pursuing pediatrics as a career: 1. strongly disagree to 5. strongly agree | 3.74 | 3.60 | 0.14 | .32* |
Students on the intervention service rated the teaching of their attendings, but not of their residents, higher than did students on the traditional services. Controlling for the perceived quality of the attending, 3 of 6 satisfaction outcomes remained statistically significant: role on rotation (P < .01), input into patient care decisions (P < .01), and direct responsibility for patient care (P = .04). Students on both types of service believed they were appropriately supervised (P = .19). Although students on the traditional services admitted more patients on average, there was no significant difference by service in the students' rating of patient load (P = .33).
Career Choice
The odds ratio (95% confidence interval) for students on the intervention service enrolling in a pediatric subinternship was 1.94 (0.83‐4.49); for matching in a pediatric residency, it was 2.52 (0.99‐6.37). There were no statistically significant differences by service in the percentage of students enrolling in primary care (pediatric, internal medicine, and family practice) subinternships or residencies (Table 4).
Table 4. Subinternship and residency selections by service

| | Intervention service | Traditional services | Odds ratio (95% CI) |
|---|---|---|---|
| Pediatric subinternship | 19.5% | 11.1% | 1.94 (0.83‐4.49) |
| Primary care subinternship | 68.3% | 70.8% | 0.89 (0.47‐1.67) |
| Pediatric residency | 18.6% | 8.3% | 2.52 (0.99‐6.37) |
| Primary care residency | 40.7% | 31.9% | 1.46 (0.79‐2.70) |
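As a worked check on Table 4, the odds ratio for matching in a pediatric residency follows directly from the reported percentages (the underlying counts are not shown here):

\[
\mathrm{OR} = \frac{p_I/(1-p_I)}{p_T/(1-p_T)} = \frac{0.186/0.814}{0.083/0.917} \approx 2.5,
\]

where \(p_I\) and \(p_T\) are the proportions of intervention and traditional students matching in pediatrics; this agrees with the reported 2.52 within rounding.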
Resource Utilization
One hundred and thirty‐five patients were excluded from the resource utilization analysis (n = 594) because their unique identifiers could not be found or they had been admitted to the PICU. Univariate analysis demonstrated statistically significant differences, favoring the intervention service, for patients with asthma but not for patients with bronchiolitis, febrile infants, or patients with pneumonia. Patients with asthma admitted to the intervention service had a shorter length of stay (49.9 vs. 70.1 hours, P = .02) and lower total charges ($3600 vs. $4600, P = .02), as shown in Table 5. Of the 4 multivariate models controlling for age and severity of illness, each with length of stay and total charges as the dependent variables, only the length of stay for patients with asthma was significantly shorter on the intervention service. Such patients were discharged an average of 23.3 hours earlier than patients with asthma admitted to the traditional services (P = .02).
Table 5. Length of stay and total charges for the 4 most common primary diagnoses, by service

| Diagnosis (n) | n, Intervention | n, Traditional | Length of stay (hours), Intervention | Length of stay (hours), Traditional | P value | Total charges, Intervention | Total charges, Traditional | P value |
|---|---|---|---|---|---|---|---|---|
| Bronchiolitis (210) | 159 | 51 | 63.7 | 70.5 | .20* | $4300 | $4800 | .20* |
| Febrile infant (152) | 105 | 47 | 58.8 | 58.9 | .50* | $4800 | $4900 | .28* |
| Pneumonia (123) | 82 | 41 | 84.3 | 116.8 | .71* | $6300 | $9200 | .63* |
| Asthma (109) | 80 | 29 | 49.9 | 70.1 | .02* | $3600 | $4600 | .02* |
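For readers who wish to perform the same adjustment on their own data, the multivariate models described above can be sketched as an ordinary least-squares regression. The sketch below uses statsmodels rather than Stata, and the dataframe and column names are assumptions, not the study's actual variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_los_model(df: pd.DataFrame):
    """Regress length of stay (hours) on service assignment while
    controlling for patient age and APR-DRG severity of illness
    (treated as a categorical covariate)."""
    return smf.ols(
        "length_of_stay_hours ~ C(service, Treatment(reference='traditional'))"
        " + age_years + C(apr_drg_soi)",
        data=df,
    ).fit()

# Hypothetical usage on an asthma subset of discharge data:
# model = fit_los_model(asthma_df)
# print(model.summary())  # the service coefficient is the adjusted difference
#                         # in hours (the paper reports about 23 hours shorter)
```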
DISCUSSION
This study's objective was to evaluate, using multiple outcome measures, a third‐year pediatric clerkship structure that focuses on students. Using a robust design, this study demonstrated that the intervention service was more successful than the traditional services on several outcomes. Students assigned to the intervention service were more satisfied and more likely to select pediatrics as a career. These improvements were accomplished while maintaining process measures, student performance, and resource utilization similar to those of the traditional services.
Methods
The methods used in this study compare favorably with those of other evaluations of educational interventions. The present study incorporated a randomized controlled design.12 Although several studies of ambulatory clerkships used a randomized design, few randomized all eligible students;7, 8 the others used some form of selection prior to randomization. For example, in the study by Pangaro et al., students selected their clerkship site by lottery, and students who selected a certain site were then offered the opportunity to participate in the intervention.6 The present study had several additional strengths. Multiple outcomes, including effects on patient care, were evaluated. Moreover, this study had a relatively large intervention group and total sample size compared with those of other medical education studies. Finally, because the intervention service had been in place for several years before its evaluation, the confounding influence of implementation difficulties was minimized.
Results
Few studies of ambulatory experiences have demonstrated statistically significant, let alone clinically significant, results. Most showed no statistically significant differences in student evaluations or examination scores; an exception is Grum et al., who showed improvements on 3 of 5 examinations.4 A few studies have found improved student satisfaction.3 None of the randomized controlled trials demonstrated increases in students matching in internal medicine or primary care residencies.4, 6–8 In contrast, this study produced statistically or programmatically significant results in process measures, evaluations, satisfaction, and career choices.
Several of our specific findings deserve additional comment. Although the admitting residents were instructed to assign patients to the intervention service based on their acuity and complexity, it is important to examine these residents' actual behavior. Several of our hypotheses were not validated: the students on the intervention service admitted fewer patients and were no more likely to see at least 1 patient in each age category. The admitting resident may have limited the number of patients admitted to the intervention service based on the workload of the supervising resident rather than that of the students. The supervising resident on the intervention service must round on all the patients, whereas oversight of the patients seen by students on the traditional services is shared with the interns. Having the attending on the intervention service share this supervisory responsibility might improve this outcome.
Students on the intervention service had more positive attitudes toward the rotation. In addition, potentially negative attitudes were not manifest. For example, it might be argued that third‐year medical students are not prepared to bear this increased responsibility. However, there was not a significant difference in students' perception of the quality of supervision or the workload.
Although the goal of medical education is the production of competent physicians, it is important that the process not place undue burdens on patients and the health care system. Univariate analysis showed similar resource utilization. It might be contended that the admitting resident assigned the intervention service patients who were less acutely ill; therefore, we performed multivariate analysis using APR‐DRG‐SOI to control for severity of illness. Of 8 comparisons, the only statistically significant difference, the length of stay of patients with asthma, favored the intervention service.
Limitations
Although this study had numerous strengths, it also had several limitations. The primary limitations were lack of generalizability, difficulty in obtaining authentic assessments, the potential difference between statistical and educational significance, and inability to identify which components of the intervention service were responsible for the outcomes. This study's findings may not be generalizable to other institutions. For example, institutions without age‐ or organ system‐based teams may not observe increases in the number of key diagnoses encountered in the patients cared for. Regarding the assessments, there may be better measures of clinical competence, such as an objective structured clinical examination (OSCE),13 than those used in this study. However, there were not sufficient resources to implement an OSCE at the end of the rotation.
Some might question whether the statistically significant differences have educational significance. Although that is an important concern, this study should be compared with other educational interventions that found few statistically significant, let alone educationally significant, differences. To address this concern, we calculated effect sizes. The differences in student satisfaction were small to moderate. Although the lower limit of the 95% confidence interval of the odds ratio for matching in a pediatric residency was 0.99, the magnitude was programmatically important.
Finally, this study was an evaluation of an existing program. The authors were unable to control some potential confounders including patient allocation, average daily census, and quality of teaching. For example, Griffith and colleagues have shown that working with the best teachers improves student performance.14 We were not able to randomly assign the faculty among the services, and unequal distribution of better teachers could have biased this study's outcomes. The students on the intervention service rated their attendings, but not their residents, higher than did the students on the other services. However, the linear regression model showed that the perceived quality of the attending did not account for all the differences in student satisfaction. It was not possible to control for this factor in comparing student performance or subinternship or residency selection because the survey, which included the faculty evaluations, was anonymous and therefore could not be linked to the other data sets.
The perceived differences in the quality of teaching may have resulted not from differences in the attendings but from differences in the structure of the services. Accessibility is one of the characteristics of excellent clinical teachers.15 The intervention structure may permit faculty to spend more time with students, which may increase the perceived quality of the teaching. However, it is not possible to resolve this issue with the available data.
CONCLUSIONS
The intervention service is a structure for the pediatric inpatient rotation of third‐year medical students that, instead of dividing the faculty's and supervising resident's attention between interns and students, focuses their attention on the students. Although educational interventions have often proved difficult to link to demonstrable improvements, we have shown several improvements in the students' evaluations. Moreover, the pattern of increased student satisfaction and the tendency toward more students selecting careers in pediatrics are notable, and these gains were accomplished with similar resource utilization. Therefore, this program merits being continued at our institution and possibly adopted at other medical schools. Further research is needed to determine which aspects of the intervention are responsible for its effects; some components, such as focused time with students, may be applicable to traditional services.
Acknowledgements
The authors thank Ronald Bloom for encouraging us to conduct this study; Kathy Bailey, Alice Dowling, and Margie Thompson for their assistance in the data collection; and Elizabeth Allen, Ronald Bloom, Flory Nkoy, Louis Pangaro, Stephanie Richardson, and Rajendu Srivastava for manuscript review.
References

1. The role of the student ward in the medical clerkships. J Med Educ. 1985;60:524–529.
2. Changing the fourth‐year medicine clerkship structure: a successful model for a teaching service without housestaff. J Gen Intern Med. 1993;8:31–32.
3. A randomized, controlled pilot study of placing third‐year medical clerks in a continuity clinic. Acad Med. 1993;68:845–847.
4. Consequences of shifting medical‐student education to the outpatient setting: effects on performance and experiences. Acad Med. 1996;71(suppl 1):S99–S101.
5. Learning outcomes of an ambulatory care rotation in internal medicine for junior medical students. J Gen Intern Med. 1993;8:189–192.
6. A prospective, randomized trial of a six‐week ambulatory medicine rotation. Acad Med. 1995;70:537–541.
7. Ambulatory versus inpatient rotations in teaching third‐year students internal medicine. J Gen Intern Med. 1998;13:327–330.
8. The effect of an ambulatory internal medicine rotation on students' career choices. Acad Med. 1997;72:147–149.
9. Theme issue on medical education: call for papers. JAMA. 2005;293:742.
10. Perkin RM, Swift JD, Newton DA, eds. Pediatric Hospital Medicine: Textbook of Inpatient Management. Philadelphia: Lippincott Williams & Wilkins; 2003.
11. Call for greater emphasis on effect‐size measures in published articles in Teaching and Learning in Medicine. Teach Learn Med. 2002;14:206–210.
12. Educational research and randomised trials. Med Educ. 2002;36:1002–1003.
13. The objective structured clinical examination. Arch Pediatr Adolesc Med. 2000;154:736–741.
14. Relationship of how well attending physicians teach to their students' performances and residency choices. Acad Med. 1997;72(suppl 1):S118–S120.
15. Assessing quality and costs of education in the ambulatory setting: a review of the literature. Acad Med. 2002;77:621–680.
Copyright © 2007 Society of Hospital Medicine