In the early 90s, I had the privilege of directing UCSF’s exceptional internal medicine residency program. It was a time of transition. A decade earlier, residency accreditation requirements (dictated by the Accreditation Council for Graduate Medical Education, ACGME) were fairly benign and largely ignored – marquee programs like ours were generally given carte blanche to organize our residents’ experience as we saw fit.
When I took over our residency in 1992, change was in the air. The ACGME began flexing its muscles, mandating that trainees spend at least one-quarter of their time in ambulatory practice, for example, and that we ensure that residents in VA-based clinics take care of women from time to time. While we residency directors weren’t thrilled with this challenge to our unfettered autonomy (truth be told, we were far more pissed at the ACGME’s maddening computer program), these early ACGME standards were perfectly reasonable and complying with them wasn’t a big deal.
But regulators and accreditors are like patients with Parkinson’s disease: they have a hard time getting started, but once they get started they tend not to stop. Soon there were more required rotations (geriatrics, adolescent medicine…) and a mandate to have at least one compensated residency leader for every 30-40 residents (today we have 5 Associate Residency Directors in addition to the director; back in my day, it was me vs. 150 residents, he says with envy). And, in 2003 came the Big Kahuna: the now-famous limits on housestaff duty hours.
While the duty hours limits have improved housestaff well-being and have probably led to fewer shift-end traffic accidents, they have not had demonstrable effects on outcomes, patient safety, or resident education. We now understand that a “simple” mandate to reduce duty hours is actually an act of breathtaking complexity: raising questions of how to do effective handoffs, staff non-resident services, balance autonomy and supervision, ensure that our residents graduate ready to be independent practitioners, and more. Our learning curve has been steep, and it still feels like we’re constantly tweaking the model to get it right.
The ACGME is poised to announce its updated regulations in the next month, and the training world is holding its collective breath. (It’s worth reading my recent AHRQ WebM&M interview with Tom Nasca, ACGME’s CEO, for some insights into his thinking). The smart betting is that ACGME won’t cut the overall weekly hours again, but will mandate defined nap periods and overnight attending supervision (anticipating the latter, we are launching nocturnist coverage of our teaching service at UCSF in July – please contact me if you’re a hospitalist/insomniac looking for work).
While the duty hour limits and the requirement for constant attending supervision are forcing enormous changes, they may not be as disruptive as the changes that ultimately flow from a simple survey that the ACGME now administers to all U.S. residents. On it, residents are asked to rate their program’s balance of service vs. education. What a concept!
Of course, this is a deceptively tricky question – service and education can be devilishly hard to tease apart. Sure, sometimes it’s easy: learning about TTP in residents’ report: education. Conversely, carrying the ever-squawking beeper of the Medical Officer of the Day (the hospital’s air traffic controller): service. But how about admitting a septic patient at 3 am? That’s both. So is participating in an M&M conference reviewing a bad outcome. I’m guessing that even the iconic act of holding the retractor has some educational value for the surgical trainee.
This service-education tension raises all sorts of issues that are colored by economics (housestaff remain the cheapest and most cost-effective labor force in healthcare), nostalgia and ego (“I went through hell, and look how good I turned out”), and even pedagogical theory. Is managing a simulated patient – even one with a “The Sims”-like look and feel – truly as educational as managing a real patient? I doubt it. Is doing a “social admission” on an 82-year-old found at home hungry and lying in excrement education? It sure doesn’t feel that way at midnight, but it is how trainees learn about our tattered healthcare safety net and about empathy. The line between service and education is anything but sharp.
In this week’s issue of the New England Journal of Medicine, a group from the Brigham reports the results of an interesting experiment conducted at their affiliated Faulkner Hospital over the past few years. In a study that reminded me of our early test of the hospitalist model (dividing the service into two halves and running the old and new models simultaneously, measuring boatloads of outcomes), their study pitted a traditional team (one attending, a resident, two interns) against a new-fangled team, which they called the Integrated Teaching Unit (ITU). The latter team had two attendings (one hospitalist and one non-hospitalist, both handpicked for their “superior teaching ability”), two residents, three interns, and a reduced intern call schedule (every 6th night) that led to a patient volume about half of usual (3.5 vs. 6.6 patients). In addition to reorganizing the team structure, the investigators did everything else they could think of to enhance education and satisfaction: geographic localization of ITU teams to a single nursing unit, multidisciplinary rounds, a faculty development program, and more.
The ITU achieved one of its key aims: improving the educational lives of the housestaff. Residents on experimental teams were far more satisfied than those on control teams (78% vs. 55%), and spent significantly more time in conferences and other learning and teaching activities (20% vs. 10% of total time). ITU-based faculty also characterized their ITU work as a much more satisfying teaching experience than their usual ward stint. Given the extent of the changes, these improvements in satisfaction (particularly for the residents) are hardly shocking (just imagine how the interns randomized to the control service felt, watching their colleagues on the ITU managing half the patient volume and being on call q 6 rather than q 4?).
What about patient-related quality outcomes? By and large, they were disappointing. Patient satisfaction was no different between the two groups, in part because the hope that decompressed housestaff would spend more time with patients was not realized. There were no differences in clinical or quality outcomes, belying the oft-made argument that our quality would be better if we just had more time.
But one patient-related outcome was seemingly affected: length of stay was significantly shorter on the experimental service than on the control one: 4.1 vs. 4.6 days (p=0.002). To me, this may be the study’s most important finding, since this LOS reduction could help produce the dinero to pay for this kind of reorganization. In fact, a similar magnitude of LOS reduction fueled the growth of the hospitalist field: hospital CFOs became willing to help support the cost of hospitalists, since, under the DRG payment system, lower hospital costs translate into more money in the hospital’s piggy bank. Remarkably, the authors of the Brigham study play down this LOS reduction (and don’t provide any data on costs) – not even mentioning it in the abstract (well, they mention it, but frame it, oddly, in the negative: “The experimental teams were not associated with a higher average length of patient stay”). They were similarly circumspect in the discussion section of the paper, where they pooh-poohed their LOS findings:
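For the quantitatively inclined, here’s a rough sketch of why that half-day matters under DRG payment: the hospital is paid a fixed amount per admission, so every avoided bed-day is cost the hospital doesn’t incur. The per-day cost figure below is a hypothetical assumption for illustration, not a number from the NEJM study.

```python
# Back-of-the-envelope: savings from a length-of-stay (LOS) reduction under
# fixed per-admission (DRG) payment. The variable cost per bed-day is an
# assumed, illustrative figure -- not data from the paper.

def annual_los_savings(admissions_per_year, los_reduction_days, variable_cost_per_day):
    """With payment fixed per admission, costs avoided by shorter stays are kept as margin."""
    return admissions_per_year * los_reduction_days * variable_cost_per_day

# 4,000 admissions/year (the Faulkner service's volume), the study's 0.5-day
# LOS difference (4.6 - 4.1), and a hypothetical $500 variable cost per bed-day:
savings = annual_los_savings(4000, 0.5, 500)
print(f"${savings:,.0f}")  # -> $1,000,000
```

Under those (admittedly made-up) cost assumptions, the LOS reduction alone is worth on the order of a million dollars a year – which is exactly why CFOs noticed the hospitalist model.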
…patients assigned to our experimental teams did not have longer stays; indeed, the length of stay for these patients was shorter than that for the patients assigned to the traditional teams, but given the potential for bias in patient assignments, we cannot be sure of the validity of this finding.
I find all of this surprising, because the LOS reduction seemed fairly robust to me. After all, the patients were essentially randomly assigned and appeared well matched (see Table 1 in the paper), and the difference remained highly significant after adjustment for potential confounders.
I’m guessing that the submitted manuscript was more aggressive in touting the LOS finding, and that the reviewers or editors asked the authors to tone it down. To be sure, we don’t know that this finding will hold up in future studies, or whether it was due to a Hawthorne effect or careful cherry picking of the attendings. But the same could have been said of our original hospitalist study in JAMA, which showed a similar LOS reduction and helped launch the fastest growing specialty in medical history.
Why am I making a fuss over the LOS reduction – after all, it wasn’t the primary goal of the study? Because the question of whether people look at this study as an interesting academic exercise (which only Harvard, with its bottomless endowment, could fund) or as a national blueprint for residency redesign completely hinges on the economics. My friend, the medical historian Ken Ludmerer, makes this point in the article’s accompanying editorial:
Every measure that might be taken to improve the learning environment carries a cost — whether it be paying for teaching time, hiring other physicians to see patients that the resident staff once saw, or relieving residents from mundane chores by employing more phlebotomists and ward clerks. The critical issues become what value teaching hospitals will place on their educational mission and whether the requisite funds can be obtained.
The problem, of course, is the funding model for medical education, the lion’s share of which comes in the form of Medicare payments to hospitals to cover the direct (residents’ salaries) and indirect (sicker patients) costs of running a training program. Hospitals protect the information about the actual Medicare dollars they receive per resident the way Coca-Cola protects its formula for soda. This means that the residency director or department chair interested in creating an environment in which housestaff are happier and better rested, with time to reflect on their patients, would somehow have to make the case to her medical center CEO that he should part with many more of his Medicare dollars for this purpose.
Will the CEO see this – cutting each intern’s average volume from 7 to 4 to improve teaching and housestaff happiness – as a worthwhile investment? I doubt it. And if CEOs don’t support it, the model has no chance of catching on. Although the NEJM study didn’t describe the incremental cost of the ITU, my back-of-the-envelope calculations tell me that staffing the entire Faulkner medical service (4000 admissions per year) with the new model would cost about a million extra dollars – both because you’re paying two teaching attendings to do the work previously done by one, and because you’d need to create a non-teaching service with 24-hour coverage to care for the patients that the decompressed residents no longer cover.
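In the spirit of showing my work, here’s one way that back-of-the-envelope calculation could go. Every figure is an assumption I’m making for illustration (FTE counts and loaded salaries are hypothetical), not data from the study or the Faulkner’s books.

```python
# Sketch of the incremental cost of running the ITU model service-wide.
# All FTE counts and salary figures are illustrative assumptions.

EXTRA_ATTENDING_FTE = 2.0          # assumed: net extra teaching-attending effort
ATTENDING_COST_PER_FTE = 200_000   # assumed loaded cost of a teaching attending

NONTEACHING_HOSPITALIST_FTE = 3.0  # assumed: 24-hour non-teaching service to absorb
HOSPITALIST_COST_PER_FTE = 220_000 # the patients the decompressed residents no longer see

extra_cost = (EXTRA_ATTENDING_FTE * ATTENDING_COST_PER_FTE
              + NONTEACHING_HOSPITALIST_FTE * HOSPITALIST_COST_PER_FTE)
print(f"${extra_cost:,.0f}")  # -> $1,060,000
```

Plug in your own local numbers and you’ll land somewhere in the same neighborhood: roughly a million extra dollars a year.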
My hat goes off to our Harvard colleagues for carrying out, and funding, this provocative experiment. It certainly caused me to think (and fantasize) about how I’d reorganize my service if we had unlimited resources. But until we can demonstrate that this kind of reorganization improves hard outcomes that hospitals really care about (like length of stay, cost, readmission rates, quality measures, or mortality), I doubt many hospitals will be willing to ante up.
If they don’t – and if subsequent studies truly show improved educational outcomes (importantly, we need long-term follow-up, since it remains an open question whether housestaff who care for half as many patients over their entire residency will come out as well prepared as their sleepier but more experienced forebears) – then it may fall to the ACGME to raise the bar on the education-to-service ratio, ultimately forcing hospitals and residency programs to implement changes designed to guarantee the optimal balance.