In this week’s JAMA, Dr. Don Berwick, CEO of the Institute for Healthcare Improvement, argues that evidence-based standards should be relaxed for quality improvement practices. Ironically, a few pages away, a Swiss study finds that an IHI-endorsed MRSA prevention strategy doesn’t work.
What’s a person or hospital to do?
A little background on both issues, beginning with the Berwick piece. Don, as everybody knows, is the world’s leading figure in safety and quality – much of today’s quality movement was generated by the force of his ideas and personality. His Institute for Healthcare Improvement (IHI) has become indispensable to workers in the field, spreading the gospel and providing tons of practical tools, conferences, and other resources that have undoubtedly saved thousands of lives. Personally, I think he is a uniquely brilliant and effective person – one of the truly remarkable people in healthcare.
But, beginning with a report on evidence-based patient safety practices that my colleagues and I published for AHRQ six years ago, he and I have found ourselves in different philosophical camps when it comes to the role of evidence. He argues that traditional rules of evidence-based medicine should be suspended or relaxed when it comes to safety and quality practices, since the risks and costs of the practices are generally low (at least compared with drugs or devices), rigorous studies are difficult to do (since they deal with such complex settings), and the “evidence” from experience and theory is often sufficiently compelling to merit adoption.
Conversely, my colleagues (largely Kaveh Shojania, now of the University of Toronto, Peter Pronovost of Johns Hopkins, and Andy Auerbach and Seth Landefeld of UCSF) and I have argued that some safety and quality practices that seem like slam dunks turn out to be worthless or even harmful, and many are surprisingly expensive. Accordingly, we should aim to study them using rigorous scientific methods, if feasible, before promoting or mandating widespread implementation. If this issue isn’t too wonkish and arcane for you, here are the JAMA Point-Counterpoint pieces that came out after our 2001 AHRQ Patient Safety report; our critique of the IHI’s decision to include implementation of Rapid Response Teams (RRTs) as part of their 100,000 Lives Campaign, accompanied by Berwick’s response; and the New England Journal article by Auerbach, Shojania and Landefeld that effectively advanced the scientific rigor viewpoint.
In this week’s JAMA, Berwick argues – with characteristic eloquence – his point, largely responding to the Auerbach article. He admonishes us not to robotically require randomized controlled trials and P values of less than 0.05 before taking action, and reminds us that it is important to consider alternative ways of viewing evidence, some drawn from social science, in interpreting the results of QI practices. I continue to believe, though, that he undersells the possibility of unanticipated consequences and the costs of many quality interventions. Sometimes, the “just do something” philosophy can turn out to be more costly – in time, treasure and political capital – than a “just sit there until we have better evidence” stance. Thoughtful people and institutions, I believe, weigh the advantages and disadvantages of these two stances before choosing what to do.
In our analysis of the science behind IHI’s 100,000 Lives Campaign, Pronovost and I focused on RRTs, since the other campaign-endorsed practices had strong supportive evidence. In IHI’s current 5 Million Lives Campaign, the practice that caught my eye (but that I’ve not yet written about) was the one promoting universal screening (Active Surveillance Cultures, or ASC) for methicillin-resistant staph aureus (MRSA), followed by treatment and isolation of patients who test positive. I am not an expert in this area, but the literature I’ve reviewed left me with vibes similar to the ones I had when I first reviewed RRTs: there is no question that MRSA is a major problem that cries out for solutions; there is some anecdotal supportive evidence for an ASC-based strategy; such a strategy has reasonable face-validity; but the ASC-based strategy is so expensive and complex that, in my judgment, the evidence is not yet sufficiently compelling to dub the practice a standard of care.
Yet IHI did include ASC as a recommended practice in the 5 Million Lives Campaign. Even scarier, a number of states (beginning with Illinois) have mandated it through state law (and many more are considering doing so). This is really dangerous – it is one thing for IHI to recommend a practice, which creates substantial social pressure (but no mandate) to adhere; it is another for a state to legislate it. That is an awfully blunt and powerful weapon, and we’d better be damn sure that the practice really works before we deploy it.
So it was ironic that the best study to date of a widespread MRSA screening and treatment program appeared in this week’s JAMA, right alongside Berwick’s editorial. Investigators from the University of Geneva reported a prospective two-year cohort study that employed a crossover design, testing standard infection control procedures alone against these procedures plus ASC on admission. They found that the addition of the ASC-based program, despite its theoretical attractiveness, had no impact on MRSA infection rates. Although the authors don’t cite costs, I can tell you that MRSA testing by PCR costs about $30 per patient (and they performed 10,000 tests), and finding single rooms for all positive patients is often expensive and a logistical nightmare. At an estimated cost of about $150 per positive patient, a large-scale screening program represents an investment of hundreds of thousands of dollars for a typical hospital. Moreover, there is another set of costs: patients in isolation are unhappy, seen less often by caregivers, and suffer more adverse events. All of these would be acceptable if the strategy worked, but, at least according to this Swiss study, it doesn’t.
As the Geneva authors and an editorialist note, the best approach to MRSA reduction today remains religious hand hygiene practices. One wonders how much the Swiss hospital could have bumped its hand washing rate by taking the resources spent on ASC and isolation and investing them in hand hygiene (in the form of a campaign, better dispensers, direct observation for adherence with real-time feedback to clinicians, and perhaps even provider bonuses for increased rates of hand washing).
What is the lesson from all of this? First of all, the debate over the role of evidence in quality improvement is healthy and inevitable, and, as with most complex questions, the right answer will lie somewhere between the extremes. Requiring randomized, double-blind trials for simple, commonsense safety and quality practices would be as silly as requiring evidence before barricading the cockpit doors of airliners after 9/11. Nobody wants that.
On the other hand, in our zeal to “do something,” vigorously promoting or mandating practices with weak evidence risks squandering scarce resources, diverting us from better strategies, and subjecting the safety field to the whims of opinions and biases. Berwick worries that our EBM pushback gives intellectual ammo to the dark forces of the status quo. This is a reasonable concern. But given the public interest in quality and patient safety, I worry more that the distance from “this seems like a good idea” to “let’s include it as part of a campaign” to “let’s make it a new Joint Commission standard” to “let’s make it a state law” is perilously short. Accordingly, we should require awfully strong evidence that we’re doing the correct thing as we traverse that path, particularly when practices are complex and expensive.
Finally, I agree with Don that ours is an honest intellectual disagreement among friends who seek precisely the same goal. He concludes his JAMA piece with these words of caution, in part directed toward me and my colleagues:
The rhetoric and tone of comment on work in the field of day-to-day health care affect the pace of improvement. Academic medicine has a major opportunity to support the redesign of health care systems; it ought to bear part of the burden for accelerating the pace, confidence, and pervasiveness of that change. Health care researchers who believe that their main role is to ride the brakes on change—to weigh evidence with impoverished tools, ill-fit for use—are not being as helpful as they need to be. “Where is the randomized trial?” is, for many purposes, the right question, but for many others it is the wrong question, a myopic one. A better one is broader: “What is everyone learning?” Asking the question that way will help clinicians and researchers see further in navigating toward improvement.
Personally, I will do what I can to keep the debate collegial and friendly, and to continue to keep my eye on the ball: improving the quality and safety of patient care. In the spirit of Don’s article, it is worth reading the Swiss study and asking, “how can we use the results to help us figure out how best to prevent MRSA infections?” rather than simply saying, “well, that doesn’t work.”
But don’t be surprised if I continue to tap those brakes from time to time. Sometimes, that’s precisely what is needed to prevent a crash.