Are We Mature Enough to Make Use of Comparative Effectiveness Research?

Thanks to White House budget director Peter Orszag, a Dartmouth Atlas aficionado, $1.1 billion found its way into the stimulus piñata for “comparative effectiveness” research. Terrific, but – to paraphrase Jack Nicholson – can we handle the truth?

In other words, are we mature enough to use comparative effectiveness data to make tough decisions about what we will and won’t pay for? I worry that we’re not.

First, a bit of background. Our healthcare system, despite easily being the world’s most expensive, produces (by all objective measures) relatively poor quality care. Work begun 3 decades ago by Dartmouth’s Jack Wennberg and augmented more recently by Elliott Fisher has made a point sound-bitey enough for even legislators to understand: cost and quality vary markedly from region to region, variations that cannot be explained by clinical evidence and do not appear to be related to healthcare outcomes. In other words, plotting a 2×2 table with costs on one axis and quality on the other, we see a state-by-state Buckshot-o-Gram. Three key conclusions flow from this “variations research”:

  • Lots of what we do in healthcare is costly and ineffective;
  • We must somehow goose the system to move all providers and patients into the high-quality, low-cost quadrant on that 2×2 table; and
  • Better evidence about what works would help with such goose-ing.

Since nothing can happen without the research, the new funding for comparative effectiveness is welcome and helpful. But will it be sufficient to move the needle?

Here’s where things get dicey. A chief medical officer I know was once discussing unnecessary procedures in his healthcare system. In a rare moment of unvarnished truth-telling, one of his procedural specialists told him, “I make my living off unnecessary procedures.” Even if we stick to the correct side of the ethical fault line, doctors and companies inevitably believe in their technologies and products, making it tricky to get them to willingly lay down their arms. Robert Pear described the political challenges surrounding effectiveness research in last week’s New York Times:

[the legislation has become] a lightning rod for pharmaceutical and medical-device lobbyists, who fear the findings will be used by insurers or the government to deny coverage for more expensive procedures and, thus, to ration care. In addition, Republican lawmakers and conservative commentators complained that the legislation would allow the federal government to intrude in a person’s health care by enforcing clinical guidelines and treatment protocols.

At this moment, Medicare’s rules – yes, the same Medicare that’s slated to go broke in a decade or so – forbid it to consider cost in its coverage decisions. Rather, its mandate is to cover treatments that are “reasonable and necessary.” So if Medicare comes to believe that a new chemotherapy will offer patients an extra week of life at a cost of $100,000 per patient, it is pretty much obligated to cover it. This is insane, obviously, but such are the rules.

And, if anybody tries to put the Kybosh on the Chemo, you can count on boatloads of oncologists, patient advocates, and pharma companies to descend on Washington like teenagers with Obama inaugural tickets, hammering the authorities to “be humane” and “take the decisions out of the hands of government bureaucrats and MBAs” and “put them in the hands of doctors, where they belong.” (This is precisely what happened at Medicare’s hearings regarding cardiac CT, a technology that Medicare decided to cover despite a striking dearth of evidence of effectiveness). And TV newsmagazines will be right there, telling the compelling and tragic story of the kindly grandma who will never see her grandchildren’s bar mitzvahs because of Medicare’s heartlessness.

As Stalin said, “a single death is a tragedy, a million deaths a statistic.” Such is the problem with trying to make rational, evidence-based tradeoffs (that lead some people to not get the care they want) in a media-saturated open society.

But we can’t give up. We need to get a handle on healthcare costs, and it’s far better to do it by jettisoning non-evidence-based, wasteful care than by getting rid of the good stuff.

Luckily, we’ve waited long enough that we have some models to learn from – and some cautionary tales. Let’s begin by talking NICE. Literally.

A decade ago, Britain’s National Health Service launched NICE, the National Institute for Health and Clinical Excellence. In a recent NEJM article entitled “Saying No Isn’t NICE,” Robert Steinbrook reviewed the “travails” of NICE:

Since 2002, National Health Service organizations…have been required to pay for medicines and treatments recommended in NICE “technology appraisals.” The NHS usually does not provide medicines or treatments that are not recommended by NICE… NICE can be viewed as either a heartless rationing agency or an intrepid and impartial messenger for the need to set priorities in health care…

As we look to NICE for a roadmap, it is worth remembering the differing dynamics of a closed, tax-funded system such as the NHS, and the pluralistic, chaotic hodgepodge that is American healthcare. NICE’s physician-chair told Steinbrook that the Institute had to

be fair to all the patients in the National Health Service… If we spend a lot of money on a few patients, we have less money to spend on everyone else. We are not trying to be unkind or cruel. We are trying to look after everybody.

NICE, with its 270-member staff and $50M budget, not only reviews whether treatments work, but explicitly analyzes cost-effectiveness (leading some drug manufacturers to cut their prices to achieve better C-E ratios and chances of NICE approval). Although the cost-effectiveness cutoff is a bit fluid, NICE generally does not recommend treatments whose cost per quality-adjusted-life-year is more than about $40,000. According to American healthcare mythology, our cutoff is $50,000, but in reality it is hard to find examples of practices that have been withheld based on cost-effectiveness considerations.
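For readers unfamiliar with the arithmetic behind those cutoffs: cost-effectiveness is typically expressed as an incremental cost-effectiveness ratio (ICER), the extra dollars spent divided by the extra quality-adjusted life-years gained relative to standard care. Here is a minimal sketch; the treatments, prices, and QALY figures are hypothetical, invented purely for illustration:

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical numbers: a new treatment costs $60,000 and yields 1.5 QALYs;
# the standard treatment costs $10,000 and yields 0.5 QALYs.
ratio = icer(60_000, 10_000, 1.5, 0.5)
print(f"${ratio:,.0f} per QALY")  # $50,000 per QALY
```

A result like this, sitting above NICE’s rough $40,000-per-QALY threshold, is exactly the kind of number that would tip a recommendation toward “no” (or push a manufacturer to cut its price until the ratio clears the bar).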

Even in relatively non-litigious Britain, about one-third of NICE’s decisions are appealed, and several have generated impassioned pleas by patients and advocates for re-consideration. But most decisions have held up. Steinbrook praises NICE for helping to focus global attention on cost-effectiveness, but notes that

It remains to be seen… how many other countries will follow its lead. After all, saying no takes courage – and inevitably provokes outrage.

To me, NICE’s experience shows that rationing based on cost-effectiveness can be done, but we can count on it being about ten times harder in the United States (with our fragmented healthcare system, our sensationalist media, our hypertrophied legal system, and our tradition of individual benefit trumping the Good of the Commons) than it has been in the UK.

A second cautionary note: In November, the Times ran an article describing the sad case of the ALLHAT trial, the 2002 JAMA hypertension study that found that diuretics, costing pennies a day, worked better than 3 other classes of drugs (ACE inhibitors, calcium channel blockers, and alpha blockers) that cost up to 20 times more. The study, which took nearly a decade and cost over $100 million, was largely ignored – six years after its publication, the fraction of hypertensive patients on diuretics had risen by an underwhelming 5 percentage points (from 35% to 40%).

Why the wimpy response to ALLHAT’s results? Partly resistance to change, partly new drugs that came out as the study was being conducted, and partly pharmaceutical company lobbying. As Medicare’s former CMO Sean Tunis said, “there’s a lot of magical thinking that [the application of comparative effectiveness studies] will all be science and won’t be politics.”

And if that isn’t depressing enough for anyone favoring science and rationality, here’s one last cautionary tale:

In the mid-1990s, the buzzword for encoding evidence-based practice was “practice guidelines,” and an agency called the Agency for Health Care Policy and Research (AHCPR) set out to create such guidelines using clinical evidence. Sound familiar? One of the first procedures AHCPR addressed was surgical management of back pain, bringing together a panel of national experts (led by Seattle’s Rick Deyo) to review the literature and recommend evidence-based practice standards.

You can guess the rest. The AHCPR panel found virtually no evidence supporting thousands of back surgeries each year, and recommended against them. Orthopedic surgeons worried that the guidelines were the first step to blocking insurance coverage for one of their favorite pastimes. As described by Jerome Groopman in a 2002 New Yorker article,

…almost as soon as the panel convened, it came under attack. Contending that the deliberations were not an open process and that the panelists were biased against surgery, a group of spine surgeons, led by Dr. Neil Kahanovitz, an orthopedist who was then a board member of the North American Spine Society, lobbied Congress to cut off AHCPR’s funding. Deyo recently told me [Groopman] that the line taken by the opponents of the panel was “ ‘These guys are anti-surgery, they’re anti-fusion.’ But we really had no axe to grind,” he went on. “Our aim was to critically examine the evidence and outcomes of common medical practices.”

Congress, led by then-House Speaker Newt Gingrich and in a nasty, budget-slashing mood, “zeroed-out” AHCPR’s funding. Though the Agency survived (a fraction of its budget was restored by the Senate), it did the only thing it could – ending the guideline program, re-branding itself as being about producing evidence but not recommending practice, and even changing its name to the Agency for Healthcare Research and Quality (AHRQ), a masterful series of moves by the late John Eisenberg credited with saving the agency from bureaucratic purgatory. As much as we like to blame the politicos, the drug and device companies, and the MBAs, the AHCPR fiasco demonstrated that physicians are every bit as capable of self-interested venality as any other group.

So, is it worth wasting our time and money on comparative effectiveness research? I’m hoping that this is a new day – the coming implosion of the healthcare system is now well recognized, as is the quality chasm. We simply must find ways to drive the system to produce the highest quality, safest care at the lowest cost, and we need to drag the self-interested laggards along, kicking and screaming if need be. Comparative effectiveness research is the scientific scaffolding for this revolution, so bring it on.

But let’s not be naïve about it – one person’s “cost-ineffective” procedure may be a provider’s mortgage payment, a manufacturer’s stock-levitator, and a patient’s last hope for survival.

So my hope is that we have the brains to produce the right kinds of data, and the maturity to act on it, humanely but responsibly.

6 Responses to “Are We Mature Enough to Make Use of Comparative Effectiveness Research?”

  1. menoalittle February 21, 2009 at 3:15 pm #

    Bob,

    Wonderful timing and well articulated. Just this past week, CBS Nightly News reported on an outcomes study concluding that coronary angioplasty and stents are much more expensive but have no better outcomes than treating coronary disease with medication in patients who are not suffering a heart attack. Hello?

    The angioplasty technology became the sliced bread of treating coronary disease in the 1980s, even though smallish studies then raised similar concerns. Sad to say, the blindness (failure to do comparative studies) that embraced this technology was promoted by internationally renowned thought leaders at departments of medicine and divisions of cardiology at prominent academic university medical centers. Not only were the docs’ mortgages paid by these unnecessary and oft-injurious procedures, but the entire budget of the affiliated “teaching” hospital and medicine faculty salaries were supplemented by such cardiology “services”.

    The stakeholders included the balloon and catheter makers, the stent makers, the balloon- and stent-inserting experts, the departments of medicine and cardiology (and surgery, for complications), the hospital administrators, the media who liked reporting on new technology, and the patients who were falsely and deceptively led to believe that balloons and stents would prevent heart attacks. Even the insurance companies got into the act, first by failing to require appropriateness standards, and then by deciding that it was better for their bottom line to raise premiums by an amount much greater than the rate of inflation or the costs to them of the unnecessary procedures. Painful to say, but they all fed at the trough.

    And now for the so-called HIT measuring tools that the government plans to use to help establish relative efficacy. CCHIT and HIMSS and other HIT trade-promoting organizations have recently been featured in some well-read blogosphere reports and comments, starting (WSJ Health Blog on Madoff fraud and the Mayo Clinic) here: http://blogs.wsj.com/health/2009/02/12/one-other-health-outfit-stung-by-madoff-mayo-clinic/#comments

    confirming suspicions that the very HIT devices that the government thinks will accurately record and provide the data it wants have themselves not undergone scientific safety and efficacy evaluation. They are, however, “certified” (but not “qualified”) by CCHIT, an industry trade group financially supported by the HIT device makers. Indeed, the fox is guarding the henhouse.

    A concluding concept is this: most doctors know what works and what does not and what is safe and efficacious for most of the care they provide. If unsure, the practice guidelines are available as reference. In order for physicians to practice good medicine that way and embrace “do no harm” to the patient and to the economy, the flawed and convoluted payment schemes for doctors (first), and then hospitals, must be replaced. Beating the doctors into submission is not the way to go. Any ideas?

    Best regards,

    Menoalittle

  2. geriatricdoc February 21, 2009 at 8:38 pm #

    It is true that comparative-efficacy research is needed, but it is also true that before large amounts of badly needed dollars are expended, we ought to ascertain how the results of these studies will be implemented.
    The recently reported SYNTAX study, detailing a comparison of PCI versus CABG, is an example where even a well-performed study will leave questions unanswered.
    However, in real life we see interventions that are of little benefit and in fact can cause harm. This is exemplified by the “drive-by angioplasty,” where an incidentally discovered renal artery stenosis is treated with no benefit to the patient.
    Cardiac CT is another example: even the most prestigious medical Meccas offer it as part of an “executive physical,” fully realizing the financial rewards and clinical uselessness.
    Similar examples are legion. They exemplify a triumph of money over morality and yet we see such behaviors daily and ignore them.
    Such studies will only be of benefit when we as a profession abandon the oath of Omerta.

  3. Jim M February 25, 2009 at 2:05 am #

    Regarding evidence-based medicine and demonstrated comparative effectiveness: It is challenging, expensive, and time-consuming to demonstrate these concepts effectively. Sometimes we will just have to act on limited information. Unfortunately, we often rely on “expert opinion”, in spite of the fact that it is known to be the least accurate parameter. And so we consult the electrophysiologist or the invasive cardiologist. They are the experts, but they are also the wolf guarding the chicken coop. This is a difficult situation. I do not believe that the individual consumer is sophisticated enough to understand the debate. I believe that we grossly underuse mass media such as television to air these issues. Experts exist who could present and dramatize the issues and open them up to public inquiry.

    In my own area of health information technology, I am more guarded. There isn’t enough data regarding effectiveness. And yet, I believe anyway. Recently, a significant article was published in the Archives of Internal Medicine: “Clinical Information Technologies and Inpatient Outcomes,” January 26, 2009, pp. 108-114, with an associated editorial. The authors showed substantial decreases in cost, mortality, and complications associated with computerized physician order entry, clinical decision support systems, and automated progress notes. While I do believe that we need more data, I don’t think we should be a slave to it. I don’t think that we need an “RCT for parachutes”!!

    I think that health information-technology, hospital case management and hospital medicine work very well together and are areas SHM should focus on as they synergize.

  4. jnmed February 26, 2009 at 5:22 pm #

    Wonderful post and replies. 2 thoughts

    1- We do need RCTs for some parachutes, particularly expensive ones. Nothing seems more logical to patients and physicians than angioplasty for CAD – but look at the RCTs. If the COURAGE trial changes practice (or reimbursement), the financial implications would be pretty hefty. This applies to many things (tight glucose control being another “no brainer” that is being challenged).

    2- I wish we could recalibrate our standards of what a reasonable outcome is (NNT, QALY, etc.) before we label something “effective”. Once something is trumpeted as “effective”, it goes into our clinical arsenal to be used frequently and broadly for most patients, most of the time. Currently many guidelines seem to emphasize any therapy with an NNT < 100 as effective – not considering that every RCT result is usually the most optimistic outcome possible. After figuring in publication bias, generalizability, the effect of co-interventions, and the treatment of patients with multiple competing illnesses based on single-disease RCTs, the presumed effectiveness of many, many interventions has got to be unbelievably uncertain for any specific patient. When, after considering the inherent uncertainty of applying 1 study (or even 1 meta-analysis), our most realistically quantifiable expectation of benefit for a single patient is comparable to our expectation for a winning lottery ticket, it would seem that cost is as good a reason as any to limit the use of a therapy or diagnostic pathway.

    The more one thinks about these things, it is hard to see how incremental changes coming from comparative effectiveness studies will work, unless there is some political will from the medical community to dramatically change the status quo. As history teaches, I suspect we will have to be dragged kicking and screaming by outside parties in less denial. The driver is payment. More information without a hammer will have a very limited impact.

  5. MKirschMD May 10, 2009 at 8:31 pm #

    Comparative effectiveness research (CER) is long overdue. The critics will be waving their fists and grabbing their pitchforks, but the concept of determining which interventions work is essential. See http://mdwhistleblower.blogspot.com/search/label/Health%20Care%20Reform%20Quality for additional thoughts.
    If CER permits us to cull those treatments and tests that don’t deliver, then we can save billions of dollars and raise medical quality simultaneously. To those who are hostile to this effort, what better idea do you have?

    Michael Kirsch, M.D.
    http://www.MDWhistleblower.blogspot.com

  6. fintel April 22, 2010 at 6:07 am #

    I just came across this blog I was I hope it gets fixed. Thanks
