Rating Doctors Like Restaurants

So Zagat will now be rating doctors, using the methods it perfected helping you find the best sushi in Brooklyn Heights. What’s next, Consumer Reports rating grad schools? Fodor’s rating auto mechanics?

Whatever you think of Zagat’s cross-dressing, it again demonstrates the bottomless market for doctor rankings. HealthGrades, the Colorado company that breathlessly delivers its “200,000 Americans died from medical errors in 200X!” pronouncements every year (grabbing a bunch of headlines, despite the fact that this report is based on measures that were not intended for this purpose and really aren’t measuring deaths from errors), appears to be doing quite well, thank you, largely fueled by its doctor ratings. And every metropolis’s city magazine has its “[Your City’s Name Goes Here]’s Best Doctors” issue, based almost entirely on peer surveys. Most docs scoff at these ratings (particularly docs like me who haven’t made their city’s list), but they clearly move magazines. [I’ll discuss hospital rankings, especially US News & World Report’s Best Hospitals list, in a future posting.]

Clearly, real people want to know who is a good doctor. But how should we be approaching this task?

I have the privilege of serving on the board of the American Board of Internal Medicine. ABIM, and the other specialty boards, have generally taken their charge to be to determine “competence” (with board pass rates generally above 90%, a pretty low bar) and then not to differentiate further. A doc is either board certified or she’s not. End of report.

ABIM’s recently finalized strategic plan includes a commitment to make public more information about diplomates if and when it feels such distinctions are scientifically valid and dissemination would promote high quality care. Dr. Kevin Weiss, the new president of the American Board of Medical Specialties (the umbrella organization for all the specialty boards) might go even further, faster. He recently went on record as favoring having the Boards enter the doctor ranking business – not just determining competence, but differentiating excellence from not-so-much. In a recent talk to the ABIM board in Dallas, Dr. Weiss held a copy of Dallas Magazine’s Best Doctors issue and dramatically observed that if the Boards don’t get into this game, others – with far less allegiance to scientific and psychometric Truth – will. Needless to say, his remarks generated a wee bit of controversy.

I’m also on Google’s Healthcare Advisory Board. [Note that my comments about ABIM and Google represent my own opinions, not those of these fine organizations, and do not divulge any trade secrets. You decide whether to buy more Google stock on your own.] Anyhooo, it wouldn’t surprise you to learn that Google is also thinking about what contribution it can make to the doctor rating “space.” But how to balance consumer rankings (a la Zagat), which will invariably tilt toward bedside manner and office amenities (not unimportant things, but ones that may be quite different from clinical acumen), with more meaningful assessments of clinical competence? And, as I discussed earlier this month, even when you add standard process and outcome measures to the brew, we’re still stuck scratching our heads about how to factor in clinical knowledge and decision making, things that today’s quality measures completely whiff on.

The stakes are immense, and a balanced approach is more likely to bear fruit than any single peephole. Ultimately, if I’m choosing a doc for me or a loved one, I’d like to know it all: bedside manner (4 stars from Zagat), structural measures (is the doctor’s office computerized?), process measures (are diabetics getting statins appropriately?), surrogate outcomes (what’s the average hemoglobin A1c?), and hard outcomes (what are the risk-adjusted mortality or hospitalization rates?). And then I’d like the appropriate specialty board (ABIM, American Board of Surgery, etc.) to tell me whether the physician is meaningfully engaged in quality improvement activities, and how well he or she did on the certifying exam – the best measure we have of knowledge and clinical judgment. Yes, you heard me right: I’d like the Board to tell me whether the doc was in the 5th percentile on the certifying exam or the 87th. It doesn’t pass the smell test to say that we consider both of these board-certified docs to be undifferentiate-able. In this new era of transparency, if we physicians would want that information before choosing a doc for ourselves (and I sure would), then I believe that patients should have access to it as well.

And then I’d like Google or somebody else to put all of this together into an attractive, user-friendly page that pops up when I type “Best Doctor Diabetes San Francisco” into a search engine, along with directions to the office, a link to his appointment calendar… and a parking spot.

Coming soon? The people have spoken, and the people have an uncanny way of getting what they want.

5 Responses to “Rating Doctors Like Restaurants”

  1. totoxm October 29, 2007 at 5:44 pm #

    Couldn’t agree more. I recently read a post somewhere where a physician said, “until we start admitting publicly that some doctors are better than others, we will always be a commodity”. If only physicians would get on this train, we would all be better off. The guy/gal who finishes last in his/her medical school class is still called doctor. I’m happy to see the Boards coming around on this. Physicians have so resisted change (good change or bad) that they end up not changing at all, while the rest of the world advances. They have been left out, fragmented, and commoditized. Now, while the rest of the world is in the 21st century, physicians are still practicing in the craft model. Physicians need to step up and lead the quality effort for their own good (not to mention ours).

  2. tholt October 30, 2007 at 3:13 am #

    I agree that we need to move ahead on rating quality, but I don’t think we know how to do it well. We recognize quality. All of us know physicians we think are the best or worst in our hospitals but it is hard to give an objective measure to that impression. I have spent several of the last few years in primary care leadership so I have experience failing to measure and improve quality. Here are some measurements with pros and cons:

    1) Measuring intermediate markers such as BP control, lipids or Hemoglobin A1c.
    Pro: easy to measure and reflects attention to standards of care in chronic disease management.
    Con: It encourages people to mess with the data. In my clinic I improved my numbers on optimal Hgb A1c values just by cleaning up the data: I removed all the patients who had not seen me in over 2 years despite reminder letters every 6 months. I could have made the data look even better by removing all patients who had not been in to see me in one year. It wouldn’t improve patient outcomes, but my numbers would be better.
    Physicians could also improve their numbers by encouraging difficult and noncompliant patients to go elsewhere for care. This unethical behavior would occur more frequently as the financial pressure is applied more vigorously.
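    The denominator effect described above can be made concrete with a quick sketch (the registry and numbers below are hypothetical, not real clinic data):

    ```python
    # Hypothetical registry: (patient_id, months_since_last_visit, a1c_at_goal)
    registry = [
        (1, 2, True), (2, 5, True), (3, 30, False),
        (4, 8, True), (5, 26, False), (6, 3, False),
    ]

    def at_goal_rate(patients):
        """Fraction of patients whose A1c is at goal."""
        return sum(ok for _, _, ok in patients) / len(patients)

    full_rate = at_goal_rate(registry)            # every patient on the panel
    pruned = [p for p in registry if p[1] <= 24]  # drop those not seen in 2 years
    pruned_rate = at_goal_rate(pruned)

    print(f"all patients: {full_rate:.0%}, after pruning: {pruned_rate:.0%}")
    # prints "all patients: 50%, after pruning: 75%"
    ```

    Dropping the two patients not seen in over 2 years shrinks the denominator and lifts the apparent at-goal rate from 50% to 75%, with no change in anyone’s actual care.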

    2) Reporting test scores.
    Pro: It is a core measure for choosing medical students. Why not use it to choose physicians? I also want the physician in the 87th percentile not the 5th.
    Con: It is an incomplete measure of intelligence and knowledge.

    3) Peer ratings. The Best Doctors in your city.
    Pro: In my small community I think they work well to identify physicians who get along with others and are good clinicians and leaders.
    Con: In my clinic even confidential ratings were wildly unpopular. I can’t imagine being able to gather negative data for public viewing.

    4) Patient surveys. Pro and con: It is a popularity contest. I think most patients wouldn’t choose excellent care from a physician with a lackluster personality over mediocre care from Dr. Charming.

    In my experience it is worth doing these quality ratings because physicians care more about the results than patients do. We improve on measured goals because we want good grades. Even if patients are able to get this information, I don’t think it matters as much to them. They will see their primary physician if they have a good relationship, regardless of scores. They will go to the local hospital because their physician referred them there and they have ties to that hospital. They will generally not be deterred by poor scores or attracted by high scores. (This is probably not true for payors.)

    I want physicians to be involved in deciding what is important and how to measure it. If we wait for perfect measures we’ll never get it done and someone else will decide.

  3. davisliumd October 30, 2007 at 8:23 am #

    The reason Zagat was so successful with its first endeavor, namely restaurant ratings, is because it asked consumers / diners to rate their meals. Eating a meal is a fairly common shared experience and done frequently enough that most of us know when food is so-so and when a meal is great.

    Asking consumers / patients to rate doctors on the quality of care provided isn’t nearly as meaningful or helpful. Not only are the interactions a lot fewer, but also when it comes to healthcare, they view quality in a very different way than doctors.

    The proposed criteria by Zagat — trust, communication, availability, and environment — are exactly how the public views so-so care versus great care. The Employee Benefits Research Institute, in its November 2005 Health Confidence Survey, found that over 90% of surveyed individuals felt the following traits were “very important” in determining whether the care they received was of high quality.
    1) the skill, experience, and training of their doctors (97%)
    2) their provider’s communication skills and willingness to listen and explain thoroughly (90%)
    3) the degree of control you have in decisions made regarding your health care (90%)

    These characteristics were also rated “very important,” this time by over 80% of respondents:
    1) the timeliness of getting care and treatments (89%)
    2) the ease of getting care and treatments (85%)
    3) the ability of your doctor or hospital to access your complete medical records (81%)
    4) the personal manner, sensitivity, and respect you receive from health care providers (80%)

    Whether the basic preventive screening tests and interventions were done to levels recommended by expert committees wasn’t as important to the public.
    Only 74% felt it was very important to have independent information about the quality of care provided by their doctor or hospital.

    While the simplicity of Zagat is certainly appealing, it will need to be balanced with information health policy experts and doctors feel is vital to keep people healthy. Who wouldn’t want the ideal doctor who listens, is caring, smart, and provides the right treatments and interventions regularly and consistently to keep you well? The concern is that surveys like Zagat, while well meaning, will instead showcase those doctors who have excellent bedside manner without providing equally if not more important information on how they perform on simple but basic preventive interventions.

    Without a doubt, the trend is favoring ratings for doctors. To be valid for consumers and doctors, both bedside manner and clinical quality need to be presented and weighted equally.

  4. Bob Wachter January 27, 2008 at 8:10 am #

    Here I am on yesterday’s Morning Edition on NPR, discussing doctor rating systems (including Zagat’s).

