I was flattered.
Vladimir had brought in his eczematous infant for a second opinion. No doubt he chose me because his GP was unsure and he'd heard I'm boarded in pediatrics. Not exactly.
In fact, he had already consulted a well-known pediatric dermatologist. "I was waiting for the commuter train in Sharon," said Vladimir. "I met this Russian guy and asked him if he knew a good dermatologist."
It's nice to hear that some Russian commuter thinks I'm good. But how does he know? And what is a good doctor, anyway?
This is not an idle question. Pay for performance is our Next Big Thing. HMOs now reward hospitals for practicing better medicine. Prodded by Medicare, professional associations are developing quality guidelines. Soon enough, patients will get lower copays for consulting better doctors.
OK, what's a better doctor?
This question is too complex for me to address in depth. I've observed, however, how hard it is to judge physician quality even when we want to, such as when referring patients to Mohs surgeons, internists, ophthalmologists, or allergists.
When people are referred to me, they often say things like, "Dr. Smith says you're terrific!" Since I've only seen a handful of Dr. Smith's patients and never met him, how does he know I'm terrific? Can he gauge my diagnostic acumen? Does he know my outcome data?
When I refer, I also say my colleague is swell; I want the patient to feel confident. Although I really believe the doctor is good, critical assessment forces me to concede that my evidence is thin. What, after all, do I really know about this doctor?
▸ Patients say his staff is nice.
▸ She sends prompt referral letters.
▸ He'll see an emergency right away.
▸ I once met her in the hall, and she seemed personable.
Such criteria imply something about my colleagues' characters and managerial skills but not much about competence. Is the internist a sharp diagnostician? Would she nail kala-azar if it came her way? How would I know? Because the people I send her mostly need routine physicals, does it matter? I guess the Mohs guy has good technique—he sends pictures of gaping wounds and neat stitching. But is he better or worse than anybody else? I must admit I'm in no position to judge.
If doctors aren't too clever at recognizing quality, patients are perhaps worse. Most of us occasionally learn about truly terrible physicians who miss basic diagnoses, treat patients with casual contempt, do surgery beyond their ability, or biopsy anything that moves. They're still in practice because most of their patients are still breathing. And many of these doctors have one striking thing in common:
They are wildly successful. Their patients swear by them.
In other pursuits, gauging quality is fairly straightforward: gardening, auto repair, taxidermy. Defining excellence in medical care is a bit subtler, for reasons too numerous to list. Soon, however, we're going to have to do it anyway, because those who pay our bills want value for their money. And they say value means "quality care."
So they've started with dramatic procedures with easily measured outcomes, like mortality rates for transplants. For the rest of us, they want process data: how often doctors measure hemoglobin A1c in people with diabetes or prescribe steroid inhalers to asthma patients, and so on. Good process may turn out to produce good outcomes, or it may not. Either way, we're going to have to both do the right thing and, most crucially, report that we did it. If we don't, the counters will be displeased and our efforts won't count.
Will this make us better doctors? Consider: Everyone agrees that a good doctor assesses whether isotretinoin patients understand precautions. The iPLEDGE program forces us to click the box, "In my opinion, this patient understands and is capable of complying with the requirements of the iPLEDGE program." Does forcing us to click this box make us better?
Years of struggle with PCP referrals, OSHA, CLIA, and E/M codes make one realize the futility of debating bureaucratic imperatives. Soon we'll have more boxes to click, along with online physician-quality tables for patients to peruse.
But many will still find their way to excellence the old-fashioned way.
Like Eddie, who has a rare and debilitating neuropathy. "I'm seeing Dr. Lariat over at St. Anselm's," he says.
"I hear he's tops," I reply. "How'd you find him?"
"Funny," says Eddie. "My brother-in-law Dave has box seats at Fenway. Turns out that the guy in the next box is a neurologist at MBH. When Dave tells him what I've got, the guy says, 'Neuropathy? He's gotta see Lariat over at St. Anselm's. He's the best!'"