Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr. F. Perry Wilson of the Yale School of Medicine.
Today I am going to tell you the single best question you can ask any doctor, the one that has saved my butt countless times throughout my career, the one that every attending physician should be asking every intern and resident when they present a new case. That question: “What else could this be?”
I know, I know – “When you hear hoofbeats, think horses, not zebras.” I get it. But sometimes we get so good at our jobs, so good at recognizing horses, that we stop asking ourselves about zebras at all. You see this in a phenomenon known as “anchoring bias,” in which physicians, handed a suggested diagnosis up front, tend to latch on to it – paying attention to data that support it and ignoring data that point in other directions.
That special question – “What else could this be?” – breaks through that barrier. It forces you, the medical team, everyone, to go through the exercise of real, old-fashioned differential diagnosis. And I promise that if you do this enough, at some point it will save someone’s life.
Though the concept of anchoring bias in medicine is broadly understood, it hasn’t been broadly studied until now, with this study appearing in JAMA Internal Medicine.
Here’s the setup.
The authors hypothesized that there would be substantial anchoring bias when patients with heart failure presented to the emergency department with shortness of breath if the triage “visit reason” section mentioned HF. We’re talking about the subtle difference between the following:
- Visit reason: Shortness of breath
- Visit reason: Shortness of breath/HF
People with HF can be short of breath for lots of reasons. HF exacerbation comes immediately to mind and it should. But there are obviously lots of answers to that “What else could this be?” question: pneumonia, pneumothorax, heart attack, COPD, and, of course, pulmonary embolism (PE).
The authors leveraged the nationwide VA database, allowing them to examine data from over 100,000 patients presenting to various VA EDs with shortness of breath. They then looked for particular tests – D-dimer, CT chest with contrast, V/Q scan, lower-extremity Doppler – that would suggest that the doctor was thinking about PE. The question, then, is whether mentioning HF in that little “visit reason” section would influence the likelihood of testing for PE.
I know what you’re thinking: Not everyone who is short of breath needs an evaluation for PE. And the authors did a nice job accounting for a variety of factors that might predict a PE workup: malignancy, recent surgery, elevated heart rate, low oxygen saturation, etc. Of course, some of those same factors might predict whether that triage nurse will write HF in the visit reason section. All of these things need to be accounted for statistically, and they were – but the unofficial Impact Factor motto reminds us that “there are always more confounders.”
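To make that kind of adjustment concrete, here is a minimal sketch of my own – not the authors’ actual model – of how you might test whether the HF label still predicts PE testing after accounting for the factors that legitimately drive a workup. The data file and every column name here are hypothetical.

```python
# A minimal sketch of the adjustment described above -- NOT the authors'
# actual model. The file and column names (ed_visits.csv, pe_tested,
# hf_label, malignancy, recent_surgery, heart_rate, spo2) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ed_visits.csv")  # one row per ED visit for shortness of breath

# Logistic regression: does the HF label still predict (less) PE testing
# after adjusting for factors that legitimately drive a PE workup?
model = smf.logit(
    "pe_tested ~ hf_label + malignancy + recent_surgery + heart_rate + spo2",
    data=df,
).fit()
print(model.summary())  # the hf_label coefficient is the quantity of interest
```

The point of a model like this is that a negative, significant coefficient on the label – after the legitimate predictors have had their say – is what the anchoring-bias story predicts.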
But let’s dig into the results. I’m going to give you the raw numbers first. There were 4,392 people with HF whose visit reason section, in addition to noting shortness of breath, explicitly mentioned HF. Of those, 360 had PE testing and two had a PE diagnosed during that ED visit. So that’s around an 8% testing rate and a 0.5% hit rate for testing. But another 43 people, presumably not tested in the ED, had a PE diagnosed within the next 30 days. Assuming those PEs were present at the ED visit, the ED missed 43 of 45 PEs – about 95% – in the group with that HF label attached to them.
Let’s do the same thing for those whose visit reason just said “shortness of breath.”
Of the 103,627 people in that category, 13,886 were tested for PE and 231 of those tested positive. So that is an overall testing rate of around 13% and a hit rate of 1.7%. And 1,081 of these people had a PE diagnosed within 30 days – a count that includes the 231 caught in the ED. Assuming that those PEs were actually present at the ED visit, the docs missed 850 of the 1,081, or about 79%.
There’s one other thing to notice from the data: The overall PE rate (diagnosed by 30 days) was basically the same in both groups. That HF label does not really flag a group at lower risk for PE.
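If you want to check my arithmetic, here is a quick back-of-the-envelope script of my own that re-derives all of those rates from the raw counts. The one assumption it bakes in – about which 30-day counts include the ED diagnoses – is spelled out in the comments.

```python
# Re-deriving the figures quoted above from the raw counts. One assumption
# on my part: the quoted miss rates only work out if the 43 later PEs in
# the HF-labeled group were in addition to the 2 caught in the ED
# (missed 43 of 45), while the 1,081 thirty-day PEs in the unlabeled
# group included the 231 caught in the ED (missed 850 of 1,081).

def summarize(label, n, tested, ed_pe, pe_30d_total):
    print(label)
    print(f"  testing rate:   {tested / n:.1%}")        # PE workup in the ED
    print(f"  hit rate:       {ed_pe / tested:.1%}")    # positives among those tested
    print(f"  missed PEs:     {(pe_30d_total - ed_pe) / pe_30d_total:.1%}")
    print(f"  30-day PE rate: {pe_30d_total / n:.1%}")  # overall PE risk in the group

summarize("Visit reason: shortness of breath/HF",
          n=4_392, tested=360, ed_pe=2, pe_30d_total=45)
summarize("Visit reason: shortness of breath",
          n=103_627, tested=13_886, ed_pe=231, pe_30d_total=1_081)
```

Both groups come out at about a 1% thirty-day PE rate, which is exactly the “basically the same” point above.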
Yes, there are a lot of assumptions here, including that all PEs that were actually there in the ED got caught within 30 days, but the numbers do paint a picture. In this unadjusted analysis, it seems that the HF label leads to less testing and more missed PEs. Classic anchoring bias.
The adjusted analysis, accounting for all those PE risk factors, really didn’t change these results. You get nearly the same numbers and thus nearly the same conclusions.
Now, the main missing piece of this puzzle is in the mind of the clinician. We don’t know whether they didn’t consider PE or whether they considered PE but thought it unlikely. And in the end, it’s clear that the vast majority of people in this study did not have PE (though I suspect not all had a simple HF exacerbation). But this type of analysis is useful not only for the empiric evidence it gives us of the clinical impact of anchoring bias but also because it reminds us all to ask that all-important question: What else could this be?
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.