Low-dose colchicine for ASCVD: Your questions answered
This transcript has been edited for clarity.
Dr. O’Donoghue: We’re going to discuss a very important and emerging topic, which is the use of low-dose colchicine. I think there’s much interest in the use of this drug, which now has a Food and Drug Administration indication, which we’ll talk about further, and it’s also been written into both European and American guidelines that have been recently released.
Lifestyle lipid-lowering paramount
Dr. O’Donoghue: As we think about the concept behind the use of colchicine, we’ve obviously done a large amount of research into lipid-lowering drugs, but where does colchicine now fit in?
Dr. Ridker: Let’s make sure we get the basics down. Anti-inflammatory therapy is going to be added on top of quality other care. This is not a replacement for lipid-lowering; it’s not a substitute for diet, exercise, and smoking cessation. The new data are really telling us that a patient who’s aggressively treated to guideline-recommended levels can still do much better in terms of preventing heart attack, stroke, cardiovascular death, and revascularization by adding low-dose colchicine as the first proven anti-inflammatory therapy for atherosclerotic disease.
I have to say, Michelle, for me, it’s been a wonderful end of a journey in many ways. This story starts almost 30 years ago for quite a few of us, thinking about inflammation and atherosclerosis. The whole C-reactive protein (CRP) story is still an ongoing one. We recently showed, for example, that residual inflammatory risk in some 30,000 patients, all taking a statin, was a far better predictor of the likelihood of more cardiovascular events, in particular cardiovascular death, than was residual cholesterol risk.
Think about that. We’re all aggressively giving second lipid-lowering drugs in our very sick patients, but that means inflammation is really the untapped piece of this.
The two clinical trials we have in front of us, the COLCOT trial and the LoDoCo2 trial – both New England Journal of Medicine papers, both with roughly 5,000 patients – provide very clear evidence that following a relatively recent myocardial infarction (that’s COLCOT) and in chronic, stable atherosclerosis (that’s LoDoCo2), we’re getting 25%-30% relative risk reductions in major adverse cardiovascular events (MACEs) on top of aggressive statin therapy. That’s a big deal. It’s safe, it works, and it’s fully consistent with all the information we have about inflammation being part and parcel of atherosclerosis. It’s a pretty exciting time.
Inflammatory pathway
Dr. O’Donoghue: It beautifully proves the inflammatory hypothesis in many ways. You led CANTOS, and that was a much more specific target. Here, in terms of the effects of colchicine, what do we know about how it may work on the inflammatory cascade?
Dr. Ridker: Our CANTOS trial was proof of principle that you could directly target, with a very specific monoclonal antibody, a specific piece of this innate immune cascade and lower cardiovascular event rates.
Colchicine is a more broad-spectrum drug. It does have a number of antineutrophil effects – that’s important, by the way. Neutrophils are really becoming very important in atherosclerotic disease progression. It’s an indirect inhibitor of the so-called NLRP3 inflammasome, which is where both interleukin-1 (that’s the target for canakinumab) and IL-6 are up-regulated. As you know, it’s been used to treat gout and pericarditis in high doses in short, little bursts.
The change here is this use of low-dose colchicine, that’s 0.5 mg once a day for years to treat chronic, stable atherosclerosis. It is very much like using a statin. The idea here is to prevent the progression of the disease by slowing down and maybe stabilizing the plaque so we have fewer heart attacks and strokes down the road.
It’s entering the armamentarium – at least my armamentarium – as chronic, stable secondary prevention. That’s where the new American College of Cardiology/American Heart Association guidelines also put it. It’s really in as a treatment for chronic, stable atherosclerosis. I think that’s where it belongs.
When to start colchicine, and in whom?
Dr. O’Donoghue: To that point, as we think about the efficacy, I think it’s nice, as you outlined, that we have two complementary trials that are both showing a consistent reduction in MACEs, one in the post–acute coronary syndrome (ACS) state and one for more chronic patients.
At what point do you think would be the appropriate time to start therapy, and who would you be starting it for?
Dr. Ridker: Michelle, that’s a great question. There’s a very interesting analysis that just came out from the LoDoCo2 investigators. It’s kind of a landmark analysis. What they show is that whether it’s started 1 year, 2 years, 3 years, or 4 years after the initiating myocardial infarction, the drug is very effective.
In fact, you could think about starting this drug at your clinic in patients with chronic, stable atherosclerotic disease. That’s just like we would start a statin in people who had a heart attack some time ago, and that’s absolutely fine.
I’m using it for what I call my frequent fliers, those patients who just keep coming back. They’re already on aggressive lipid-lowering therapy. I have them on beta-blockers, aspirin, and all the usual things. I say, look, I can get a large risk reduction by starting them on this drug.
There are a few caveats, Michelle. Like all drugs, colchicine comes with some adverse effects. Most of them are pretty rare, but there are some patients I would not give this drug to, just to be very clear. Colchicine is cleared by the kidney and by the liver. Patients who have severe chronic kidney disease and severe liver disease – this is a no-go for those patients. We should talk about where patients in that realm might want to go.
Then there are some unusual drug interactions. Colchicine is metabolized via CYP3A4 and the P-glycoprotein pathway. There are a few drugs, such as ketoconazole, fluconazole, and cyclosporine, that interact with it; if your primary care doctor or internist is going to start one of these for a short course, you probably want to stop the colchicine for a week or two.
In people with familial Mediterranean fever, for whom colchicine is lifesaving and life-changing and who take it for 20, 30, or 40 years, there’s been no increase in risk for cancer. There have been very few adverse effects. I think it’s interesting that we, who practice in North America, basically never see familial Mediterranean fever. If we were practicing in Lebanon, Israel, or North Africa, this would be a very common therapy that we’d all be extremely familiar with.
Dr. O’Donoghue: To that point, it’s interesting to hear that colchicine was even used by the ancient Greeks and ancient Egyptians. It’s a drug that’s been around for a long time.
In terms of its safety, some people have been talking about the fact that an increase in noncardiovascular death was seen in LoDoCo2. What are your thoughts on that? Is that anything that we should be concerned about?
Colchicine safety and contraindications
Dr. Ridker: First, to set the record straight, a meta-analysis has been done of all-cause mortality in the various colchicine trials, and the hazard ratio is 1.04. I’ll remind you, and all of us know, that the hazard ratios for all-cause mortality in the PCSK9 trials, the bempedoic acid trials, and the ezetimibe trials are also essentially neutral. We’re in a state where we don’t let these trials roll long enough to see benefits necessarily on all-cause mortality. Some of us think we probably should, but that’s just the reality of trials.
One of the most interesting things that was part of the FDA review, I suspect, was that there was no specific cause of any of this. It was not like there was a set of particular issues. I suspect that most people think this is probably the play of chance and with time, things will get better.
Again, I do want to emphasize this is not a drug for severe chronic kidney disease and severe liver disease, because those patients will get in trouble with this. The other thing that’s worth knowing is when you start a patient on low-dose colchicine – that’s 0.5 mg/d – there will be some patients who get some short-term gastrointestinal upset. That’s very common when you start colchicine at the much higher doses you might use to treat acute gout or pericarditis. In these trials, the vast majority of patients treated through that, and there were very few episodes long-term. I think it’s generally safe. That’s where we’re at.
Dr. O’Donoghue: Paul, you’ve been a leader, certainly, at looking at CRP as a marker of inflammation. Do you, in your practice, consider CRP levels when making a decision about who is appropriate for this therapy?
Dr. Ridker: That’s another terrific question. I do, because I’m trying to distinguish in my own mind patients who have residual inflammatory risk, in whom the high-sensitivity CRP (hsCRP) level remains high despite being on statins, versus those with residual cholesterol risk, in whom I’m really predominantly worried about LDL cholesterol that I haven’t brought down far enough.
I do measure it, and if the CRP remains high and the LDL cholesterol is low, to me, that’s residual inflammatory risk and that’s the patient I would target this to. Conversely, if the LDL cholesterol was still, say, above some threshold of 75-100 mg/dL and I’m worried about that, even if the CRP is low, I’ll probably add a second lipid-lowering drug.
The complexity of this, however, is that CRP was not measured in either LoDoCo2 or COLCOT. That’s mostly because they didn’t have much funding. These trials were done really on a shoestring. They were not sponsored by major pharma at all. We know that the median hsCRP in these trials was probably around 3.5-4 mg/L, so I’m pretty comfortable doing that. Others have just advocated giving it to many patients. I must say I like to use biomarkers to think through the biology and who might have the best benefit-to-risk ratio. In my practice, I am doing it that way.
Inpatient vs. outpatient initiation
Dr. O’Donoghue: This is perhaps my last question for you before we wrap up. I know you talked about use of low-dose colchicine for patients with more chronic, stable coronary disease. Now obviously, COLCOT studied patients who were early post ACS, and there we certainly think about the anti-inflammatory effects as potentially having more benefit. What are your thoughts about early initiation of colchicine in that setting, the acute hospitalized setting? Do you think it’s more appropriate for an outpatient start?
Dr. Ridker: Today, I think this is all about chronic, stable atherosclerosis. Yes, COLCOT enrolled their patients within 30 days of a recent myocardial infarction, but as we all know, that’s a pretty stable phase. The vast majority were enrolled after 15 days. There were a small number enrolled within 3 days or something like that, but the benefit is about the same in all these patients.
Conversely, there’s been a small number of trials looking at colchicine in acute coronary ischemia and they’ve not been terribly promising. That makes some sense, though, right? We want to get an artery open. In acute ischemia, that’s about revascularization. It’s about oxygenation. It’s about reperfusion injury. My guess is that 3, 4, 5, or 6 days later, when it becomes a stable situation, is when the drug is probably effective.
Again, there will be some ongoing true intervention trials with large sample sizes for acute coronary ischemia. We don’t have those yet. Right now, I think it’s a therapy for chronic, stable angina. That’s many of our patients.
I would say that if you compare the relative benefit in these trials of adding ezetimibe to a statin, that’s a 5% or 6% benefit. For PCSK9 inhibitors – we all use them – it’s about a 15% benefit. These are 25%-30% risk reductions. If we’re going to think about what’s the next drug to give on top of the statin, serious consideration should be given to low-dose colchicine.
Let me also emphasize that this is not an either/or situation. This is about the fact that we now understand atherosclerosis to be a disorder both of lipid accumulation and a proinflammatory systemic response. We can give these drugs together. I suspect that the best patient care is going to be very aggressive lipid-lowering combined with pretty aggressive inflammation inhibition. I suspect that, down the road, that’s where all of us are going to be.
Dr. O’Donoghue: Thank you so much, Paul, for walking us through that today. I think it was a very nice, succinct review of the evidence, and then also just getting our minds more accustomed to the concept that we can now start to target more orthogonal axes that really get at the pathobiology of what’s going on in the atherosclerotic plaque. I think it’s an important topic.
Dr. O’Donoghue is an associate professor of medicine at Harvard Medical School and an associate physician at Brigham and Women’s Hospital, both in Boston. Dr. Ridker is director of the Center for Cardiovascular Disease Prevention at Brigham and Women’s Hospital. Both Dr. O’Donoghue and Dr. Ridker reported numerous conflicts of interest.
This transcript has been edited for clarity.
Dr. O’Donoghue: We’re going to discuss a very important and emerging topic, which is the use of low-dose colchicine. I think there’s much interest in the use of this drug, which now has a Food and Drug Administration indication, which we’ll talk about further, and it’s also been written into both European and American guidelines that have been recently released.
Lifestyle lipid-lowering paramount
Dr. O’Donoghue: As we think about the concept behind the use of colchicine, we’ve obviously done a large amount of research into lipid-lowering drugs, but where does colchicine now fit in?
Dr. Ridker: Let’s make sure we get the basics down. Anti-inflammatory therapy is going to be added on top of quality other care. This is not a replacement for lipids; it’s not a change in diet, exercise, and smoking cessation. The new data are really telling us that a patient who’s aggressively treated to guideline-recommended levels can still do much better in terms of preventing heart attack, stroke, cardiovascular death, and revascularization by adding low-dose colchicine as the first proven anti-inflammatory therapy for atherosclerotic disease.
I have to say, Michelle, for me, it’s been a wonderful end of a journey in many ways. This story starts almost 30 years ago for quite a few of us, thinking about inflammation and atherosclerosis. The whole C-reactive protein (CRP) story is still an ongoing one. We recently showed, for example, that residual inflammatory risk in some 30,000 patients, all taking a statin, was a far better predictor of the likelihood of more cardiovascular events, in particular cardiovascular death, than was residual cholesterol risk.
Think about that. We’re all aggressively giving second lipid-lowering drugs in our very sick patients, but that means inflammation is really the untapped piece of this.
The two clinical trials we have in front of us, the COLCOT trial and the LoDoCo2 trial – both New England Journal of Medicine papers, both with roughly 5,000 patients – provide very clear evidence that following a relatively recent myocardial infarction (that’s COLCOT) in chronic stable atherosclerosis (that’s LoDoCo2), we’re getting 25%-30% relative risk reductions in major adverse cardiovascular events (MACEs) on top of aggressive statin therapy. That’s a big deal. It’s safe, it works, and it’s fully consistent with all the information we have about inflammation being part and parcel of atherosclerosis. It’s a pretty exciting time.
Inflammatory pathway
Dr. O’Donoghue: It beautifully proves the inflammatory hypothesis in many ways. You led CANTOS, and that was a much more specific target. Here, in terms of the effects of colchicine, what do we know about how it may work on the inflammatory cascade?
Dr. Ridker: Our CANTOS trial was proof of principle that you could directly target, with a very specific monoclonal antibody, a specific piece of this innate immune cascade and lower cardiovascular event rates.
Colchicine is a more broad-spectrum drug. It does have a number of antineutrophil effects – that’s important, by the way. Neutrophils are really becoming very important in atherosclerotic disease progression. It’s an indirect inhibitor of the so-called NLRP3 inflammasome, which is where both interleukin-1 (that’s the target for canakinumab) and IL-6 are up-regulated. As you know, it’s been used to treat gout and pericarditis in high doses in short, little bursts.
The change here is this use of low-dose colchicine, that’s 0.5 mg once a day for years to treat chronic, stable atherosclerosis. It is very much like using a statin. The idea here is to prevent the progression of the disease by slowing down and maybe stabilizing the plaque so we have fewer heart attacks and strokes down the road.
It’s entering the armamentarium – at least my armamentarium – as chronic, stable secondary prevention. That’s where the new American College of Cardiology/American Heart Association guidelines also put it. It’s really in as a treatment for chronic, stable atherosclerosis. I think that’s where it belongs.
When to start colchicine, and in whom?
Dr. O’Donoghue: To that point, as we think about the efficacy, I think it’s nice, as you outlined, that we have two complementary trials that are both showing a consistent reduction in MACEs, one in the post–acute coronary syndrome (ACS) state and one for more chronic patients.
At what point do you think would be the appropriate time to start therapy, and who would you be starting it for?
Dr. Ridker: Michelle, that’s a great question. There’s a very interesting analysis that just came out from the LoDoCo2 investigators. It’s kind of a landmark analysis. What they show is that 1 year, 2 years, 3 years, and 4 years since the initiating myocardial infarction, the drug is very effective.
In fact, you could think about starting this drug at your clinic in patients with chronic, stable atherosclerotic disease. That’s just like we would start a statin in people who had a heart attack some time ago, and that’s absolutely fine.
I’m using it for what I call my frequent fliers, those patients who just keep coming back. They’re already on aggressive lipid-lowering therapy. I have them on beta-blockers, aspirin, and all the usual things. I say, look, I can get a large risk reduction by starting them on this drug.
There are a few caveats, Michelle. Like all drugs, colchicine comes with some adverse effects. Most of them are pretty rare, but there are some patients I would not give this drug to, just to be very clear. Colchicine is cleared by the kidney and by the liver. Patients who have severe chronic kidney disease and severe liver disease – this is a no-go for those patients. We should talk about where patients in that realm might want to go.
Then there are some unusual drugs. Colchicine is metabolized by the CYP3A4 and the P-glycoprotein pathway. There are a few drugs, such as ketoconazole, fluconazole, and cyclosporine, that if your primary care doctor or internist is going to start for a short term, you probably want to stop your colchicine for a week or two.
In people with familial Mediterranean fever, for whom colchicine is lifesaving and life-changing and who take it for 20, 30, or 40 years, there’s been no increase in risk for cancer. There have been very few adverse effects. I think it’s interesting that we, who practice in North America, basically never see familial Mediterranean fever. If we were practicing in Lebanon, Israel, or North Africa, this would be a very common therapy that we’d all be extremely familiar with.
Dr. O’Donoghue: To that point, it’s interesting to hear that colchicine was even used by the ancient Greeks and ancient Egyptians. It’s a drug that’s been around for a long time.
In terms of its safety, some people have been talking about the fact that an increase in noncardiovascular death was seen in LoDoCo2. What are your thoughts on that? Is that anything that we should be concerned about?
Colchicine safety and contraindications
Dr. Ridker: First, to set the record straight, a meta-analysis has been done of all-cause mortality in the various colchicine trials, and the hazard ratio is 1.04. I’ll remind you, and all of us know, that the hazard ratios for all-cause mortality in the PCSK9 trials, the bempedoic acid trials, and the ezetimibe trials are also essentially neutral. We’re in a state where we don’t let these trials roll long enough to see benefits necessarily on all-cause mortality. Some of us think we probably should, but that’s just the reality of trials.
One of most interesting things that was part of the FDA review, I suspect, was that there was no specific cause of any of this. It was not like there was a set of particular issues. I suspect that most people think this is probably the play of chance and with time, things will get better.
Again, I do want to emphasize this is not a drug for severe chronic kidney disease and severe liver disease, because those patients will get in trouble with this. The other thing that’s worth knowing is when you start a patient on low-dose colchicine – that’s 0.5 mg/d – there will be some patients who get some short-term gastrointestinal upset. That’s very common when you start colchicine at the much higher doses you might use to treat acute gout or pericarditis. In these trials, the vast majority of patients treated through that, and there were very few episodes long-term. I think it’s generally safe. That’s where we’re at.
Dr. O’Donoghue: Paul, you’ve been a leader, certainly, at looking at CRP as a marker of inflammation. Do you, in your practice, consider CRP levels when making a decision about who is appropriate for this therapy?
Dr. Ridker: That’s another terrific question. I do, because I’m trying to distinguish in my own mind patients who have residual inflammatory risk, in whom the high-sensitivity CRP (hsCRP) level remains high despite being on statins versus those with residual cholesterol risk, in whom I’m really predominantly worried about LDL cholesterol, that I haven’t brought it down far enough.
I do measure it, and if the CRP remains high and the LDL cholesterol is low, to me, that’s residual inflammatory risk and that’s the patient I would target this to. Conversely, if the LDL cholesterol was still, say, above some threshold of 75-100 and I’m worried about that, even if the CRP is low, I’ll probably add a second lipid-lowering drug.
The complexity of this, however, is that CRP was not measured in either LoDoCo2 or COLCOT. That’s mostly because they didn’t have much funding. These trials were done really on a shoestring. They were not sponsored by major pharma at all. We know that the median hsCRP in these trials was probably around 3.5-4 mg/L so I’m pretty comfortable doing that. Others have just advocated giving it to many patients. I must say I like to use biomarkers to think through the biology and who might have the best benefit-to-risk ratio. In my practice, I am doing it that way.
Inpatient vs. outpatient initiation
Dr. O’Donoghue: This is perhaps my last question for you before we wrap up. I know you talked about use of low-dose colchicine for patients with more chronic, stable coronary disease. Now obviously, COLCOT studied patients who were early post ACS, and there we certainly think about the anti-inflammatory effects as potentially having more benefit. What are your thoughts about early initiation of colchicine in that setting, the acute hospitalized setting? Do you think it’s more appropriate for an outpatient start?
Dr. Ridker: Today, I think this is all about chronic, stable atherosclerosis. Yes, COLCOT enrolled their patients within 30 days of a recent myocardial infarction, but as we all know, that’s a pretty stable phase. The vast majority were enrolled after 15 days. There were a small number enrolled within 3 days or something like that, but the benefit is about the same in all these patients.
Conversely, there’s been a small number of trials looking at colchicine in acute coronary ischemia and they’ve not been terribly promising. That makes some sense, though, right? We want to get an artery open. In acute ischemia, that’s about revascularization. It’s about oxygenation. It’s about reperfusion injury. My guess is that 3, 4, 5, or 6 days later, when it becomes a stable situation, is when the drug is probably effective.
Again, there will be some ongoing true intervention trials with large sample sizes for acute coronary ischemia. We don’t have those yet. Right now, I think it’s a therapy for chronic, stable angina. That’s many of our patients.
I would say that if you compare the relative benefit in these trials of adding ezetimibe to a statin, that’s a 5% or 6% benefit. For PCSK9 inhibitors – we all use them – it’s about a 15% benefit. These are 25%-30% risk reductions. If we’re going to think about what’s the next drug to give on top of the statin, serious consideration should be given to low-dose colchicine.
Let me also emphasize that this is not an either/or situation. This is about the fact that we now understand atherosclerosis to be a disorder both of lipid accumulation and a proinflammatory systemic response. We can give these drugs together. I suspect that the best patient care is going to be very aggressive lipid-lowering combined with pretty aggressive inflammation inhibition. I suspect that, down the road, that’s where all of us are going to be.
Dr. O’Donoghue: Thank you so much, Paul, for walking us through that today. I think it was a very nice, succinct review of the evidence, and then also just getting our minds more accustomed to the concept that we can now start to target more orthogonal axes that really get at the pathobiology of what’s going on in the atherosclerotic plaque. I think it’s an important topic.
Dr. O’Donoghue is an associate professor of medicine at Harvard Medical School and an associate physician at Brigham and Women’s Hospital, both in Boston. Dr. Ridker is director of the Center for Cardiovascular Disease Prevention at Brigham and Women’s Hospital. Both Dr. O’Donoghue and Dr. Ridker reported numerous conflicts of interest.
This transcript has been edited for clarity.
Dr. O’Donoghue: We’re going to discuss a very important and emerging topic, which is the use of low-dose colchicine. I think there’s much interest in the use of this drug, which now has a Food and Drug Administration indication, which we’ll talk about further, and it’s also been written into both European and American guidelines that have been recently released.
Lifestyle lipid-lowering paramount
Dr. O’Donoghue: As we think about the concept behind the use of colchicine, we’ve obviously done a large amount of research into lipid-lowering drugs, but where does colchicine now fit in?
Dr. Ridker: Let’s make sure we get the basics down. Anti-inflammatory therapy is going to be added on top of quality other care. This is not a replacement for lipids; it’s not a change in diet, exercise, and smoking cessation. The new data are really telling us that a patient who’s aggressively treated to guideline-recommended levels can still do much better in terms of preventing heart attack, stroke, cardiovascular death, and revascularization by adding low-dose colchicine as the first proven anti-inflammatory therapy for atherosclerotic disease.
I have to say, Michelle, for me, it’s been a wonderful end of a journey in many ways. This story starts almost 30 years ago for quite a few of us, thinking about inflammation and atherosclerosis. The whole C-reactive protein (CRP) story is still an ongoing one. We recently showed, for example, that residual inflammatory risk in some 30,000 patients, all taking a statin, was a far better predictor of the likelihood of more cardiovascular events, in particular cardiovascular death, than was residual cholesterol risk.
Think about that. We’re all aggressively giving second lipid-lowering drugs in our very sick patients, but that means inflammation is really the untapped piece of this.
The two clinical trials we have in front of us, the COLCOT trial and the LoDoCo2 trial – both New England Journal of Medicine papers, both with roughly 5,000 patients – provide very clear evidence that following a relatively recent myocardial infarction (that’s COLCOT) in chronic stable atherosclerosis (that’s LoDoCo2), we’re getting 25%-30% relative risk reductions in major adverse cardiovascular events (MACEs) on top of aggressive statin therapy. That’s a big deal. It’s safe, it works, and it’s fully consistent with all the information we have about inflammation being part and parcel of atherosclerosis. It’s a pretty exciting time.
Inflammatory pathway
Dr. O’Donoghue: It beautifully proves the inflammatory hypothesis in many ways. You led CANTOS, and that was a much more specific target. Here, in terms of the effects of colchicine, what do we know about how it may work on the inflammatory cascade?
Dr. Ridker: Our CANTOS trial was proof of principle that you could directly target, with a very specific monoclonal antibody, a specific piece of this innate immune cascade and lower cardiovascular event rates.
Colchicine is a more broad-spectrum drug. It does have a number of antineutrophil effects – that’s important, by the way. Neutrophils are really becoming very important in atherosclerotic disease progression. It’s an indirect inhibitor of the so-called NLRP3 inflammasome, which is where both interleukin-1 (that’s the target for canakinumab) and IL-6 are up-regulated. As you know, it’s been used to treat gout and pericarditis in high doses in short, little bursts.
The change here is this use of low-dose colchicine, that’s 0.5 mg once a day for years to treat chronic, stable atherosclerosis. It is very much like using a statin. The idea here is to prevent the progression of the disease by slowing down and maybe stabilizing the plaque so we have fewer heart attacks and strokes down the road.
It’s entering the armamentarium – at least my armamentarium – as chronic, stable secondary prevention. That’s where the new American College of Cardiology/American Heart Association guidelines also put it. It’s really in as a treatment for chronic, stable atherosclerosis. I think that’s where it belongs.
When to start colchicine, and in whom?
Dr. O’Donoghue: To that point, as we think about the efficacy, I think it’s nice, as you outlined, that we have two complementary trials that are both showing a consistent reduction in MACEs, one in the post–acute coronary syndrome (ACS) state and one for more chronic patients.
At what point do you think would be the appropriate time to start therapy, and who would you be starting it for?
Dr. Ridker: Michelle, that’s a great question. There’s a very interesting analysis that just came out from the LoDoCo2 investigators. It’s kind of a landmark analysis. What they show is that 1 year, 2 years, 3 years, and 4 years since the initiating myocardial infarction, the drug is very effective.
In fact, you could think about starting this drug at your clinic in patients with chronic, stable atherosclerotic disease. That’s just like we would start a statin in people who had a heart attack some time ago, and that’s absolutely fine.
I’m using it for what I call my frequent fliers, those patients who just keep coming back. They’re already on aggressive lipid-lowering therapy. I have them on beta-blockers, aspirin, and all the usual things. I say, look, I can get a large risk reduction by starting them on this drug.
There are a few caveats, Michelle. Like all drugs, colchicine comes with some adverse effects. Most of them are pretty rare, but there are some patients I would not give this drug to, just to be very clear. Colchicine is cleared by the kidney and by the liver. Patients who have severe chronic kidney disease and severe liver disease – this is a no-go for those patients. We should talk about where patients in that realm might want to go.
Then there are some unusual drug interactions. Colchicine is metabolized via CYP3A4 and transported by P-glycoprotein. There are a few drugs, such as ketoconazole, fluconazole, and cyclosporine, where if your primary care doctor or internist is going to start one for a short term, you probably want to stop your colchicine for a week or two.
In people with familial Mediterranean fever, for whom colchicine is lifesaving and life-changing and who take it for 20, 30, or 40 years, there’s been no increase in risk for cancer. There have been very few adverse effects. I think it’s interesting that we, who practice in North America, basically never see familial Mediterranean fever. If we were practicing in Lebanon, Israel, or North Africa, this would be a very common therapy that we’d all be extremely familiar with.
Dr. O’Donoghue: To that point, it’s interesting to hear that colchicine was even used by the ancient Greeks and ancient Egyptians. It’s a drug that’s been around for a long time.
In terms of its safety, some people have been talking about the fact that an increase in noncardiovascular death was seen in LoDoCo2. What are your thoughts on that? Is that anything that we should be concerned about?
Colchicine safety and contraindications
Dr. Ridker: First, to set the record straight, a meta-analysis has been done of all-cause mortality in the various colchicine trials, and the hazard ratio is 1.04. I’ll remind you, and all of us know, that the hazard ratios for all-cause mortality in the PCSK9 trials, the bempedoic acid trials, and the ezetimibe trials are also essentially neutral. We’re in a state where we don’t let these trials roll long enough to see benefits necessarily on all-cause mortality. Some of us think we probably should, but that’s just the reality of trials.
One of the most interesting things that was part of the FDA review, I suspect, was that there was no specific cause behind any of this. It was not as though there was a set of particular issues. I suspect that most people think this is probably the play of chance and that with time, things will get better.
Again, I do want to emphasize this is not a drug for severe chronic kidney disease and severe liver disease, because those patients will get in trouble with this. The other thing that’s worth knowing is when you start a patient on low-dose colchicine – that’s 0.5 mg/d – there will be some patients who get some short-term gastrointestinal upset. That’s very common when you start colchicine at the much higher doses you might use to treat acute gout or pericarditis. In these trials, the vast majority of patients treated through that, and there were very few episodes long-term. I think it’s generally safe. That’s where we’re at.
Dr. O’Donoghue: Paul, you’ve been a leader, certainly, at looking at CRP as a marker of inflammation. Do you, in your practice, consider CRP levels when making a decision about who is appropriate for this therapy?
Dr. Ridker: That’s another terrific question. I do, because I’m trying to distinguish in my own mind patients who have residual inflammatory risk – those whose high-sensitivity CRP (hsCRP) level remains high despite statin therapy – from those with residual cholesterol risk, in whom my predominant worry is an LDL cholesterol level that I haven’t brought down far enough.
I do measure it, and if the CRP remains high and the LDL cholesterol is low, to me, that’s residual inflammatory risk, and that’s the patient I would target this to. Conversely, if the LDL cholesterol is still, say, above some threshold of 75-100 mg/dL and I’m worried about that, even if the CRP is low, I’ll probably add a second lipid-lowering drug.
The complexity of this, however, is that CRP was not measured in either LoDoCo2 or COLCOT. That’s mostly because they didn’t have much funding. These trials were done really on a shoestring; they were not sponsored by major pharma at all. We know that the median hsCRP in these trials was probably around 3.5-4 mg/L, so I’m pretty comfortable doing that. Others have advocated simply giving it to most patients. I must say I like to use biomarkers to think through the biology and who might have the best benefit-to-risk ratio. In my practice, I am doing it that way.
Inpatient vs. outpatient initiation
Dr. O’Donoghue: This is perhaps my last question for you before we wrap up. I know you talked about use of low-dose colchicine for patients with more chronic, stable coronary disease. Now obviously, COLCOT studied patients who were early post ACS, and there we certainly think about the anti-inflammatory effects as potentially having more benefit. What are your thoughts about early initiation of colchicine in that setting – the acute, hospitalized setting? Or do you think it’s more appropriate for an outpatient start?
Dr. Ridker: Today, I think this is all about chronic, stable atherosclerosis. Yes, COLCOT enrolled their patients within 30 days of a recent myocardial infarction, but as we all know, that’s a pretty stable phase. The vast majority were enrolled after 15 days. There were a small number enrolled within 3 days or something like that, but the benefit is about the same in all these patients.
Conversely, there’s been a small number of trials looking at colchicine in acute coronary ischemia and they’ve not been terribly promising. That makes some sense, though, right? We want to get an artery open. In acute ischemia, that’s about revascularization. It’s about oxygenation. It’s about reperfusion injury. My guess is that 3, 4, 5, or 6 days later, when it becomes a stable situation, is when the drug is probably effective.
Again, there will be some ongoing true intervention trials with large sample sizes for acute coronary ischemia. We don’t have those yet. Right now, I think it’s a therapy for chronic, stable angina. That’s many of our patients.
I would say that if you compare the relative benefit in these trials of adding ezetimibe to a statin, that’s a 5% or 6% benefit. For PCSK9 inhibitors – we all use them – it’s about a 15% benefit. These are 25%-30% risk reductions. If we’re going to think about what’s the next drug to give on top of the statin, serious consideration should be given to low-dose colchicine.
Let me also emphasize that this is not an either/or situation. This is about the fact that we now understand atherosclerosis to be a disorder both of lipid accumulation and a proinflammatory systemic response. We can give these drugs together. I suspect that the best patient care is going to be very aggressive lipid-lowering combined with pretty aggressive inflammation inhibition. I suspect that, down the road, that’s where all of us are going to be.
Dr. O’Donoghue: Thank you so much, Paul, for walking us through that today. I think it was a very nice, succinct review of the evidence, and then also just getting our minds more accustomed to the concept that we can now start to target more orthogonal axes that really get at the pathobiology of what’s going on in the atherosclerotic plaque. I think it’s an important topic.
Dr. O’Donoghue is an associate professor of medicine at Harvard Medical School and an associate physician at Brigham and Women’s Hospital, both in Boston. Dr. Ridker is director of the Center for Cardiovascular Disease Prevention at Brigham and Women’s Hospital. Both Dr. O’Donoghue and Dr. Ridker reported numerous conflicts of interest.
My favorite iron pearls
A 45-year-old woman presents for evaluation of fatigue. She has been tired for the past 6 months. She has had no problems with sleep and no other new symptoms. Her physical exam is unremarkable. Her Patient Health Questionnaire–9 score is 4. Lab results are as follows: hemoglobin, 13 g/dL; hematocrit, 39%; mean corpuscular volume, 90 fL; blood urea nitrogen, 10 mg/dL; Cr, 1.0 mg/dL; AST, 20 IU/L; ALT, 15 IU/L; ferritin, 35 mcg/L; thyroid-stimulating hormone, 3.5 mIU/L.
What would you recommend?
A. Sertraline
B. Sleep study
C. Iron supplementation
I would treat this patient with iron. Verdon and colleagues conducted a randomized, double-blind placebo-controlled trial of iron treatment in nonanemic women.1 The women who received iron had a much greater reduction in fatigue score, compared with the women who did not (P < .004). Only women with ferritin levels less than 50 mcg/L benefited. Houston and colleagues performed a systematic review of the literature of iron supplementation for fatigue and concluded that iron should be considered for treatment of fatigue in nonanemic women.2 The key number for benefit was a ferritin level less than 50 mcg/L.
Hair thinning is a common concern for many women. Does iron deficiency have a possible role in this problem? A number of studies have correlated low ferritin levels with hair loss.3 There is less clear evidence of iron treatment being effective. Hard studied 140 women with diffuse hair loss, and found 19% had iron deficiency without anemia.4 All patients with iron deficiency were treated with oral iron and in all patients hair loss ceased, and hair regrowth occurred. The target ferritin goal for treatment is greater than 40 mcg/L.5
Iron deficiency is an important trigger for restless legs syndrome (RLS). All patients who present with RLS should have ferritin checked, with appropriate evaluation for the cause of iron deficiency if ferritin levels are low. Allen and colleagues published clinical practice guidelines for iron treatment of RLS.6 The guidelines conclude that ferric carboxymaltose (1,000 mg) is effective for treating moderate to severe RLS in those with serum ferritin less than 300 mcg/L and could be used as first-line therapy for RLS in adults, with oral iron (65 mg) possibly effective in patients with ferritin levels less than 75 mcg/L.
Pearl: Think of iron as therapy for fatigue in nonanemic women with a ferritin level less than 50 mcg/L, consider a trial of iron for thinning hair in women with ferritin levels less than 50 mcg/L, and consider a trial of iron in patients with RLS and ferritin levels less than 75 mcg/L.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. Contact Dr. Paauw at [email protected].
References
1. Verdon F et al. BMJ. 2003 May 24;326(7399):1124.
2. Houston BL et al. BMJ Open. 2018 Apr 5;8(4):e019240.
3. Almohanna HM et al. Dermatol Ther (Heidelb). 2019 Mar;9(1):51-70.
4. Hard S. Acta Derm Venereol. 1963;43:562-9.
5. Kantor J et al. J Invest Dermatol. 2003 Nov;121(5):985-8.
6. Allen RP et al. Sleep Med. 2018 Jan;41:27-44.
Are fish oils on the hook for AFib risk?
Questions about omega-3 fatty acid supplements come up often in the atrial fibrillation (AFib) clinic.
The story begins with the simple observation that populations who eat lots of oily fish have fewer coronary events. This correlation provoked great interest in concentrating fish oils in pill form and studying their use to promote health.
OMEMI secondary analysis
Peder Myhre, MD, and colleagues recently published a secondary analysis of the OMEMI trial looking at both the risk and possible causes of AFib in the omega-3 group.
The OMEMI trial randomly assigned slightly more than 1,000 older patients (mean age, 75 years) post–myocardial infarction to either 1.8 g/d of fish oil supplements or placebo for 2 years. The supplements comprised 930 mg of eicosapentaenoic acid (EPA) and 660 mg of docosahexaenoic acid (DHA). The main trial reported no difference in a composite primary endpoint of MI, revascularization, stroke, death, or hospitalization for heart failure.
The secondary analysis explored the 75% of patients in the main trial who had no history of AFib. It looked at how many in each group developed either true clinical AFib or what the authors called micro-AFib, defined as short bursts of irregular atrial activity lasting seconds.
The subanalysis had three main findings: patients in the supplement arm had a 90% higher rate of AFib or micro-AFib compared with patients on placebo; EPA had the strongest effect on the association; and there was a graded risk for AFib with increasing serum EPA levels.
The authors raised the possibility that more micro-AFib might be a possible mediator of AFib risk.
Trials of low-dose EPA and DHA
First, the low-dose trials. In the ASCEND trial from 2018, more than 15,000 patients with diabetes were randomly assigned to either 1 g of omega-3 fatty acids (460-mg EPA and 380-mg DHA) or an olive oil placebo.
The trial was neutral. After 7.4 years, the primary endpoint of MI, stroke, transient ischemic attack, or cardiovascular death occurred in 8.9% of the supplement group versus 9.2% of the placebo arm. The incidence of AFib was higher in the omega-3 group but did not reach statistical significance (2.1% vs. 1.7% for placebo; hazard ratio, 1.23; 95% confidence interval, 0.98-1.54).
Another neutral CV trial, VITAL, specifically studied the effects of marine omega-3 pills (460-mg EPA and 380-mg DHA) in older adults without heart disease, cancer, or AFib. After slightly more than 5 years, AFib occurred at a similar rate in the active and placebo arms (3.7% vs. 3.4% for placebo; HR, 1.09; 95% CI, 0.96-1.24; P = .19).
Trials of very high-dose marine omega-3s
Next came trials of higher doses in higher-risk populations.
In 2020, JAMA published the STRENGTH trial, which compared 4 g/d of a carboxylic acid formulation of EPA and DHA with a corn oil placebo in more than 13,000 patients who either had established atherosclerotic CV disease (ASCVD) or were at high risk for ASCVD.
The trial was terminated early because of futility and a signal of increased AFib risk in the supplement arm.
Nearly the same number of patients in the supplement versus placebo arm experienced a primary composite endpoint of major adverse cardiac events: 12.0% versus 12.2%, respectively.
AFib was a tertiary endpoint in this trial. An increase in investigator-reported new-onset AFib was observed in the omega-3 group: 2.2% vs. 1.3% for corn oil (HR, 1.69; 95% CI, 1.29-2.21; nominal P < .001).
The REDUCE-IT trial randomly assigned more than 8,000 patients who had ASCVD or diabetes and high ASCVD risk and elevated triglyceride levels to either 4 g of icosapent ethyl daily, a concentrated form of EPA, or a mineral oil placebo.
After nearly 5 years, there was a 4.8% absolute risk reduction in the primary endpoint of CV death, MI, stroke, revascularization, or unstable angina with icosapent ethyl. An increase in atherogenic biomarkers in the mineral oil placebo complicated interpretation of this trial.
Hospitalization for AFib or flutter occurred in 3.1% of the active arm versus 2.1% of the mineral oil group (P = .004).
Meta-analysis of marine omega-3 supplement trials
In 2021, Baris Gencer and colleagues performed a meta-analysis of these five trials plus two more (GISSI-HF and RP), looking specifically at risk for AFib. Their final analysis included more than 81,000 patients followed for nearly 5 years.
Omega-3 fatty acid supplements were associated with a 25% increase in the risk for AFib (HR, 1.25; 95% CI, 1.07-1.46; P = .013). Exploring further, they noted a dose-dependent relationship: most of the increased risk occurred in trials that tested greater than 1 g/d.
Summary
When faced with surprise findings, I like to think things through.
First, about plausibility. Omega-3 fatty acids clearly exert electrophysiologic effects on cardiac cells, so an increase in AFib risk is plausible. The exact underlying mechanism may be unknown, but exact mechanisms are less important than actual clinical effects (see sodium-glucose cotransporter 2 inhibitors).
What about causality? Factors supporting causality include plausibility, consistency of increased AFib risk in multiple studies, and a dose-response relationship.
I see multiple clinical implications of this observation.
The first is the power of the randomized trial to inform practice. If we relied only on observational evidence, we might have assumed that since high fish consumption in populations is associated with lower rates of cardiac events, fish oil supplementation would also reduce cardiac events. Other than the outlier trial, REDUCE-IT, with its mineral oil placebo, the preponderance of the randomized controlled trial evidence does not support fish oils for the reduction of CV events.
Randomized controlled trials also exposed the AFib risk. This would have been difficult to sort out in nonrandom observational studies.
Another underappreciated lesson is the notion that drugs, including supplements, can have off-target effects.
Consider the case of statin drugs. It is widely assumed that statins reduce cardiac events by lowering low-density lipoprotein cholesterol (LDL-C). Yet, statins became a mainstay not because of LDL-C lowering but because multiple trials found that this class of drugs reduced cardiac events without increasing adverse effects.
Omega-3 fatty acids reduce triglyceride levels, but this is not enough to adopt the use of these pills. The lack of consistent reduction in CV events and the off-target signal of AFib risk argue against routine use of fish-oil pills.
I will close with uncertainty. Though there is plausibility and multiple reasons to infer causality of marine omega-3s in increasing AFib risk, the effect size remains unknown.
In an editorial accompanying the recent meta-analysis, epidemiologist Michelle Samuel, MPH, PhD, and electrophysiologist Stanley Nattel, MD, cautioned readers on a technical but important point. It concerns the matter of competing risks, such as death, in the analysis of AFib risk, meaning that patients who died may have developed AFib had they lived. They provide a detailed explanation in the open access article, but the take-home is that the exact effect size is difficult to quantify without patient-level original data.
No matter. I find the signal of increased AFib risk an important one to use at the bedside.
Intermittent AFib has an unpredictable natural history. It often resolves as mysteriously as it arises. When patients take fish-oil supplements, I cite these studies, note the lack of CV protection, and then recommend stopping the pills.
This allows for one of the most important interventions in AFib care: time.
Dr. Mandrola is a clinical electrophysiologist with Baptist Medical Associates, Louisville, Ky. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Questions about omega-3 fatty acid supplements come up often in the atrial fibrillation (AFib) clinic.
The story begins with the simple observation that populations who eat lots of oily fish have fewer coronary events. This correlation provoked great interest in concentrating fish oils in pill form and studying their use to promote health.
OMENI secondary analysis
Peder Myhre, MD, and colleagues recently published a secondary analysis of the OMENI trial looking at both the risk and possible causes of AFib in the omega-3 group.
The OMENI trial randomly assigned slightly more than 1,000 older patients (mean age, 75 years) post–myocardial infarction to either 1.8 g/d of fish oil supplements versus placebo for 2 years. The supplements comprised 930 mg of eicosapentaenoic acid (EPA) and 660 mg of docosahexaenoic acid (DHA). The main trial reported no difference in a composite primary endpoint of MI, revascularization, stroke, death, or hospitalization for heart failure.
The secondary analysis explored the 75% of patients in the main trial who had no history of AFib. It looked at how many in each group developed either true clinical AFib or what the authors called micro-AFib, defined as short bursts of irregular atrial activity lasting seconds.
The sub-analysis had three main findings: Patients in the supplement arm had a 90% higher rate of AFib or micro-AFib, compared with patients on placebo, EPA had the strongest effect on the association, and there was a graded risk for AFib with increasing serum EPA levels.
The authors raised the possibility that more micro-AFib might be a possible mediator of AFib risk.
Trials of low-dose EPA and DHA
First, the low-dose trials. In the ASCEND trial from 2018, more than 15,000 patients with diabetes were randomly assigned to either 1 g of omega-3 fatty acids (460-mg EPA and 380-mg DHA) or mineral oil.
The trial was neutral. After 7.4 years, the primary endpoint of MI, stroke, transient ischemic attack, or cardiovascular death occurred in 8.9% of the supplement group versus 9.2% of the placebo arm.The incidence of AFib was higher in the omega-3 group but did not reach statistical significance (2.1% vs. 1.7% for placebo; hazard ratio, 1.23; 95% confidence interval, 0.98-1.54).
Another neutral CV trial, VITAL, specifically studied the effects of marine omega-3 pills (460-mg EPA and 380-mg DHA) in older adults without heart disease, cancer, or AFib. After slightly more than 5 years, AFib occurred at a similar rate in the active arm and placebo arms (3.7% vs. 3.4% for placebo; HR, 1.09; 95% CI, 0.96-1.24; P = .19)
Trials of very high-dose marine omega-3s
Next came trials of higher doses in higher-risk populations.
In 2020, JAMA published the STRENGTH trial, which compared 4 g/d of a carboxylic acid formulation of EPA and DHA with a corn oil placebo in more than 13,000 patients who either had established atherosclerotic CV disease (ASCVD) or were at high risk for ASCVD.
The trial was terminated early because of futility and a signal of increased AFib risk in the supplement arm.
Nearly the same number of patients in the supplement versus placebo arm experienced a primary composite endpoint of major adverse cardiac events: 12.0% versus 12.2%, respectively.
AFib was a tertiary endpoint in this trial. An increase in investigator-reported new-onset AFib was observed in the omega-3 group: 2.2% vs. 1.3% for corn oil (HR, 1.69; 95% CI, 1.29-2.21; nominal P < .001).
The REDUCE-IT trial randomly assigned more than 8,000 patients who had ASCVD or diabetes and high ASCVD risk and elevated triglyceride levels to either 4 g of icosapent ethyl daily, a concentrated form of EPA, or a mineral oil placebo.
After nearly 5 years, there was a 4.8% absolute risk reduction in the primary endpoint of CV death, MI, stroke, revascularization, or unstable angina with icosapent ethyl. An increase in atherogenic biomarkers in the mineral oil placebo complicated interpretation of this trial.
Hospitalization for AFib or flutter occurred in 3.1% of the active arm versus 2.1% of the mineral oil group (P = .004).
Meta-analysis of marine omega-3 supplement trials
In 2021, Baris Gencer and colleagues performed a meta-analysis of these five trials plus 2 more (GISSI-HF and RP) looking specifically at risk for AFib. Their final analysis included more than 81,000 patients followed for nearly 5 years.
Omega 3 fatty acid supplements associated with a 25% increase in the risk for AFib (HR, 1.25; 95% CI, 1.07-1.46P =.013). Exploring further, they noted a dose-dependent relationship. Most of the increased risk occurred in trials that tested greater than 1 g/d.
Summary
When faced with surprise findings, I like to think things through.
First about plausibility. Omega-3 fatty acids clearly exert electrophysiologic effects on cardiac cells, an increase in AFib risk is plausible. The exact underlying mechanism may be unknown, but exact mechanisms are less important than actual clinical effects (see sodium-glucose cotransporter 2 inhibitors).
What about causality? Factors supporting causality include plausibility, consistency of increased AFib risk in multiple studies, and a dose-response relationship.
Questions about omega-3 fatty acid supplements come up often in the atrial fibrillation (AFib) clinic.
The story begins with the simple observation that populations who eat lots of oily fish have fewer coronary events. This correlation provoked great interest in concentrating fish oils in pill form and studying their use to promote health.
OMEMI secondary analysis
Peder Myhre, MD, and colleagues recently published a secondary analysis of the OMEMI trial looking at both the risk and possible causes of AFib in the omega-3 group.
The OMEMI trial randomly assigned slightly more than 1,000 older patients (mean age, 75 years) post–myocardial infarction to either 1.8 g/d of fish oil supplements or placebo for 2 years. The supplements comprised 930 mg of eicosapentaenoic acid (EPA) and 660 mg of docosahexaenoic acid (DHA). The main trial reported no difference in the composite primary endpoint of MI, revascularization, stroke, death, or hospitalization for heart failure.
The secondary analysis explored the 75% of patients in the main trial who had no history of AFib. It looked at how many in each group developed either true clinical AFib or what the authors called micro-AFib, defined as short bursts of irregular atrial activity lasting seconds.
The subanalysis had three main findings: patients in the supplement arm had a 90% higher rate of AFib or micro-AFib than patients on placebo; EPA had the strongest effect on the association; and there was a graded risk for AFib with increasing serum EPA levels.
The authors raised the possibility that micro-AFib mediates the increased risk for clinical AFib.
Trials of low-dose EPA and DHA
First, the low-dose trials. In the ASCEND trial from 2018, more than 15,000 patients with diabetes were randomly assigned to either 1 g of omega-3 fatty acids (460 mg of EPA and 380 mg of DHA) or an olive oil placebo.
The trial was neutral. After 7.4 years, the primary endpoint of MI, stroke, transient ischemic attack, or cardiovascular death occurred in 8.9% of the supplement group versus 9.2% of the placebo arm. The incidence of AFib was higher in the omega-3 group but did not reach statistical significance (2.1% vs. 1.7% for placebo; hazard ratio, 1.23; 95% confidence interval, 0.98-1.54).
Another neutral CV trial, VITAL, specifically studied the effects of marine omega-3 pills (460 mg of EPA and 380 mg of DHA) in older adults without heart disease, cancer, or AFib. After slightly more than 5 years, AFib occurred at a similar rate in the active and placebo arms (3.7% vs. 3.4% for placebo; HR, 1.09; 95% CI, 0.96-1.24; P = .19).
Trials of very high-dose marine omega-3s
Next came trials of higher doses in higher-risk populations.
In 2020, JAMA published the STRENGTH trial, which compared 4 g/d of a carboxylic acid formulation of EPA and DHA with a corn oil placebo in more than 13,000 patients who either had established atherosclerotic CV disease (ASCVD) or were at high risk for ASCVD.
The trial was terminated early because of futility and a signal of increased AFib risk in the supplement arm.
Nearly the same number of patients in the supplement versus placebo arm experienced a primary composite endpoint of major adverse cardiac events: 12.0% versus 12.2%, respectively.
AFib was a tertiary endpoint in this trial. An increase in investigator-reported new-onset AFib was observed in the omega-3 group: 2.2% vs. 1.3% for corn oil (HR, 1.69; 95% CI, 1.29-2.21; nominal P < .001).
The REDUCE-IT trial randomly assigned more than 8,000 patients who had ASCVD or diabetes and high ASCVD risk and elevated triglyceride levels to either 4 g of icosapent ethyl daily, a concentrated form of EPA, or a mineral oil placebo.
After nearly 5 years, there was a 4.8% absolute risk reduction in the primary endpoint of CV death, MI, stroke, revascularization, or unstable angina with icosapent ethyl. An increase in atherogenic biomarkers in the mineral oil placebo arm complicated interpretation of this trial.
Hospitalization for AFib or flutter occurred in 3.1% of the active arm versus 2.1% of the mineral oil group (P = .004).
Meta-analysis of marine omega-3 supplement trials
In 2021, Baris Gencer and colleagues performed a meta-analysis of these five trials plus two others (GISSI-HF and RP), looking specifically at risk for AFib. Their final analysis included more than 81,000 patients followed for nearly 5 years.
Omega-3 fatty acid supplements were associated with a 25% increase in the risk for AFib (HR, 1.25; 95% CI, 1.07-1.46; P = .013). Exploring further, the authors noted a dose-dependent relationship: most of the increased risk occurred in trials that tested more than 1 g/d.
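To put these relative risks in clinical terms, the absolute event rates quoted above can be turned into a rough number needed to harm. This is back-of-the-envelope arithmetic on the published rates, not a re-analysis of any trial data:

```python
# Illustrative arithmetic only, using the AFib event rates quoted above.

def abs_risk_increase(active_rate: float, placebo_rate: float) -> float:
    """Absolute risk increase over the trial's follow-up, as a fraction."""
    return active_rate - placebo_rate

def number_needed_to_harm(active_rate: float, placebo_rate: float) -> int:
    """Approximate number of patients treated for one extra AFib event."""
    return round(1 / abs_risk_increase(active_rate, placebo_rate))

# STRENGTH: investigator-reported new-onset AFib, 2.2% vs. 1.3%
print(number_needed_to_harm(0.022, 0.013))  # about 111 patients

# REDUCE-IT: hospitalization for AFib or flutter, 3.1% vs. 2.1%
print(number_needed_to_harm(0.031, 0.021))  # about 100 patients
```

In other words, roughly one extra AFib event per 100-111 patients treated over the trials' follow-up, a harm rate small enough to miss in observational data but large enough to matter at the bedside.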
Summary
When faced with surprise findings, I like to think things through.
First, about plausibility. Omega-3 fatty acids clearly exert electrophysiologic effects on cardiac cells, so an increase in AFib risk is plausible. The exact underlying mechanism may be unknown, but exact mechanisms matter less than actual clinical effects (see sodium-glucose cotransporter 2 inhibitors).
What about causality? Factors supporting causality include plausibility, consistency of increased AFib risk in multiple studies, and a dose-response relationship.
I see multiple clinical implications of this observation.
The first is the power of the randomized trial to inform practice. If we relied only on observational evidence, we might have assumed that since high fish consumption in populations associated with lower rates of cardiac events, fish oil supplementation would also reduce cardiac events. Other than the outlier trial, REDUCE-IT, with its mineral oil placebo, the preponderance of the randomized controlled trial evidence does not support fish oils for the reduction of CV events.
Randomized controlled trials also exposed the AFib risk. This would have been difficult to sort out in nonrandom observational studies.
Another underappreciated lesson is the notion that drugs, including supplements, can have off-target effects.
Consider the case of statin drugs. It is widely assumed that statins reduce cardiac events by lowering low-density lipoprotein cholesterol (LDL-C). Yet, statins became a mainstay not because of LDL-C lowering but because multiple trials found that this class of drugs reduced cardiac events without increasing adverse effects.
Omega-3 fatty acids reduce triglyceride levels, but this is not enough to adopt the use of these pills. The lack of consistent reduction in CV events and the off-target signal of AFib risk argue against routine use of fish-oil pills.
I will close with uncertainty. Though there is plausibility and multiple reasons to infer causality of marine omega-3s in increasing AFib risk, the effect size remains unknown.
In an editorial accompanying the recent meta-analysis, epidemiologist Michelle Samuel, MPH, PhD, and electrophysiologist Stanley Nattel, MD, cautioned readers on a technical but important point. It concerns the matter of competing risks, such as death, in the analysis of AFib risk, meaning that patients who died may have developed AFib had they lived. They provide a detailed explanation in the open access article, but the take-home is that the exact effect size is difficult to quantify without patient-level original data.
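Samuel and Nattel's competing-risk point can be illustrated with a toy simulation: when death removes patients from observation, the observed AFib rate understates the underlying risk, and the more deaths, the bigger the understatement. All hazards below are invented for illustration; this is not a model of any trial:

```python
import random

def simulate(n, p_afib_per_year, p_death_per_year, years=5, seed=1):
    """Fraction of n patients observed to develop AFib before dying.

    Each year, a patient may first die (and leave observation), then
    may develop AFib. Death therefore 'censors' some AFib that would
    otherwise have been counted.
    """
    random.seed(seed)
    observed_afib = 0
    for _ in range(n):
        for _ in range(years):
            if random.random() < p_death_per_year:
                break  # death: patient never observed to develop AFib
            if random.random() < p_afib_per_year:
                observed_afib += 1
                break
    return observed_afib / n

# Identical true AFib hazard (1%/year); only the death rate differs.
low_mortality = simulate(100_000, p_afib_per_year=0.01, p_death_per_year=0.01)
high_mortality = simulate(100_000, p_afib_per_year=0.01, p_death_per_year=0.10)
print(low_mortality, high_mortality)  # the high-mortality cohort shows less AFib
```

The observed AFib rate falls as mortality rises even though the true AFib hazard is identical, which is why the editorialists argue the exact effect size cannot be pinned down without patient-level data.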
No matter. I find the signal of increased AFib risk an important one to use at the bedside.
Intermittent AFib has an unpredictable natural history. It often resolves as mysteriously as it arises. When patients take fish-oil supplements, I cite these studies, note the lack of CV protection, then I recommend stopping the pills.
This allows for one of the most important interventions in AFib care: time.
Dr. Mandrola is a clinical electrophysiologist with Baptist Medical Associates, Louisville, Ky. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
More expensive alcohol saves lives. Will it affect cancer?
This transcript has been edited for clarity.
I’d like to discuss an article that’s appeared recently in The Lancet. It looks at the impact of minimum unit pricing for alcohol on alcohol-related deaths and hospital admissions in Scotland, my home country. Why is that important to me as a cancer doctor? We know that alcohol underpins epidemiologically a whole range of different tumor types.
Anyway, it’s a really interesting experiment. It also looks at the impact of governments and health policy. In 2018, the Scottish government introduced a minimum unit pricing for alcohol of around $0.60 per unit of alcohol. The idea was that if you drive up the price of getting access to alcohol, that should reduce harm, deaths, and hospital admissions.
Wyper and colleagues did a rather nice controlled interrupted time-series analysis. The legislation was introduced in 2018, so they looked at our public health databases (hospital admissions, deaths, and so on) for the span from 2012 to 2018, and then for about 3 years after the introduction of the legislation. They used England as a control.
What was also interesting was that the benefits were confined to the lower socioeconomic classes. One could argue, whether intended or otherwise, that this was a health-policy intervention targeted at the lower socioeconomic classes. Perhaps, one would hope as a consequence that this would reduce the health equity gap.
We know that the differences in Scotland are remarkable. When we compare the highest with the lowest socioeconomic classes, there's a 4- to 4.5-fold difference in the likelihood of death, favoring, of course, the wealthy. The health-equity gap between rich and poor is getting wider, not narrower. Interventions of this sort make a difference.
Of course, there’s good evidence from other areas in which price control can make a difference. Tobacco is perhaps the best example of it. People have also talked about sugar or fat taxes to see whether their actions reduce levels of obesity, overeating, and other problems.
It’s a really nice study, with very compelling data, very well worked out in terms of the methodology and statistics. There are lives saved and lives prolonged.
What it doesn't do is tell us about the amount of alcohol that people were taking. It shows that if you are less well off and the price of alcohol goes up, you've got less money to spend on alcohol. Therefore, that reduction in consumption results in a reduction in the harm associated with it.
What’s really interesting is something I hadn’t realized about what’s called the alcohol-harm paradox. When you look at drinkers across the socioeconomic spectrum, including wealthy and poor drinkers, even for those who have exactly the same consumption of alcohol, there seems to be significantly more harm done to the poor than to the wealthy.
There may be some behavioral explanations for this, but they don’t explain all the difference. More work needs to be done there. It’s a really interesting story and I think a brave policy put forward by the Scottish government, which has returned rewards and is something that one would consider replicating around the world to see what other benefits might accrue from it.
I’m very interested to watch further forward over the next 2 decades to see what impact, if any, this alcohol-pricing legislation has on the incidence of cancer, looking at breast cancer, some gastrointestinal tumors, and so on, in which we know alcohol plays a part in their carcinogenesis.
Dr. Kerr is a professor of cancer medicine at the University of Oxford (England). He reported conflicts of interest with Celleron Therapeutics, Oxford Cancer Biomarkers, Afrox, GlaxoSmithKline, Bayer, Genomic Health, Merck Serono, and Roche.
A version of this article first appeared on Medscape.com.
Try a little D.I.Y.
Burnout continues to be a hot topic in medicine. It seems like either you are a victim or are concerned that you may become one. Does the solution lie in a restructuring of our health care nonsystem? Or do we need to do a better job of preparing physicians for the realities of an increasingly challenging profession?
Which side of the work/life balance needs adjusting?
Obviously, it is both, and a recent article in the Journal of the American Medical Informatics Association provides some hints about where we might begin to look for workable solutions. Targeting a single large university health care system, the investigators reviewed the answers provided by more than 600 attending physicians. Nearly half of the respondents reported symptoms of burnout. Physicians reporting a higher level of EHR (electronic health record) stress were more likely to experience burnout. Interestingly, there was no difference in the odds of burnout between physicians whose patient emails (MyChart messages) had been screened by a pool of support personnel and those who received the emails directly from patients.
While this finding about delegating physician-patient communications may come as a surprise to some of you, it supports a series of observations I have made over the last several decades. Whether we are talking about a physician's office or an insurance agency, I suspect most business consultants will suggest that things run more smoothly and efficiently if there is a well-structured system in which incoming communications from clients/patients are dealt with first by less skilled, and therefore less costly, members of the team before being passed on to the most senior personnel. It just makes sense.
But it doesn't always work that well. If the screener has neglected to ask a critical question or to anticipate a question from the ultimate decision-maker, another interaction is likely to be needed between the client and the screener, and then between the screener and the decision-maker. If the decision-maker – let's now call her a physician – had taken the call directly from the patient, it would have saved three people some time and very possibly yielded a higher-quality response, certainly a more patient-friendly one.
I can understand why you might consider my suggestion unworkable when we are talking about phone calls. It will only work if you dedicate specific call-in times for the patients, as my partner and I did back in the dark ages. However, when we are talking about a communication that is a bit less time critical (e.g., an email or a text), it becomes very workable, and I think that's what this recent paper is hinting at.
Too many of us have adopted a protectionist attitude toward our patients in which somehow it is unprofessional or certainly inefficient to communicate with them directly unless we are sitting down together in our offices. Please, not in the checkout at the grocery store. I hope this is not because, like lawyers, we feel we can’t bill for it. The patients love hearing from you directly even if you keep your responses short and to the point. Many will learn to follow suit and adopt your communication style.
You can argue that your staff is so well trained that your communication with the patients seldom becomes a time-gobbling ping-pong match of he-said/she-said/he-said. Then good for you. You are a better delegator than I am.
If this is your first foray into Do-It-Yourself medicine and it works, I encourage you to consider giving your own injections. It’s a clear-cut statement of the importance you attach to immunizations. And ... it will keep your staffing overhead down.
Finally, I can't resist adding that the authors of this paper also found that physicians sleeping less than 6 hours per night had significantly higher odds of burnout.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Burnout continues to be a hot topic in medicine. It seems like either you are a victim or are concerned that you may become one. Does the solution lie in a restructuring of our health care nonsystem? Or do we need to do a better job of preparing physicians for the realities of an increasingly challenging profession?
Which side of the work/life balance needs adjusting?
Obviously, it is both and a recent article in the Journal of the American Informatics Association provides some hints and suggests where we might begin to look for workable solutions. Targeting a single large university health care system, the investigators reviewed the answers provided by more than 600 attending physicians. Nearly half of the respondents reported symptoms of burnout. Those physicians feeling a higher level of EHR (electronic health record) stress were more likely to experiencing burnout. Interestingly, there was no difference in the odds of having burnout between the physicians who were receiving patient emails (MyChart messages) that had been screened by a pool support personnel and those physicians who were receiving the emails directly from the patients.
While this finding about delegating physician-patient communications may come as a surprise to some of you, it supports a series of observations I have made over the last several decades. Whether we are talking about a physicians’ office or an insurance agency, I suspect most business consultants will suggest that things will run more smoothly and efficiently if there is well-structured system in which incoming communications from the clients/patients are dealt with first by less skilled, and therefore less costly, members of the team before they are passed on to the most senior personnel. It just makes sense.
But, it doesn’t always work that well. If the screener has neglected to ask a critical question or anticipated a question by the ultimate decision-makers, this is likely to require another interaction between the client and then screener and then the screener with the decision-maker. If the decision-maker – let’s now call her a physician – had taken the call directly from the patient, it would have saved three people some time and very possibly ended up with a higher quality response, certainly a more patient-friendly one.
I can understand why you might consider my suggestion unworkable when we are talking about phone calls. It will only work if you dedicate specific call-in times for the patients as my partner and I did back in the dark ages. However, when we are talking about a communication a bit less time critical (e.g. an email or a text), it becomes very workable and I think that’s what this recent paper is hinting at.
Too many of us have adopted a protectionist attitude toward our patients in which somehow it is unprofessional or certainly inefficient to communicate with them directly unless we are sitting down together in our offices. Please, not in the checkout at the grocery store. I hope this is not because, like lawyers, we feel we can’t bill for it. The patients love hearing from you directly even if you keep your responses short and to the point. Many will learn to follow suit and adopt your communication style.
You can argue that your staff is so well trained that your communication with the patients seldom becomes a time-gobbling ping-pong match of he-said/she-said/he-said. Then good for you. You are a better delegator than I am.
If this is your first foray into Do-It-Yourself medicine and it works, I encourage you to consider giving your own injections. It’s a clear-cut statement of the importance you attach to immunizations. And ... it will keep your staffing overhead down.
Finally, I can’t resist adding that the authors of this paper also found that physicians sleeping less than 6 hours per night had a significantly higher odds of burnout.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Burnout continues to be a hot topic in medicine. It seems like either you are a victim or are concerned that you may become one. Does the solution lie in a restructuring of our health care nonsystem? Or do we need to do a better job of preparing physicians for the realities of an increasingly challenging profession?
Which side of the work/life balance needs adjusting?
Obviously, it is both and a recent article in the Journal of the American Informatics Association provides some hints and suggests where we might begin to look for workable solutions. Targeting a single large university health care system, the investigators reviewed the answers provided by more than 600 attending physicians. Nearly half of the respondents reported symptoms of burnout. Those physicians feeling a higher level of EHR (electronic health record) stress were more likely to experiencing burnout. Interestingly, there was no difference in the odds of having burnout between the physicians who were receiving patient emails (MyChart messages) that had been screened by a pool support personnel and those physicians who were receiving the emails directly from the patients.
While this finding about delegating physician-patient communications may come as a surprise to some of you, it supports a series of observations I have made over the last several decades. Whether we are talking about a physicians’ office or an insurance agency, I suspect most business consultants will suggest that things will run more smoothly and efficiently if there is well-structured system in which incoming communications from the clients/patients are dealt with first by less skilled, and therefore less costly, members of the team before they are passed on to the most senior personnel. It just makes sense.
But, it doesn’t always work that well. If the screener has neglected to ask a critical question or anticipated a question by the ultimate decision-makers, this is likely to require another interaction between the client and then screener and then the screener with the decision-maker. If the decision-maker – let’s now call her a physician – had taken the call directly from the patient, it would have saved three people some time and very possibly ended up with a higher quality response, certainly a more patient-friendly one.
I can understand why you might consider my suggestion unworkable when we are talking about phone calls. It will only work if you dedicate specific call-in times for the patients as my partner and I did back in the dark ages. However, when we are talking about a communication a bit less time critical (e.g. an email or a text), it becomes very workable and I think that’s what this recent paper is hinting at.
Too many of us have adopted a protectionist attitude toward our patients in which somehow it is unprofessional or certainly inefficient to communicate with them directly unless we are sitting down together in our offices. Please, not in the checkout at the grocery store. I hope this is not because, like lawyers, we feel we can’t bill for it. The patients love hearing from you directly even if you keep your responses short and to the point. Many will learn to follow suit and adopt your communication style.
You can argue that your staff is so well trained that your communication with the patients seldom becomes a time-gobbling ping-pong match of he-said/she-said/he-said. Then good for you. You are a better delegator than I am.
If this is your first foray into Do-It-Yourself medicine and it works, I encourage you to consider giving your own injections. It’s a clear-cut statement of the importance you attach to immunizations. And ... it will keep your staffing overhead down.
Finally, I can’t resist adding that the authors of this paper also found that physicians sleeping less than 6 hours per night had significantly higher odds of burnout.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Considering the true costs of clinical trials
This transcript has been edited for clarity.
We need to think about the ways that participating in clinical trials results in increased out-of-pocket costs to our patients and how that limits the ability of marginalized groups to participate. That should be a problem for us.
There are many subtle and some egregious ways that participating in clinical trials can result in increased costs. We may ask patients to come to the clinic more frequently. That may mean costs for transportation, wear and tear on your car, and gas prices. It may also mean that if you work in a job where you don’t have time off, and if you’re not at work, you don’t get paid. That’s a major hit to your take-home pay.
We also need to take a close and more honest look at our study budgets and what we consider standard of care. Now, this becomes a slippery slope, because there are clear recommendations on which we would all agree, but there are also differences of practice and differences of opinion.
How often should patients with advanced disease, who clinically are doing well, have scans to evaluate their disease status and look for subtle evidence of progression? Are laboratory studies part of the follow-up in patients in the adjuvant setting? Did you really need a urinalysis in somebody who’s going to be starting chemotherapy? Do you need an EKG if you’re going to be giving them a drug that doesn’t have potential cardiac toxicity, for which QTc prolongation is not a problem?
Those are often included in our clinical trials. In some cases, they might be paid for by the trial. In other cases, they’re billed to the insurance provider, which means they’ll contribute to deductibles, and copays will apply. It is very likely that they will cost your patient something out of pocket.
Now, this becomes important because many of our consent forms specifically say that things done only for the study are paid for by the study. How we define standard of care becomes vitally important. These issues are not often considered together in this way.
Clinical trials are how we make progress. The more patients who are able to participate in clinical trials, the better it is for all of us and all our future patients.
Kathy D. Miller, MD, is associate director of clinical research and codirector of the breast cancer program at the Melvin and Bren Simon Cancer Center at Indiana University, Indianapolis. She disclosed no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
A step forward in diabetic foot disease management
As we navigate the ever-evolving landscape of diabetic foot disease management, the goal is to create a common language of risk that is easily related from clinician to clinician to patient. Whatever language we use, though, the problem we face is vast:
- Diabetic foot ulcers affect approximately 18.6 million people worldwide and 1.6 million in the United States each year.
- They are associated with high rates of premature death, with a 5-year mortality rate of 30%. This rate is greater than 70% for those with above-foot amputations, worse than all but the most aggressive cancers.
- The direct costs of treating diabetic foot ulcers in the United States are estimated at $9 billion-$13 billion annually.
- Over 550 million people worldwide have diabetes, with 18.6 million developing foot ulcers annually. Up to 34% of those with diabetes will develop a foot ulcer.
- About 20% of those with a diabetic foot ulcer will undergo amputation, a major cause of which is infection, which affects 50% of foot ulcers.
- Up to 20% of those with a foot ulcer require hospitalization, with 15%-20% undergoing amputation.

Inequities exist in diabetes-related foot complications:
- Rates of major amputation are higher in non-Hispanic Black, Hispanic, and Native American populations, compared with non-Hispanic White populations.
- Non-Hispanic Black and Hispanic populations present with more advanced ulcers and peripheral artery disease, and are more likely to undergo amputation without a revascularization attempt.
The IWGDF, a multidisciplinary team of international experts, has recently updated its guidelines. This team, comprising endocrinologists, internal medicine physicians, physiatrists, podiatrists, and vascular surgeons from across the globe, has worked tirelessly to provide us with a comprehensive guide to managing diabetes-related foot ulcers.
The updated guidelines address five critical clinical questions, each with up to 13 important outcomes. The systematic review that underpins these guidelines identified 149 eligible studies, assessing 28 different systems. This exhaustive research has led to the development of seven key recommendations that address the clinical questions and consider the existence of different clinical settings.
One of the significant updates in the 2023 guidelines is the recommendation of SINBAD – site, ischemia, neuropathy, bacterial infection, area, and depth – as the priority wound classification system for people with diabetes and a foot ulcer. This system is particularly useful for interprofessional communication, describing each composite variable, and conducting clinical audits using the full score. However, the guidelines also recommend the use of other, more specific assessment systems for infection and peripheral artery disease from the Infectious Diseases Society of America/IWGDF when resources and an appropriate level of expertise exist.
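For readers who want to see the arithmetic, SINBAD is simply a sum of six binary items, one point for each adverse feature, giving a total of 0-6. The sketch below is an illustrative Python encoding of that structure (the item names and the example feature set are my own shorthand, not code from the guideline):

```python
# Hedged sketch of SINBAD scoring: each of the six items contributes
# 0 or 1 point, so the total ranges from 0 (best) to 6 (worst).
SINBAD_ITEMS = (
    "site",                 # 1 if midfoot/hindfoot (vs. forefoot)
    "ischemia",             # 1 if pedal blood flow is clinically reduced
    "neuropathy",           # 1 if protective sensation is lost
    "bacterial_infection",  # 1 if infection is present
    "area",                 # 1 if ulcer area is 1 cm^2 or larger
    "depth",                # 1 if ulcer reaches muscle, tendon, or deeper
)

def sinbad_score(features: dict) -> int:
    """Sum the six binary SINBAD items; absent keys count as 0."""
    return sum(1 for item in SINBAD_ITEMS if features.get(item, False))

# Hypothetical example: hindfoot ulcer with neuropathy, infection,
# and an area of at least 1 cm^2 scores 4 of 6.
example = {"site": True, "neuropathy": True,
           "bacterial_infection": True, "area": True}
print(sinbad_score(example))  # 4
```

Because the full score is a single small integer, it travels well in a referral letter or audit database, which is exactly why the guidelines favor it for interprofessional communication.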
The introduction of the Wound, Ischemia and Foot Infection (WIfI) classification system in the guidelines is also a noteworthy development. This system is crucial in assessing perfusion and the likely benefit of revascularization in a person with diabetes and a foot ulcer. By assessing the levels of wound, ischemia, and infection, we can make informed decisions about the need for vascular intervention, which can significantly affect the patient’s outcome. This can be done simply by classifying each of the three categories of wound, ischemia, or foot infection as none, mild, moderate, or severe. By simplifying the very dynamic comorbidities of tissue loss, ischemia, and infection into a usable and predictive scale, it helps us to communicate risk across disciplines. This has been found to be highly predictive of healing, amputation, and mortality.
We use WIfI every day across our system. An example might include a patient we recently treated:
A 76-year-old woman presented with a wound to her left foot. Her past medical history revealed type 2 diabetes, peripheral neuropathy, and documented peripheral artery disease with prior bilateral femoral-popliteal bypass conducted at an external facility. In addition to gangrenous changes to her fourth toe, she displayed erythema and lymphangitic streaking up her dorsal foot. While she was afebrile, her white cell count was 13,000/mcL. Radiographic examinations did not show signs of osteomyelitis. Noninvasive vascular evaluations revealed an ankle brachial index of 0.4 and a toe pressure of 10 mm Hg. An aortogram with a lower-extremity runoff arteriogram confirmed the obstruction of her left femoral-popliteal bypass.
Taking these results into account, her WIfI score was determined as: wound 2 (moderate), ischemia 3 (severe), foot infection 2 (moderate, no sepsis), translating to a clinical stage 4. This denotes a high risk for major amputation.
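The grading vocabulary itself is easy to make concrete. The sketch below is an illustrative Python encoding of the three WIfI components, each graded 0-3 (none/mild/moderate/severe), applied to the patient described above; note that the mapping from the three grades to the overall clinical stage uses the Society for Vascular Surgery lookup table, which is not reproduced here:

```python
# Hedged sketch: each WIfI component (Wound, Ischemia, foot Infection)
# is graded on the same 0-3 scale.
GRADES = {0: "none", 1: "mild", 2: "moderate", 3: "severe"}

def wifi_summary(wound: int, ischemia: int, foot_infection: int) -> str:
    """Render a compact WIfI string, e.g. for a chart note or referral."""
    for g in (wound, ischemia, foot_infection):
        if g not in GRADES:
            raise ValueError("each WIfI component is graded 0-3")
    return (f"W{wound} ({GRADES[wound]}) "
            f"I{ischemia} ({GRADES[ischemia]}) "
            f"fI{foot_infection} ({GRADES[foot_infection]})")

# The case above: wound 2, ischemia 3, foot infection 2.
print(wifi_summary(2, 3, 2))  # W2 (moderate) I3 (severe) fI2 (moderate)
```

A compact string like this is the kind of shared shorthand the guidelines are after: a vascular surgeon, podiatrist, and hospitalist can all read the same three grades the same way.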
Following a team discussion, she was taken to the operating room for an initial debridement of her infection, which consisted of a partial fourth ray resection to the level of the mid-metatarsal. Following control of the infection, she underwent a vascular assessment, which ultimately led to a femoral to distal anterior tibial bypass. Following both of these, she was discharged on a negative-pressure wound therapy device, receiving a split-thickness skin graft 4 weeks later.
The guidelines also emphasize the need for specific training, skills, and experience to ensure the accuracy of the recommended systems for characterizing foot ulcers. The person applying these systems should be appropriately trained and, according to their national or regional standards, should have the knowledge, expertise, and skills necessary to manage people with a diabetes-related foot ulcer.
As we continue to navigate the complexities of diabetes-related foot disease, these guidelines serve as a valuable compass, guiding our decisions and actions. They remind us of the importance of continuous learning, collaboration, and the application of evidence-based practice in our work.
I encourage you to delve into these guidelines. Let’s use them to improve our practice, enhance our communication, and, ultimately, provide better care for our patients.
Dr. Armstrong is professor of surgery, director of limb preservation, University of Southern California, Los Angeles. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
The bloated medical record
Until the 19th century there was nothing even resembling our current conception of the medical record. A few physicians may have kept personal notes, observations, and some sketches of their patients, primarily to be used in teaching medical students or as part of their own curiosity-driven research. However, around 1800 the Governor Council of the State of New York adopted a proposition that all home doctors should register their medical cases, again to be used as an educational tool. By 1830 these registries became annual reporting requirements that included admissions and discharges, treatment results, and expenditures. It shouldn’t surprise you to learn that a review of these entries could be linked to a doctor’s prospects for promotion.
In 1919 the American College of Surgeons attempted to standardize its members’ “treatment diaries” to look something more like our current medical records with a history, lab tests, diagnosis, treatment plan, and something akin to daily progress notes. However, as late as the 1970s, when I began primary care practice, there were very few dictates on what our office notes should contain. A few (not including myself) had been trained to use a S.O.A.P. format (Subjective, Objective, Assessment, and Plan) to organize their observations. Back then I viewed my office records as primarily a mnemonic device and only because I had a partner did I make any passing attempt at legibility.
With AI staring us in the face and threatening to expand what has become an already bloated medical record, it is worth asking what our records are actually for. Although there was a time when a doctor’s notes simply functioned as a mnemonic, few physicians today practice in isolation, and their records must now serve as a vehicle to communicate with covering physicians and consultants.
How detailed do those notes need to be? Do we need more than the hard data – the numbers, the prescriptions, the biometrics, the chronology of the patient’s procedures? As a covering physician or consultant, I’m not really that interested in your subjective observations. It’s not that I don’t trust you, but like any good physician I’m going to take my own history directly from the patient and do my own physical exam. You may have missed something and I owe the patient a fresh look and listen before I render an opinion or prescribe a management plan.
The medical record has become a detailed invoice to be attached to your bill to third-party payers. You need to prove to them that your service has some value. It’s not that the third-party payers don’t trust you ... well, maybe that’s the issue. They don’t. So you have to prove to them that you really did something. Since they weren’t in the exam room, you must document that you asked the patient questions, did a thorough exam, and spent a specified amount of time at it. Of course, that assumes there is a direct correlation between the amount of time you spent with the patient and the quality of care, which isn’t always the case. One sentence merely stating that you are a well-trained professional and did a thorough job doesn’t seem to be good enough. It works for the plumber and the electrician. But again, it’s that trust thing.
Of course there are the licensing and certification organizations that have a legitimate interest in the quality of your work. Because having an observer following you around for a day or two is impractical (which I still think is a good idea), you need to include evidence in your chart that you practice the standard of care by following accepted screening measures and treating according to standard guidelines.
And finally, while we are talking about trust, there is the whole risk management thing – maybe the most potent inflater of medical records. The lawyer-promoted myth that “if you didn’t document it, it didn’t happen” encourages doctors to use voluminous verbiage merely to give their lawyer ammunition when they find themselves in a he-said/she-said situation.
Of course all of this needs to be carefully worded because the patient now has and deserves the right to review his or her medical records. And this might be the only good news. AI can be taught to create a medical record that is complete and more easily read and digested by the patient. This could make the records even more voluminous and as more patients become familiar with their own health records they may begin to demand that they become more concise and actually reflect what went on in the visit.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Until the 19th century there was nothing even resembling our current conception of the medical record. A few physicians may have kept personal notes, observations, and some sketches of their patients primarily to be used in teaching medical students or as part of their own curiosity-driven research. However, around 1800 the Governor Council of the State of New York adopted a proposition that all home doctors should register their medical cases again to be used as an educational tool. By 1830 these registries became annual reporting requirements that included admissions and discharges, treatment results, and expenditures. It shouldn’t surprise you learn that a review of these entries could be linked to a doctor’s prospects for promotion.
In 1919 the American College of Surgeons attempted to standardize its members’ “treatment diaries” to look something more like our current medical records with a history, lab tests, diagnosis, treatment plan, and something akin to daily progress notes. However, as late as the 1970s, when I began primary care practice, there were very few dictates on what our office notes should contain. A few (not including myself) had been trained to use a S.O.A.P. format (Subjective, Objective, Assessment, and Plan) to organize their observations. Back then I viewed my office records as primarily a mnemonic device and only because I had a partner did I make any passing attempt at legibility.
With AI staring us in the face and threatening to expand what has become an already bloated medical record,
Although there was a time when a doctor’s notes simply functioned as a mnemonic, few physicians today practice in isolation and their records must now serve as a vehicle to communicate with covering physicians and consultants.
How detailed do those notes need to be? Do we need more than the hard data – the numbers, the prescriptions, the biometrics, the chronology of the patient’s procedures? As a covering physician or consultant, I’m not really that interested in your subjective observations. It’s not that I don’t trust you, but like any good physician I’m going to take my own history directly from the patient and do my own physical exam. You may have missed something and I owe the patient a fresh look and listen before I render an opinion or prescribe a management plan.
The best CRC screening test is still this one
I’m Dr. Kenny Lin. I am a family physician and associate director of the Lancaster General Hospital Family Medicine Residency, and I blog at Common Sense Family Doctor.
I’m 47 years old. Two years ago, when the U.S. Preventive Services Task Force (USPSTF) followed the American Cancer Society and lowered the starting age for colorectal cancer (CRC) screening from 50 to 45, my family physician brought up screening options at a health maintenance visit. Although I had expressed some skepticism about this change when the ACS updated its screening guideline in 2018, I generally follow the USPSTF recommendations in my own clinical practice, so I dutifully selected a test that, fortunately, came out negative.
Not everyone in the primary care community, however, is on board with screening average-risk adults in their late 40s for colorectal cancer. The American Academy of Family Physicians (AAFP) published a notable dissent, arguing that the evidence from modeling studies cited by the USPSTF to support lowering the starting age was insufficient. The AAFP also expressed concern that devoting screening resources to younger adults could come at the expense of improving screening rates in older adults who are at higher risk for CRC and increase health care costs without corresponding benefit.
Now, the American College of Physicians has joined the AAFP by releasing an updated guidance statement for CRC screening that discourages screening asymptomatic, average-risk adults between the ages of 45 and 49. In addition to the uncertainty surrounding benefits of screening adults in this age range, the ACP pointed out that starting screening at age 45, compared with age 50, would increase the number of colonoscopies and colonoscopy complications. My colleagues and I recently published a systematic review estimating that for every 10,000 screening colonoscopies performed, 8 people suffer a bowel perforation and 16 to 36 have severe bleeding requiring hospitalization. One in 3 patients undergoing colonoscopies report minor adverse events such as abdominal pain, bloating, and abdominal discomfort in the first 2 weeks following the procedure.
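The per-procedure harm rates quoted from the systematic review scale linearly with the number of colonoscopies performed, which is the crux of the ACP's volume argument. A minimal back-of-the-envelope sketch (my own arithmetic, not a calculation from the review itself) makes that concrete:

```python
# Scale the quoted per-10,000-colonoscopy harm rates (8 perforations,
# 16-36 severe bleeds requiring hospitalization, ~1 in 3 with minor
# adverse events) to an arbitrary cohort size.

def expected_harms(n_colonoscopies: int) -> dict:
    """Expected complication counts for a cohort of screening colonoscopies."""
    per_10k = {
        "perforations": 8,
        "severe_bleeding_low": 16,
        "severe_bleeding_high": 36,
    }
    harms = {k: v * n_colonoscopies / 10_000 for k, v in per_10k.items()}
    harms["minor_adverse_events"] = n_colonoscopies / 3  # roughly 1 in 3
    return harms

# Example: the extra colonoscopies generated by a hypothetical 50,000-person
# expansion of screening to ages 45-49.
print(expected_harms(50_000))
```

The point is simply that lowering the starting age adds procedures, and every added procedure carries these fixed per-procedure risks.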
Other aspects of the ACP guidance differ from other colorectal cancer screening guidelines. Unlike the USPSTF, which made no distinctions between various recommended screening tests, the ACP preferentially endorsed fecal immunochemical or high-sensitivity fecal occult blood testing every 2 years, colonoscopy every 10 years, or flexible sigmoidoscopy every 10 years plus a fecal immunochemical test every 2 years. That leaves out stool DNA testing, which my patients increasingly request because they have seen television or online advertisements, and newer blood tests that detect methylation sequences in circulating tumor DNA.
Perhaps most controversial is the ACP’s suggestion that it is probably reasonable for some adults to start screening later than age 50 or undergo screening at longer intervals than currently recommended (for example, colonoscopy every 15 years). Recent data support extending the interval to repeat screening colonoscopy in selected populations; a large cross-sectional study found a low prevalence of advanced adenomas and colorectal cancers in colonoscopies performed 10 or more years after an initial negative colonoscopy, particularly in women and younger patients without gastrointestinal symptoms. A prominent BMJ guideline suggests that patients need not be screened until their estimated 15-year CRC risk is greater than 3% (which most people do not reach until their 60s) and then only need a single sigmoidoscopy or colonoscopy.
Despite some departures from other guidelines, it’s worth emphasizing that the ACP guideline agrees that screening for CRC is generally worthwhile between the ages of 50 and 75 years. On that front, we in primary care have more work to do; the Centers for Disease Control and Prevention estimates that 28% of American adults older than 50 are not up-to-date on CRC screening. And despite some recent debate about the relative benefits and harms of screening colonoscopy, compared with less invasive fecal tests, gastroenterologists and family physicians can agree that the best screening test is the test that gets done.
A version of this article first appeared on Medscape.com.
Unveiling the potential of prediction models in obstetrics
At the dawn of artificial intelligence's potential to inform clinical practice, understanding the intent and interpretation of prediction tools is vital. In medicine, informed decision-making promotes patient autonomy and can lead to improved patient satisfaction and engagement in their own care.
In obstetric clinical practice, prediction tools have been created to assess risk of primary cesarean delivery in gestational diabetes,1 cesarean delivery in hypertensive disorders of pregnancy,2 and failed induction of labor in nulliparous patients with an unfavorable cervix.3 By assessing a patient’s risk profile, clinicians can identify high-risk individuals who may require closer monitoring, early interventions, or specialized care. This allows for more timely interventions to optimize maternal and fetal health outcomes.
Other prediction tools are created to better elucidate to patients their individual risk of an outcome that may be modifiable, aiding physician counseling on mitigating factors to improve overall results. A relevant example is the American Diabetes Association’s risk of type 2 diabetes calculator used for counseling patients on risk reduction. This model includes both preexisting (ethnicity, family history, age, sex assigned at birth) and modifiable risk factors (body mass index, hypertension, physical activity) to predict risk of type 2 diabetes and is widely used in clinical practice to encourage integration of lifestyle changes to decrease risk.4 This model highlights the utility of prediction tools in counseling, providing quantitative data to clinicians to discuss a patient’s individual risk and how to mitigate that risk.
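Calculators of this kind typically combine fixed and modifiable factors into a single probability, often via a logistic model. The sketch below illustrates that structure only; the coefficients are invented for illustration and are not the ADA model's actual weights:

```python
import math

# Hypothetical logistic-style risk score combining fixed factors
# (age, family history) and modifiable factors (BMI, hypertension,
# physical activity), in the spirit of calculators like the ADA's.
# All coefficients are made up for illustration.

def predicted_risk(age: int, family_history: bool, bmi: float,
                   hypertension: bool, active: bool) -> float:
    logit = (-5.0
             + 0.04 * age
             + 0.7 * family_history
             + 0.09 * max(bmi - 25, 0)   # excess BMI is modifiable
             + 0.5 * hypertension
             - 0.4 * active)             # activity lowers the estimate
    return 1 / (1 + math.exp(-logit))    # logistic (sigmoid) transform

# Counseling use: show the patient how a modifiable change shifts risk.
before = predicted_risk(55, True, 32, True, active=False)
after = predicted_risk(55, True, 27, True, active=True)
print(f"{before:.1%} -> {after:.1%}")
```

Presenting the "before" and "after" numbers side by side is what makes such a tool useful for counseling on risk reduction rather than merely labeling a patient high risk.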
While predictive models clearly have many advantages and the potential to improve personalized medicine, concerns have been raised that their interpretation and application can sometimes have unintended consequences: the complexity of these models can lead to variation in understanding among clinicians that impacts decision-making. Different clinicians may assign different levels of importance to the predicted risks, resulting in differences in treatment plans and interventions. This variability can lead to disparities in care and outcomes, as patients with similar risk profiles may receive different management approaches depending on the interpreting clinician.
Providers may either overly rely on prediction models or completely disregard them, depending on their level of trust or skepticism. Overreliance on prediction models may lead to the neglect of important clinical information or intuition, while disregarding the models may result in missed opportunities for early intervention or appropriate risk stratification. Achieving a balance between clinical judgment and the use of prediction models is crucial for optimal decision-making.
An example of how misinterpretation of the role of prediction tools in patient counseling can have far-reaching consequences is the vaginal birth after cesarean (VBAC) calculator. By treating non-White race as a factor that lowered the predicted chance of successful VBAC, the calculator naturalized racial differences and likely contributed to cesarean overuse in Black pregnant people. Although the authors of the study that created the VBAC calculator intended it to be used as an adjunct to counseling, institutions and providers used low calculator scores to discourage or prohibit pregnant people from attempting a trial of labor after cesarean (TOLAC). This highlighted the importance of contextualizing the intent of prediction models within the broader clinical setting and individual patient circumstances and preferences.
This gap between intent and interpretation and subsequent application is influenced by individual clinician experience, training, personal biases, and subjective judgment. These subjective elements can introduce inconsistencies and variability in the utilization of prediction tools, leading to potential discrepancies in patient care. Inadequate understanding of prediction models and their statistical concepts can contribute to misinterpretation. It is this bias that prevents prediction models from serving their true purpose: to inform clinical decision-making, improve patient outcomes, and optimize resource allocation.
Clinicians may struggle with concepts such as predictive accuracy, overfitting, calibration, and external validation. Educational initiatives and enhanced training in statistical literacy can empower clinicians to better comprehend and apply prediction models in their practice. Researchers should make it clear that models should not be used in isolation, but rather integrated with clinical expertise and patient preferences. Understanding the limitations of prediction models and incorporating additional clinical information is essential.
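Calibration, one of the concepts named above, is easy to demonstrate: group patients by predicted risk and compare the mean prediction in each group with the observed event rate. A minimal sketch with invented toy data:

```python
# Toy calibration check: sort (prediction, outcome) pairs, split into
# equal-size bins, and compare mean predicted risk with the observed
# event frequency in each bin. A well-calibrated model has the two
# columns tracking each other. Data below are invented for illustration.

def calibration_bins(preds, outcomes, n_bins=4):
    pairs = sorted(zip(preds, outcomes))
    size = len(pairs) // n_bins
    rows = []
    for i in range(n_bins):
        chunk = pairs[i * size:(i + 1) * size]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        obs_rate = sum(y for _, y in chunk) / len(chunk)
        rows.append((mean_pred, obs_rate))
    return rows

preds = [0.1, 0.15, 0.2, 0.25, 0.4, 0.45, 0.6, 0.7]
outcomes = [0, 0, 0, 1, 0, 1, 1, 1]
for mean_pred, obs_rate in calibration_bins(preds, outcomes):
    print(f"predicted {mean_pred:.2f} vs observed {obs_rate:.2f}")
```

A model can discriminate well (rank high-risk patients above low-risk ones) while still being poorly calibrated (systematically over- or under-stating absolute risk), which is exactly the distinction clinicians need when quoting a number to a patient.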
Prediction models in obstetrics should undergo continuous evaluation and improvement to enhance their reliability and applicability. Regular updates, external validation, and recalibration are necessary to account for evolving clinical practices, changes in patient populations, and emerging evidence. Engaging clinicians in the evaluation process can foster ownership and promote a sense of trust in the models.
As machine learning and artificial intelligence improve the accuracy of prediction models, there is potential to revolutionize obstetric care by enabling more accurate individualized risk assessment and decision-making. Machine learning has the potential to significantly enhance prediction models in obstetrics by leveraging complex algorithms and advanced computational techniques. However, the unpredictable nature of clinician interpretation poses challenges to the effective utilization of these models.
By emphasizing communication, collaboration, education, and continuous evaluation, we can bridge the gap between prediction models and clinician interpretation that optimizes their use. This concerted effort will ultimately lead to improved patient care, enhanced clinical outcomes, and a more harmonious integration of these tools into obstetric practice.
Dr. Ramos is assistant professor of maternal fetal medicine and associate principal investigator at the Mother Infant Research Institute, Tufts University and Tufts Medical Center, Boston.
References
1. Ramos SZ et al. Predicting primary cesarean delivery in pregnancies complicated by gestational diabetes mellitus. Am J Obstet Gynecol. 2023 Jun 7;S0002-9378(23)00371-X. doi: 10.1016/j.ajog.2023.06.002.
2. Beninati MJ et al. Prediction model for vaginal birth after induction of labor in women with hypertensive disorders of pregnancy. Obstet Gynecol. 2020 Aug;136(2):402-410. doi: 10.1097/AOG.0000000000003938.
3. Levine LD et al. A validated calculator to estimate risk of cesarean after an induction of labor with an unfavorable cervix. Am J Obstet Gynecol. 2018 Feb;218(2):254.e1-254.e7. doi: 10.1016/j.ajog.2017.11.603.
4. American Diabetes Association. Our 60-Second Type 2 Diabetes Risk Test.
As machine learning and artificial intelligence improve the accuracy of prediction models, there is potential to revolutionize obstetric care by enabling more accurate individualized risk assessment and decision-making. Machine learning has the potential to significantly enhance prediction models in obstetrics by leveraging complex algorithms and advanced computational techniques. However, the unpredictable nature of clinician interpretation poses challenges to the effective utilization of these models.
By emphasizing communication, collaboration, education, and continuous evaluation, we can bridge the gap between prediction models and clinician interpretation that optimizes their use. This concerted effort will ultimately lead to improved patient care, enhanced clinical outcomes, and a more harmonious integration of these tools into obstetric practice.
Dr. Ramos is assistant professor of maternal fetal medicine and associate principal investigator at the Mother Infant Research Institute, Tufts University and Tufts Medical Center, Boston.
References
1. Ramos SZ et al. Predicting primary cesarean delivery in pregnancies complicated by gestational diabetes mellitus. Am J Obstet Gynecol. 2023 Jun 7;S0002-9378(23)00371-X. doi: 10.1016/j.ajog.2023.06.002.
2. Beninati MJ et al. Prediction model for vaginal birth after induction of labor in women with hypertensive disorders of pregnancy. Obstet Gynecol. 2020 Aug;136(2):402-410. doi: 10.1097/AOG.0000000000003938.
3. Levine LD et al. A validated calculator to estimate risk of cesarean after an induction of labor with an unfavorable cervix. Am J Obstet Gynecol. 2018 Feb;218(2):254.e1-254.e7. doi: 10.1016/j.ajog.2017.11.603.
4. American Diabetes Association. Our 60-Second Type 2 Diabetes Risk Test.
As artificial intelligence begins to inform clinical practice, understanding both the intent and the interpretation of prediction tools is vital. In medicine, informed decision-making promotes patient autonomy and can lead to improved patient satisfaction and engagement in their own care.
In obstetric clinical practice, prediction tools have been created to assess risk of primary cesarean delivery in gestational diabetes,1 cesarean delivery in hypertensive disorders of pregnancy,2 and failed induction of labor in nulliparous patients with an unfavorable cervix.3 By assessing a patient’s risk profile, clinicians can identify high-risk individuals who may require closer monitoring, early interventions, or specialized care. This allows for more timely interventions to optimize maternal and fetal health outcomes.
Other prediction tools are designed to convey to patients their individual risk of a potentially modifiable outcome, aiding physicians in counseling on the factors that can mitigate that risk. A relevant example is the American Diabetes Association’s type 2 diabetes risk calculator, used for counseling patients on risk reduction. This model includes both preexisting risk factors (ethnicity, family history, age, sex assigned at birth) and modifiable ones (body mass index, hypertension, physical activity) to predict risk of type 2 diabetes, and it is widely used in clinical practice to encourage lifestyle changes that decrease risk.4 It highlights the utility of prediction tools in counseling: they give clinicians quantitative data with which to discuss a patient’s individual risk and how to mitigate it.
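Calculators of this kind are typically simple point-based scores: each risk factor contributes points, and the total is compared against a threshold. The sketch below illustrates that structure only; the point values and threshold here are hypothetical, not the ADA’s actual scoring, which is defined by the ADA test itself.

```python
# Illustrative point-based risk score in the style of the ADA
# 60-second Type 2 Diabetes Risk Test. All weights below are
# HYPOTHETICAL -- the real test's scoring belongs to the ADA.

def risk_points(age, sex, family_history, hypertension, active, bmi):
    """Sum points from preexisting and modifiable risk factors."""
    points = 0
    # Preexisting factors (hypothetical weights)
    if age >= 60:
        points += 3
    elif age >= 50:
        points += 2
    elif age >= 40:
        points += 1
    if sex == "male":
        points += 1
    if family_history:
        points += 1
    # Modifiable factors (hypothetical weights)
    if hypertension:
        points += 1
    if not active:
        points += 1
    if bmi >= 30:
        points += 3
    elif bmi >= 25:
        points += 2
    return points

def high_risk(points, threshold=5):
    """Flag elevated risk; the threshold is a parameter because the
    weights above are illustrative, not validated."""
    return points >= threshold
```

Because the modifiable factors (activity, BMI, hypertension) enter the score directly, the same tool that stratifies risk also shows a patient exactly which behaviors would lower their total, which is what makes it useful in counseling.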
While predictive models clearly have many advantages and the potential to improve personalized medicine, concerns have been raised that their interpretation and application can have unintended consequences: the complexity of these models can produce variation in understanding among clinicians, which in turn affects decision-making. Different clinicians may assign different levels of importance to the predicted risks, resulting in differences in treatment plans and interventions. This variability can lead to disparities in care and outcomes, as patients with similar risk profiles may receive different management approaches depending on the interpreting clinician.
Providers may either overly rely on prediction models or completely disregard them, depending on their level of trust or skepticism. Overreliance on prediction models may lead to the neglect of important clinical information or intuition, while disregarding the models may result in missed opportunities for early intervention or appropriate risk stratification. Achieving a balance between clinical judgment and the use of prediction models is crucial for optimal decision-making.
An example of how misinterpretation of the role of prediction tools in patient counseling can have far-reaching consequences is the vaginal birth after cesarean (VBAC) calculator, in which the inclusion of race and ethnicity naturalized racial differences and likely contributed to cesarean overuse in Black pregnant people, because non-White race was associated with a lower predicted chance of successful VBAC. Although the authors of the study that created the VBAC calculator intended it as an adjunct to counseling, institutions and providers used low calculator scores to discourage or prohibit pregnant people from attempting a trial of labor after cesarean (TOLAC). This highlights the importance of contextualizing the intent of prediction models within the broader clinical setting and the individual patient’s circumstances and preferences.
This gap between intent and interpretation and subsequent application is influenced by individual clinician experience, training, personal biases, and subjective judgment. These subjective elements can introduce inconsistencies and variability in the utilization of prediction tools, leading to potential discrepancies in patient care. Inadequate understanding of prediction models and their statistical concepts can contribute to misinterpretation. It is this bias that prevents prediction models from serving their true purpose: to inform clinical decision-making, improve patient outcomes, and optimize resource allocation.
Clinicians may struggle with concepts such as predictive accuracy, overfitting, calibration, and external validation. Educational initiatives and enhanced training in statistical literacy can empower clinicians to better comprehend and apply prediction models in their practice. Researchers should make it clear that models should not be used in isolation, but rather integrated with clinical expertise and patient preferences. Understanding the limitations of prediction models and incorporating additional clinical information is essential.
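Of the statistical concepts listed above, calibration is perhaps the easiest to make concrete: a well-calibrated model's predicted risks should match observed event rates. A minimal sketch of a binned calibration check, using synthetic data purely for illustration, might look like this:

```python
# Minimal binned calibration check: sort patients by predicted risk,
# split into equal-sized bins, and compare the mean predicted risk in
# each bin to the observed event rate. Large gaps between the two
# signal miscalibration (e.g., from overfitting or population drift).

def calibration_bins(preds, outcomes, n_bins=4):
    """Return a list of (mean predicted risk, observed rate) per bin."""
    pairs = sorted(zip(preds, outcomes))
    size = len(pairs) // n_bins
    bins = []
    for i in range(n_bins):
        # Last bin absorbs any remainder so no patient is dropped.
        chunk = pairs[i * size:(i + 1) * size] if i < n_bins - 1 else pairs[i * size:]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        obs_rate = sum(y for _, y in chunk) / len(chunk)
        bins.append((mean_pred, obs_rate))
    return bins
```

Running such a check on an external cohort, rather than on the data used to build the model, is the essence of external validation: a model that looks well calibrated on its development data may drift badly elsewhere.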
Prediction models in obstetrics should undergo continuous evaluation and improvement to enhance their reliability and applicability. Regular updates, external validation, and recalibration are necessary to account for evolving clinical practices, changes in patient populations, and emerging evidence. Engaging clinicians in the evaluation process can foster ownership and promote a sense of trust in the models.
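Recalibration, mentioned above, can be as simple as shifting predictions on the logit scale so that average predicted risk approximately matches the event rate observed in a new population ("calibration-in-the-large"). The sketch below shows that simplest form only; a full recalibration would also refit the slope or the model coefficients.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

def recalibrate_intercept(preds, observed_rate):
    """Shift predictions on the logit scale so mean predicted risk
    approximately matches the observed event rate in a new population.
    This is the simplest recalibration (intercept-only); slope
    recalibration and coefficient updating are fuller alternatives."""
    mean_pred = sum(preds) / len(preds)
    shift = logit(observed_rate) - logit(mean_pred)
    return [expit(logit(p) + shift) for p in preds]
```

For example, a model that systematically predicts 20% risk in a population whose actual event rate is 50% would have all its predictions shifted upward, preserving the ranking of patients while correcting the overall level.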
As machine learning and artificial intelligence improve the accuracy of prediction models, there is potential to revolutionize obstetric care by enabling more accurate individualized risk assessment and decision-making. Machine learning has the potential to significantly enhance prediction models in obstetrics by leveraging complex algorithms and advanced computational techniques. However, the unpredictable nature of clinician interpretation poses challenges to the effective utilization of these models.
By emphasizing communication, collaboration, education, and continuous evaluation, we can bridge the gap between prediction models and clinician interpretation and optimize their use. This concerted effort will ultimately lead to improved patient care, enhanced clinical outcomes, and a more harmonious integration of these tools into obstetric practice.
Dr. Ramos is assistant professor of maternal fetal medicine and associate principal investigator at the Mother Infant Research Institute, Tufts University and Tufts Medical Center, Boston.
References
1. Ramos SZ et al. Predicting primary cesarean delivery in pregnancies complicated by gestational diabetes mellitus. Am J Obstet Gynecol. 2023 Jun 7;S0002-9378(23)00371-X. doi: 10.1016/j.ajog.2023.06.002.
2. Beninati MJ et al. Prediction model for vaginal birth after induction of labor in women with hypertensive disorders of pregnancy. Obstet Gynecol. 2020 Aug;136(2):402-410. doi: 10.1097/AOG.0000000000003938.
3. Levine LD et al. A validated calculator to estimate risk of cesarean after an induction of labor with an unfavorable cervix. Am J Obstet Gynecol. 2018 Feb;218(2):254.e1-254.e7. doi: 10.1016/j.ajog.2017.11.603.
4. American Diabetes Association. Our 60-Second Type 2 Diabetes Risk Test.