It Would Be Nice if Olive Oil Really Did Prevent Dementia

This transcript has been edited for clarity.

As you all know by now, I’m always looking out for lifestyle changes that are both pleasurable and healthy. They are hard to find, especially when it comes to diet. My kids complain about this all the time: “When you say ‘healthy food,’ you just mean yucky food.” And yes, French fries are amazing, and no, we can’t have them three times a day.

So, when I saw an article claiming that olive oil reduces the risk for dementia, I was interested. I love olive oil; I cook with it all the time. But as is always the case in the world of nutritional epidemiology, we need to be careful. There are a lot of reasons to doubt the results of this study — and one reason to believe it’s true.

The study I’m talking about is “Consumption of Olive Oil and Diet Quality and Risk of Dementia-Related Death,” appearing in JAMA Network Open and following a well-trod formula in the nutritional epidemiology space.

Nearly 100,000 participants, all healthcare workers, filled out a food frequency questionnaire every 4 years with 130 questions touching on all aspects of diet: How often do you eat bananas, bacon, olive oil? Participants were followed for more than 20 years, and if they died, the cause of death was flagged as being dementia-related or not. Over that time frame there were around 38,000 deaths, of which 4751 were due to dementia.

The rest is just statistics. The authors show that those who reported consuming more olive oil were less likely to die from dementia — about 50% less likely, if you compare those who reported eating more than 7 grams of olive oil a day with those who reported eating none.
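
The “statistics” here are typically a survival model: time to dementia-related death, with olive oil intake as the exposure. As a minimal, hypothetical sketch of that kind of analysis (simulated data, not the study’s, using Python’s lifelines library), estimating a hazard ratio of about 0.5 for high intake looks like this:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 100_000  # roughly the size of the cohort described above

# Simulated exposure: >7 g/day of olive oil vs little or none.
# All numbers are illustrative only, not the study's data.
high_intake = rng.binomial(1, 0.25, size=n)

# Build in a true hazard ratio of ~0.5 for dementia-related death.
base_rate = 0.003  # events per person-year in the low-intake group
rate = base_rate * np.where(high_intake == 1, 0.5, 1.0)
time_to_death = rng.exponential(1 / rate)

follow_up = 25.0  # years
event = (time_to_death <= follow_up).astype(int)
duration = np.minimum(time_to_death, follow_up)

df = pd.DataFrame({"duration": duration, "event": event,
                   "high_intake": high_intake})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["exp(coef)"]])  # exp(coef) near 0.5 = the hazard ratio
```

The reported “about 50% less likely” corresponds to a hazard ratio of roughly 0.5 on that exp(coef) scale; the rest of this piece is about whether that number means what it appears to mean.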
 

Is It What You Eat, or What You Don’t Eat?

And we could stop there if we wanted to; I’m sure big olive oil would be happy with that. Is there such a thing as “big olive oil”? But no, we need to dig deeper here because this study has the same problems as all nutritional epidemiology studies. Number one, no one is sitting around drinking small cups of olive oil. They consume it with other foods. And it was clear from the food frequency questionnaire that people who consumed more olive oil also consumed less red meat, more fruits and vegetables, more whole grains, more butter, and less margarine. And those are just the findings reported in the paper. I suspect that people who eat more olive oil also eat more tomatoes, for example, though data this granular aren’t shown. So, it can be really hard, in studies like this, to know for sure that it’s actually the olive oil that is helpful rather than some other constituent in the diet.

The flip side of that coin presents another issue. The food you eat is also a marker of the food you don’t eat. People who ate olive oil consumed less margarine, for example. At the time of this study, margarine was still adulterated with trans-fats, which a pretty solid evidence base suggests are really bad for your vascular system. So perhaps it’s not that olive oil is particularly good for you but that something else is bad for you. In other words, simply adding olive oil to your diet without changing anything else may not do anything.

The other major problem with studies of this sort is that people don’t consume food at random. The type of person who eats a lot of olive oil is simply different from the type of person who doesn’t. For one thing, olive oil is expensive. A 25-ounce bottle of olive oil is on sale at my local supermarket right now for $11.00. A similar-sized bottle of vegetable oil goes for $4.00.

Isn’t it interesting that food that costs more money tends to be associated with better health outcomes? (I’m looking at you, red wine.) Perhaps it’s not the food; perhaps it’s the money. We aren’t provided data on household income in this study, but we can see that the heavy olive oil users were less likely to be current smokers and they got more physical activity.

Now, the authors are aware of these limitations and do their best to account for them. In multivariable models, they adjust for other stuff in the diet, and even for income (sort of; they use census tract as a proxy for income, which is really a broad brush), and still find a significant though weakened association showing a protective effect of olive oil on dementia-related death. But still — adjustment is never perfect, and the small effect size here could definitely be due to residual confounding.
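
To see why residual confounding is such a worry, here is a small, hypothetical simulation (Python with statsmodels; none of these numbers come from the paper). Income drives both olive oil use and the outcome, but the analyst can only adjust for a noisy proxy of income, the analogue of adjusting for census tract instead of household income:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical confounder: income raises olive oil use AND lowers risk.
income = rng.normal(size=n)
olive_oil = (income + rng.normal(size=n) > 0).astype(int)
risk = 1 / (1 + np.exp(-(-2.0 - 1.0 * income)))  # olive oil itself does nothing
event = rng.binomial(1, risk)

# The analyst only sees a coarse proxy for income (e.g., census tract).
income_proxy = income + rng.normal(scale=2.0, size=n)

df = pd.DataFrame({"event": event, "olive_oil": olive_oil,
                   "income_proxy": income_proxy})

crude = smf.logit("event ~ olive_oil", data=df).fit(disp=0)
adjusted = smf.logit("event ~ olive_oil + income_proxy", data=df).fit(disp=0)

# Olive oil has no true effect, yet both models show a "protective" OR < 1;
# adjusting for the noisy proxy attenuates, but does not remove, the bias.
print("crude OR:   ", np.exp(crude.params["olive_oil"]))
print("adjusted OR:", np.exp(adjusted.params["olive_oil"]))
```

That is the pattern reported here: a weakened but still significant association after adjustment, which is why a result like this can’t be read as causal.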

Evidence More Convincing

Now, I did tell you that there is one reason to believe that this study is true, but it’s not really from this study.

It’s from the PREDIMED randomized trial.

This is nutritional epidemiology I can get behind. Published in 2018, investigators in Spain randomized around 7500 participants to receive a liter of olive oil once a week vs mixed nuts, vs small nonfood gifts, the idea here being that if you have olive oil around, you’ll use it more. And people who were randomly assigned to get the olive oil had a 30% lower rate of cardiovascular events. A secondary analysis of that study found that the rate of development of mild cognitive impairment was 65% lower in those who were randomly assigned to olive oil. That’s an impressive result.

So, there might be something to this olive oil thing, but I’m not quite ready to add it to my “pleasurable things that are still good for you” list just yet. Though it does make me wonder: Can we make French fries in the stuff?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


‘Green Whistle’ Provides Pain Relief -- But Not in the US

This discussion was recorded on March 29, 2024. The transcript has been edited for clarity.

Robert D. Glatter, MD: Joining me today to discuss the use of methoxyflurane (Penthrox), an inhaled nonopioid analgesic for the relief of acute pain, is Dr. William Kenneth (Ken) Milne, an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM).

Also joining me is Dr. Sergey Motov, an emergency physician and research director at Maimonides Medical Center in Brooklyn, New York, and an expert in pain management. I want to welcome both of you and thank you for joining me.
 

RAMPED Trial: Evaluating the Efficacy of Methoxyflurane

Dr. Glatter: Ken, your recent post on Twitter [now X] regarding the utility of Penthrox in the RAMPED trial really caught my attention. While the trial was from 2021, it really is relevant regarding the prehospital management of pain in the practice of emergency medicine, and certainly in-hospital practice. I was hoping you could review the study design but also get into the rationale behind the use of this novel agent.

William Kenneth (Ken) Milne, MD, MSc: Sure. I’d be happy to kick this episode off with talking about a study that was published in 2020 in Academic Emergency Medicine. It was an Australian study by Brichko et al., and they were doing a randomized controlled trial looking at methoxyflurane vs standard care.

They selected out a population of adults, which they defined as 18-75 years of age. They were in the prehospital setting and they had a pain score of greater than 8. They gave the participants methoxyflurane, which is also called the “green whistle.” They had the subjects take that for their prehospital pain, and they compared that with whatever your standard analgesic in the prehospital setting would be.

Their primary outcome was how many patients had at least 50% reduction in their pain score within 30 minutes. They recruited about 120 people, and they found that there was no statistical difference in the primary outcome between methoxyflurane and standard care. Again, that primary outcome was a reduction in pain score by greater than 50% at 30 minutes, and there wasn’t a statistical difference between the two.
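
For intuition, that primary outcome is just a comparison of two proportions. A hypothetical sketch (invented counts for a roughly 120-patient trial, not the RAMPED data) using statsmodels:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts; responders = patients with >=50% reduction
# in pain score within 30 minutes. Not the trial's actual numbers.
responders = np.array([22, 18])  # [methoxyflurane, standard care]
enrolled = np.array([60, 60])

stat, p_value = proportions_ztest(count=responders, nobs=enrolled)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05, as here, is what "no statistical difference
# in the primary outcome" means for a trial of this size.
```

With only about 60 patients per arm, a trial like this can rule out only fairly large differences, which is one reason a negative primary outcome is not necessarily the last word.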

There are obviously limits to any study, and it was a convenience sample. This was an unmasked trial, so people knew if they were getting this green whistle, which is popular in Australia. People would be familiar with this device, and they didn’t compare it with a sham or placebo group.

Pharmacology of Penthrox: Its Role and Mechanism of Action

Dr. Glatter: The primary outcome wasn’t met, but certainly secondary outcomes were. There was, again, a relatively small number of patients in this trial. That said, there was significant pain relief. I think there are issues with the trial, as there are limitations with any trial.

Getting to the pharmacology of Penthrox, can you describe this inhaled anesthetic and how we use it, specifically its role at subanesthetic doses?

Sergey M. Motov, MD: Methoxyflurane is embedded in the green whistle package, and that whole contraption is called Penthrox. It’s an inhaled volatile fluorinated hydrocarbon anesthetic that was predominantly used, I’d say 40, 50 years ago, for general anesthesia and slowly but surely fell out of favor due to the fact that, when used for prolonged duration or in supratherapeutic doses, there were cases of severe or even fatal nephrotoxicity and hepatotoxicity.

In the late ’70s and early ’80s, all the fluranes came on board that are slightly different as general anesthetics, and methoxyflurane started slowly falling out of favor. Because of this decline in use, and then a number of additional cases of nephrotoxicity and hepatotoxicity, the US Food and Drug Administration (FDA) made the decision to pull the drug off the market in 2005. The FDA successfully accomplished its mission and since then has pretty much banned the use of inhaled methoxyflurane in any shape, form, or color in the United States.

Going back to the green whistle, it has been used in Australia probably for about 50-60 years, and has been used in Europe for probably 10-20 years. Ken can attest that it has been used in Canada for at least a decade and the track record is phenomenal.

We are using subanesthetic, even subtherapeutic, doses that, based on the available literature, have no reported incidence of this fatal hepatotoxicity or nephrotoxicity. We’re talking about 10 million doses administered worldwide, except in the United States. There are 40-plus randomized clinical trials with over 30,000 patients enrolled that demonstrate efficacy and safety.

That’s where we are right now, in a conundrum. We have a great deal of data all over the world, except in the United States, that push for the use of this noninvasive, patient-controlled nonopioid inhaled anesthetic. We just don’t have the access in North America, with the exception of Canada.

Regulatory Hurdles: Challenges in FDA Approval

Dr. Glatter: Absolutely. The FDA wants to be cautious, but if you look at the evidence base on this drug, it really indicates otherwise. Do you think that these roadblocks can somehow be overcome?

Dr. Milne: In the 2000s and 2010s, everybody was focused on opioids and all the dangers and potential adverse events. Opioids, like many other drugs, are great drugs; it depends on dose and duration. Used properly, they’re excellent drugs. Well, here’s another excellent drug if it’s used properly, with adverse events that depend on dose and duration. Penthrox, or methoxyflurane, is given at a small, subtherapeutic dose, and there have been no reported cases of addiction or abuse related to these inhalers.

Dr. Glatter: That argues for the point — and I’ll turn this over to you, Sergey — that this should, in my mind, be an issue the FDA can overcome.

Dr. Motov: I agree with you. It’s very hard for me to speak on behalf of the FDA or to speculate about their thinking processes, but we need to be up to speed with the evidence. The first thing is, why don’t you study the drug in the United States? I’m not asking you to lift the ban, which you put in place in 2005, but why don’t you honor what has been done over two decades and at least open the door a little bit and let us do what we do best? Why don’t you allow us to do the research in a controlled setting with a carefully, properly selected group of patients without underlying renal or hepatic insufficiency and see where we’re at?

Let’s compare it against placebo. If that’s not ethical, let’s compare it against active comparators — God knows we have 15-20 drugs we can use — and let’s see where we’re at. Ken has been nothing short of superb when it comes to evidence. Let us put the evidence together.

Dr. Milne: If there were concerns decades ago, those need to be addressed. As science is iterative and as other information becomes available, the scientific method would say, Let’s reexamine this and let’s reexamine our position, and do that with evidence. To do that, it has to have validity within the US system. Someone like you doing the research, you are a pain research guru; you should be doing this research to say, “Does it work or not? Does this nonapproval still stand today in 2024?”

Dr. Motov: Thank you for the shout-out, and I agree with you. All of us who are interested, on the front lines of emergency care — as practicing clinicians — should be doing this. There is nothing that will convince the FDA more than properly conducted research; it’s time to reassess the evidence and time to be less rigid. I understand that you placed a ban 20 years ago, but let’s go with the science. We cannot be behind it.

Exploring the Ecological Footprint of Methoxyflurane

Dr. Milne: There was an Austrian study in 2022 and a very interesting study out of the UK looking at life-cycle impact assessment on the environment. If we’re not just concerned about patient care — obviously, we want to provide patients with a safe and effective product, compared with other products that are available that might not have as good a safety profile — this looks at the impact on the environment.

Dr. Glatter: Ken, can you tell me about some of your recent research regarding the environmental effects related to use of Penthrox, but also its utility pharmacologically and its mechanism of action?

Dr. Milne: There was a really interesting study published this year by Martindale in the Emergency Medicine Journal. It took a different approach to this question about could we be using this drug, and why should we be using this drug? Sergey and I have already talked about the potential benefits and the potential harms. I mentioned opioids and some of the concerns about that. For this drug, if we’re using it in the prehospital setting in this little green whistle, the potential benefits look really good, and we haven’t seen any of the potential harms come through in the literature.

This was another line of evidence of why this might be a good drug, because of the environmental impact of this low-dose methoxyflurane. They compared it with nitrous oxide and said, “Well, what about the life-cycle impact on the environment of using this and the overall cradle-to-grave environmental impacts?”

Obviously, Sergey and I are interested in patient care, and we treat patients one at a time. But we have a larger responsibility to social determinants of health, like our environment. If you look at the overall cradle-to-grave environmental impact of this drug, it was better than for nitrous oxide when looking specifically at climate-change impact. That might be another reason, another line of argument, that could be put forward in the United States to say, “We want to have a healthy environment and a healthy option for patients.”

I’ll let Sergey speak to mechanisms of action and those types of things.

Dr. Motov: As a volatile, fluorinated hydrocarbon general anesthetic, it causes a generalized, diffuse cortical depression, and there are no particular channels, receptors, or enzymes we need to worry much about. In short, it’s an inhaled gas used to put patients to sleep.

Over the past 30 or 40 years — and particularly over the past decade — there have been numerous studies in different countries (outside of the United States, of course), including the recent study that Ken just cited, comparing Penthrox, or the green whistle, with either placebo or active comparators, which included parenteral opioids, oral opioids, and NSAIDs, for managing predominantly acute traumatic injuries in pediatric and adult populations presenting to EDs in various regions of the world.

The recent systematic review by Fabbri, out of Italy, showed that for ultra–short-term pain — we’re talking about 5, 10, or 15 minutes — inhaled methoxyflurane was found to be equal or even superior to standard of care, primarily parenteral opioids, and safety was outstanding. Interestingly, with respect to analgesia, geriatric patients (65 or older) seemed to respond more, in terms of change in pain score, than younger adults (ages 18-64). Again, we need to make sure that we carefully select those elderly people without underlying renal or hepatic insufficiency.
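
Reviews like this typically pool trial-level effects by inverse-variance weighting. As a sketch of that arithmetic only (the log risk ratios and standard errors below are invented, not values from the Fabbri review):

```python
import numpy as np

# Hypothetical per-trial log risk ratios (negative favors methoxyflurane)
# and their standard errors; illustrative only.
log_rr = np.array([-0.15, -0.30, -0.05, -0.22])
se = np.array([0.10, 0.15, 0.08, 0.12])

# Fixed-effect inverse-variance pooling: weight each trial by 1/SE^2.
w = 1 / se**2
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```

Most of the pooled estimate’s precision comes from the largest, lowest-variance trials, which is why careful patient selection in those trials matters so much.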

To wrap this up, there is evidence clearly supporting its analgesic efficacy and safety, even in comparison to commonly used and traditionally accepted analgesic modalities that we use for managing acute pain.

US Military Use and Implications for Civilian Practice

Dr. Glatter: Do you think that methoxyflurane’s use in the military will help propel its use in civilian clinical settings in the US, and possibly convince the FDA to look at this more closely? The military is currently using it on an ongoing basis in deployed combat personnel.

Dr. Motov: I’m excited that the Department of Defense in the United States has taken the lead, and they’re being very progressive. Military practices, such as the use of intranasal opioids and intranasal ketamine, have already been adapted to the civilian environment as more doctors have come out of the military. The military is a kingdom within a kingdom. I don’t know their relationship with the FDA, but I support the military’s pharmacologic initiative by honoring and disseminating their research once it becomes available.

For us nonmilitary folks, we still need to work with the FDA. We need to convince the FDA to let us study the drug, and then we need to pile up the evidence within the United States so that the FDA will start looking at this favorably. It wouldn’t hurt. Any piece of evidence will add to the existing body of literature that we need in order to make this medication available to us.

Safety Considerations and Aerosolization Concerns

Dr. Glatter: Its safety in children is well established in Australia and throughout the world. I think it deserves a careful look, and the evidence that you’ve both presented argues for its use prehospital but also in hospital. I guess there was concern in the hospital about underventilation and healthcare workers being exposed to the fumes and then getting headaches, dizziness, and so forth. I don’t know if that’s borne out, Ken, in any of your experience in Canada.

Dr. Milne: We currently don’t have it in our shop. It’s being used in British Columbia right now in the prehospital setting, and I’m not aware of anybody using it in their department. It’s used prehospital as far as I know.

Dr. Motov: I can attest to it, if I may, because I have familiarized myself with the device; I was actually able to hold it in my hands. I have not used it yet, but I have handled the prototype. The way it’s set up, there is an activated charcoal chamber that sits right on top of the device and serves as a scavenger for exhaled air containing methoxyflurane. In theory, and reportedly in practice, it significantly reduces occupational exposure, although the supporting data lack specifics.

Although most of the researchers did not measure the concentration of methoxyflurane in the ambient air of the ED treatment room, I believe the additional data sources clearly state that it’s within or even below the detectable level that would cause any harm. Once again, we need to honor the contraindications: we need to make sure that pregnant women will not be exposed to it.

Dr. Milne: In 2024, we also need to be concerned about aerosolizing procedures and aerosolizing treatments, and just take that into account because we should be considering all the potential benefits and all the potential harms. Going through the COVID-19 pandemic, there was concern about transmission and whether or not it was droplet or aerosolized.

There was an observational study published in 2022 in Austria by Trimmel in BMC Emergency Medicine showing similar results. It seemed to work well and potential harms didn’t get picked up. They had to stop the study early because of COVID-19.

We always need to focus on the potential benefits and the potential harms: Where does the science land? Where do the data lie? Then we move forward from that and make informed decisions.

Final Thoughts

Dr. Glatter: Are there any key takeaways you’d like to share with our audience?

Dr. Milne: One of the takeaways from this whole conversation is that science is iterative and science changes. When new evidence becomes available, and we’ve seen it accumulate around the world, we as scientists, as researchers, as people committed to great patient care should revisit our positions on this. Since there is a prohibition against this medication, I think it’s time to reassess that stance and see whether it still holds today.

Dr. Motov: I wholeheartedly agree with this. Thank you, Ken, for bringing this up. Good point.

Dr. Glatter: This has been a really informative discussion. I think our audience will certainly embrace this. Thank you very much for your time; it’s much appreciated.
 

Dr. Glatter is an assistant professor of emergency medicine at Zucker School of Medicine at Hofstra/Northwell in Hempstead, New York. He is a medical adviser for Medscape and hosts the Hot Topics in EM series. Dr. Milne is an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM). Dr. Motov is professor of emergency medicine and director of research in the Department of Emergency Medicine at Maimonides Medical Center in Brooklyn, New York. He is passionate about safe and effective pain management in the emergency department, and has numerous publications on the subject of opioid alternatives in pain management. Dr. Glatter, Dr. Milne, and Dr. Motov had no conflicts of interest to disclose.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

 

This discussion was recorded on March 29, 2024. The transcript has been edited for clarity.

Robert D. Glatter, MD: Joining me today to discuss the use of methoxyflurane (Penthrox), an inhaled nonopioid analgesic for the relief of acute pain, is Dr. William Kenneth (Ken) Milne, an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM).

Also joining me is Dr. Sergey Motov, an emergency physician and research director at Maimonides Medical Center in Brooklyn, New York, and an expert in pain management. I want to welcome both of you and thank you for joining me.
 

RAMPED Trial: Evaluating the Efficacy of Methoxyflurane

Dr. Glatter: Ken, your recent post on Twitter [now X] regarding the utility of Penthrox in the RAMPED trial really caught my attention. While the trial was from 2021, it really is relevant regarding the prehospital management of pain in the practice of emergency medicine, and certainly in-hospital practice. I was hoping you could review the study design but also get into the rationale behind the use of this novel agent.

William Kenneth (Ken) Milne, MD, MSc: Sure. I’d be happy to kick this episode off with talking about a study that was published in 2020 in Academic Emergency Medicine. It was an Australian study by Brichko et al., and they were doing a randomized controlled trial looking at methoxyflurane vs standard care.

They selected out a population of adults, which they defined as 18-75 years of age. They were in the prehospital setting and they had a pain score of greater than 8. They gave the participants methoxyflurane, which is also called the “green whistle.” They had the subjects take that for their prehospital pain, and they compared that with whatever your standard analgesic in the prehospital setting would be.

Their primary outcome was how many patients had at least 50% reduction in their pain score within 30 minutes. They recruited about 120 people, and they found that there was no statistical difference in the primary outcome between methoxyflurane and standard care. Again, that primary outcome was a reduction in pain score by greater than 50% at 30 minutes, and there wasn’t a statistical difference between the two.

There are obviously limits to any study, and it was a convenience sample. This was an unmasked trial, so people knew if they were getting this green whistle, which is popular in Australia. People would be familiar with this device, and they didn’t compare it with a sham or placebo group.

Pharmacology of Penthrox: Its Role and Mechanism of Action

Dr. Glatter: The primary outcome wasn’t met, but certainly secondary outcomes were. There was, again, a relatively small number of patients in this trial. That said, there was significant pain relief. I think there are issues with the trial, as with any trial limitations.

Getting to the pharmacology of Penthrox, can you describe this inhaled anesthetic and how we use it, specifically its role at the subanesthetic doses?

Sergey M. Motov, MD: Methoxyflurane is embedded in the green whistle package, and that whole contraption is called Penthrox. It’s an inhaled volatile fluorinated hydrocarbon anesthetic that was predominantly used, I’d say 40, 50 years ago, for general anesthesia and slowly but surely fell out of favor due to the fact that, when used for prolonged duration or in supratherapeutic doses, there were cases of severe or even fatal nephrotoxicity and hepatotoxicity.

In the late ‘70s and early ‘80s, all the fluranes came on board that are slightly different as general anesthetics, and methoxyflurane started slowly falling out of favor. Because of this paucity and then a subsequent slightly greater number of cases of nephrotoxicity and hepatotoxicity, [the US Food and Drug Administration] FDA made a decision to pull the drug off the market in 2005. FDA successfully accomplished its mission and since then has pretty much banned the use of inhaled methoxyflurane in any shape, form, or color in the United States.

Going back to the green whistle, it has been used in Australia probably for about 50-60 years, and has been used in Europe for probably 10-20 years. Ken can attest that it has been used in Canada for at least a decade and the track record is phenomenal.

We are using subanesthetic, even supratherapeutic doses that, based on available literature, has no incidence of this fatal hepatotoxicity or nephrotoxicity. We’re talking about 10 million doses administered worldwide, except in the United States. There are 40-plus randomized clinical trials with over 30,000 patients enrolled that prove efficacy and safety.

That’s where we are right now, in a conundrum. We have a great deal of data all over the world, except in the United States, that push for the use of this noninvasive, patient-controlled nonopioid inhaled anesthetic. We just don’t have the access in North America, with the exception of Canada.

 

 

Regulatory Hurdles: Challenges in FDA Approval

Dr. Glatter: Absolutely. The FDA wants to be cautious, but if you look at the evidence base of data on this, it really indicates otherwise. Do you think that these roadblocks can be somehow overcome?

Dr. Milne: In the 2000s and 2010s, everybody was focused on opioids and all the dangers and potential adverse events. Opioids are great drugs like many other drugs; it depends on dose and duration. If used properly, it’s an excellent drug. Well, here’s another excellent drug if it’s used properly, and the adverse events are dependent on their dose and duration. Penthrox, or methoxyflurane, is a subtherapeutic, small dose and there have been no reported cases of addiction or abuse related to these inhalers.

Dr. Glatter: That argues for the point — and I’ll turn this over to you, Sergey — of how can this not, in my mind, be an issue that the FDA can overcome.

Dr. Motov: I agree with you. It’s very hard for me to speak on behalf of the FDA, to allude to their thinking processes, but we need to be up to speed with the evidence. The first thing is, why don’t you study the drug in the United States? I’m not asking you to lift the ban, which you put in 2005, but why don’t you honor what has been done over two decades and at least open the door a little bit and let us do what we do best? Why don’t you allow us to do the research in a controlled setting with a carefully, properly selected group of patients without underlying renal or hepatic insufficiency and see where we’re at?

Let’s compare it against placebo. If that’s not ethical, let’s compare it against active comparators — God knows we have 15-20 drugs we can use — and let’s see where we’re at. Ken has been nothing short of superb when it comes to evidence. Let us put the evidence together.

Dr. Milne: If there were concerns decades ago, those need to be addressed. As science is iterative and as other information becomes available, the scientific method would say, Let’s reexamine this and let’s reexamine our position, and do that with evidence. To do that, it has to have validity within the US system. Someone like you doing the research, you are a pain research guru; you should be doing this research to say, “Does it work or not? Does this nonapproval still stand today in 2024?”

Dr. Motov: Thank you for the shout-out, and I agree with you. All of us, those who are interested, on the frontiers of emergency care — as present clinicians — we should be doing this. There is nothing that will convince the FDA more than properly and rightly conducted research, time to reassess the evidence, and time to be less rigid. I understand that you placed a ban 20 years ago, but let’s go with the science. We cannot be behind it.

Exploring the Ecological Footprint of Methoxyflurane

Dr. Milne: There was an Austrian study in 2022 and a very interesting study out of the UK looking at life-cycle impact assessment on the environment. If we’re not just concerned about patient care —obviously, we want to provide patients with a safe and effective product, compared with other products that are available that might not have as good a safety profile — this looks at the impact on the environment.

Dr. Glatter: Ken, can you tell me about some of your recent research regarding the environmental effects related to use of Penthrox, but also its utility pharmacologically and its mechanism of action?

Dr. Milne: There was a really interesting study published this year by Martindale in the Emergency Medicine Journal. It took a different approach to this question about could we be using this drug, and why should we be using this drug? Sergey and I have already talked about the potential benefits and the potential harms. I mentioned opioids and some of the concerns about that. For this drug, if we’re using it in the prehospital setting in this little green whistle, the potential benefits look really good, and we haven’t seen any of the potential harms come through in the literature.

This was another line of evidence of why this might be a good drug, because of the environmental impact of this low-dose methoxyflurane. They compared it with nitrous oxide and said, “Well, what about the life-cycle impact on the environment of using this and the overall cradle-to-grave environmental impacts?”

Obviously, Sergey and I are interested in patient care, and we treat patients one at a time. But we have a larger responsibility to social determinants of health, like our environment. If you look at the overall cradle-to-grave environmental impact of this drug, it was better than for nitrous oxide when looking specifically at climate-change impact. That might be another reason, another line of argument, that could be put forward in the United States to say, “We want to have a healthy environment and a healthy option for patients.”

I’ll let Sergey speak to mechanisms of action and those types of things.

Dr. Motov: As a general anesthetic and hydrocarbonated volatile ones, I’m just going to say that it causes this generalized diffuse cortical depression, and there are no particular channels, receptors, or enzymes we need to worry much about. In short, it’s an inhaled gas used to put patients or people to sleep.

Over the past 30 or 40 years — and I’ll go back to the past decade — there have been numerous studies in different countries (outside of the United States, of course), and with the recent study that Ken just cited, there were comparisons for managing predominantly acute traumatic injuries in pediatric and adult populations presenting to EDs in various regions of the world that compared Penthrox, or the green whistle, with either placebo or active comparators, which included parenteral opioids, oral opioids, and NSAIDs.

The recent systematic review by Fabbri, out of Italy, showed that for ultra–short-term pain — we’re talking about 5, 10, or 15 minutes — inhaled methoxyflurane was found to be equal or even superior to standard of care, primarily related to parenteral opioids, and safety was off the hook. Interestingly, with respect to analgesia, they found that geriatric patients seemed to be responding more, with respect to changing pain score, than younger adults — we’re talking about ages 18-64 vs 65 or older. Again, we need to make sure that we carefully select those elderly people without underlying renal or hepatic insufficiency.

To wrap this up, there is evidence clearly supporting its analgesic efficacy and safety, even in comparison to commonly used and traditionally accepted analgesic modalities that we use for managing acute pain.

 

 

US Military Use and Implications for Civilian Practice

Dr. Glatter: Do you think that methoxyflurane’s use in the military will help propel its use in clinical settings in the US, and possibly convince the FDA to look at this closer? The military is currently using it in deployed combat veterans in an ongoing fashion.

Dr. Motov: I’m excited that the Department of Defense in the United States has taken the lead, and they’re being very progressive. There are data that we’ve adapted to the civilian environment by use of intranasal opioids and intranasal ketamine with more doctors who came out of the military. In the military, it’s a kingdom within a kingdom. I don’t know their relationship with the FDA, but I support the military’s pharmacologic initiative by honoring and disseminating their research once it becomes available.

For us nonmilitary folks, we still need to work with the FDA. We need to convince the FDA to let us study the drug, and then we need to pile the evidence within the United States so that the FDA will start looking at this favorably. It wouldn’t hurt and it wouldn’t harm. Any piece of evidence will add to the existing body of literature that we need to allow this medication to be available to us.

Safety Considerations and Aerosolization Concerns

Dr. Glatter: Its safety in children is well established in Australia and throughout the world. I think it deserves a careful look, and the evidence that you’ve both presented argues for the use of this prehospital but also in hospital. I guess there was concern in the hospital with underventilation and healthcare workers being exposed to the fumes, and then getting headaches, dizziness, and so forth. I don’t know if that’s borne out, Ken, in any of your experience in Canada at all.

Dr. Milne: We currently don’t have it in our shop. It’s being used in British Columbia right now in the prehospital setting, and I’m not aware of anybody using it in their department. It’s used prehospital as far as I know.

Dr. Motov: I can attest to it, if I may, because I had familiarized myself with the device. I actually was able to hold it in my hands. I have not used it yet but I had the prototype. The way it’s set up, there is an activated charcoal chamber that sits right on top of the device, which serves as the scavenger for exhaled air that contains particles of methoxyflurane. In theory, but I’m telling how it is in practicality, it significantly reduces occupational exposure, based on data that lacks specifics.

Although most of the researchers did not measure the concentration of methoxyflurane in ambient air within the treatment room in the EDs, I believe the additional data sources clearly stating that it’s within or even below the detectable level that would cause any harm. Once again, we need to honor pathology. We need to make sure that pregnant women will not be exposed to it.

Dr. Milne: In 2024, we also need to be concerned about aerosolizing procedures and aerosolizing treatments, and just take that into account because we should be considering all the potential benefits and all the potential harms. Going through the COVID-19 pandemic, there was concern about transmission and whether or not it was droplet or aerosolized.

There was an observational study published in 2022 in Austria by Trimmel in BMC Emergency Medicine showing similar results. It seemed to work well and potential harms didn’t get picked up. They had to stop the study early because of COVID-19.

We need to always focus in on the potential benefits, the potential harms; where does the science land? Where do the data lie? Then we move forward from that and make informed decisions.

 

 

Final Thoughts

Dr. Glatter: Are there any key takeaways you’d like to share with our audience?

Dr. Milne: One of the takeaways from this whole conversation is that science is iterative and science changes. When new evidence becomes available, and we’ve seen it accumulate around the world, we as scientists, as a researcher, as somebody committed to great patient care should revisit our positions on this. Since there is a prohibition against this medication, I think it’s time to reassess that stance and move forward to see if it still is accurate today.

Dr. Motov: I wholeheartedly agree with this. Thank you, Ken, for bringing this up. Good point.

Dr. Glatter: This has been a really informative discussion. I think our audience will certainly embrace this. Thank you very much for your time; it’s much appreciated.
 

Dr. Glatter is an assistant professor of emergency medicine at Zucker School of Medicine at Hofstra/Northwell in Hempstead, New York. He is a medical adviser for Medscape and hosts the Hot Topics in EM series. Dr. Milne is an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM). Dr. Motov is professor of emergency medicine and director of research in the Department of Emergency Medicine at Maimonides Medical Center in Brooklyn, New York. He is passionate about safe and effective pain management in the emergency department, and has numerous publications on the subject of opioid alternatives in pain management. Dr. Glatter, Dr. Milne, and Dr. Motov had no conflicts of interest to disclose.

A version of this article appeared on Medscape.com.

 

This discussion was recorded on March 29, 2024. The transcript has been edited for clarity.

Robert D. Glatter, MD: Joining me today to discuss the use of methoxyflurane (Penthrox), an inhaled nonopioid analgesic for the relief of acute pain, is Dr. William Kenneth (Ken) Milne, an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM).

Also joining me is Dr. Sergey Motov, an emergency physician and research director at Maimonides Medical Center in Brooklyn, New York, and an expert in pain management. I want to welcome both of you and thank you for joining me.
 

RAMPED Trial: Evaluating the Efficacy of Methoxyflurane

Dr. Glatter: Ken, your recent post on Twitter [now X] regarding the utility of Penthrox in the RAMPED trial really caught my attention. While the trial was from 2021, it really is relevant regarding the prehospital management of pain in the practice of emergency medicine, and certainly in-hospital practice. I was hoping you could review the study design but also get into the rationale behind the use of this novel agent.

William Kenneth (Ken) Milne, MD, MSc: Sure. I’d be happy to kick this episode off with talking about a study that was published in 2020 in Academic Emergency Medicine. It was an Australian study by Brichko et al., and they were doing a randomized controlled trial looking at methoxyflurane vs standard care.

They selected out a population of adults, which they defined as 18-75 years of age. They were in the prehospital setting and they had a pain score of greater than 8. They gave the participants methoxyflurane, which is also called the “green whistle.” They had the subjects take that for their prehospital pain, and they compared that with whatever your standard analgesic in the prehospital setting would be.

Their primary outcome was how many patients had at least 50% reduction in their pain score within 30 minutes. They recruited about 120 people, and they found that there was no statistical difference in the primary outcome between methoxyflurane and standard care. Again, that primary outcome was a reduction in pain score by greater than 50% at 30 minutes, and there wasn’t a statistical difference between the two.

There are obviously limits to any study, and it was a convenience sample. This was an unmasked trial, so people knew if they were getting this green whistle, which is popular in Australia. People would be familiar with this device, and they didn’t compare it with a sham or placebo group.

Pharmacology of Penthrox: Its Role and Mechanism of Action

Dr. Glatter: The primary outcome wasn’t met, but certainly secondary outcomes were. There was, again, a relatively small number of patients in this trial. That said, there was significant pain relief. I think there are issues with the trial, as with any trial limitations.

Getting to the pharmacology of Penthrox, can you describe this inhaled anesthetic and how we use it, specifically its role at the subanesthetic doses?

Sergey M. Motov, MD: Methoxyflurane is embedded in the green whistle package, and that whole contraption is called Penthrox. It’s an inhaled volatile fluorinated hydrocarbon anesthetic that was predominantly used, I’d say 40, 50 years ago, for general anesthesia and slowly but surely fell out of favor due to the fact that, when used for prolonged duration or in supratherapeutic doses, there were cases of severe or even fatal nephrotoxicity and hepatotoxicity.

In the late ‘70s and early ‘80s, all the fluranes came on board that are slightly different as general anesthetics, and methoxyflurane started slowly falling out of favor. Because of this paucity and then a subsequent slightly greater number of cases of nephrotoxicity and hepatotoxicity, [the US Food and Drug Administration] FDA made a decision to pull the drug off the market in 2005. FDA successfully accomplished its mission and since then has pretty much banned the use of inhaled methoxyflurane in any shape, form, or color in the United States.

Going back to the green whistle, it has been used in Australia probably for about 50-60 years, and has been used in Europe for probably 10-20 years. Ken can attest that it has been used in Canada for at least a decade and the track record is phenomenal.

We are using subanesthetic, even supratherapeutic doses that, based on available literature, has no incidence of this fatal hepatotoxicity or nephrotoxicity. We’re talking about 10 million doses administered worldwide, except in the United States. There are 40-plus randomized clinical trials with over 30,000 patients enrolled that prove efficacy and safety.

That’s where we are right now, in a conundrum. We have a great deal of data all over the world, except in the United States, that push for the use of this noninvasive, patient-controlled nonopioid inhaled anesthetic. We just don’t have the access in North America, with the exception of Canada.

 

 

Regulatory Hurdles: Challenges in FDA Approval

Dr. Glatter: Absolutely. The FDA wants to be cautious, but if you look at the evidence base, it really indicates otherwise. Do you think these roadblocks can somehow be overcome?

Dr. Milne: In the 2000s and 2010s, everybody was focused on opioids and all of their dangers and potential adverse events. Opioids are great drugs; like many other drugs, it depends on dose and duration. Used properly, they are excellent drugs. Well, here’s another excellent drug if it’s used properly, and its adverse events likewise depend on dose and duration. Penthrox, or methoxyflurane, is given at a subtherapeutic, small dose, and there have been no reported cases of addiction or abuse related to these inhalers.

Dr. Glatter: That argues for the point — and I’ll turn this over to you, Sergey — that, in my mind, this should be an issue the FDA can overcome.

Dr. Motov: I agree with you. It’s very hard for me to speak on behalf of the FDA or to guess at their thinking process, but we need to be up to speed with the evidence. The first question is: Why not study the drug in the United States? I’m not asking the FDA to lift the ban it put in place in 2005, but why not honor what has been done over two decades, open the door a little bit, and let us do what we do best? Why not allow us to do the research in a controlled setting, with a carefully and properly selected group of patients without underlying renal or hepatic insufficiency, and see where we’re at?

Let’s compare it against placebo. If that’s not ethical, let’s compare it against active comparators — God knows we have 15-20 drugs we can use — and let’s see where we’re at. Ken has been nothing short of superb when it comes to evidence. Let us put the evidence together.

Dr. Milne: If there were concerns decades ago, those need to be addressed. Science is iterative, and as other information becomes available, the scientific method would say: let’s reexamine this, let’s reexamine our position, and let’s do that with evidence. For that to happen, the research has to have validity within the US system. Someone like you, a pain research guru, should be doing this research to ask, “Does it work or not? Does this nonapproval still stand today, in 2024?”

Dr. Motov: Thank you for the shout-out, and I agree with you. All of us who are interested and on the front lines of emergency care, as practicing clinicians, should be doing this. Nothing will convince the FDA more than properly conducted research; it is time to reassess the evidence and time to be less rigid. I understand that the ban was placed 20 years ago, but let’s go with the science. We cannot lag behind it.

Exploring the Ecological Footprint of Methoxyflurane

Dr. Milne: There was an Austrian study in 2022 and a very interesting study out of the UK looking at a life-cycle assessment of the impact on the environment. We are not concerned only with patient care — obviously, we want to provide patients with a safe and effective product compared with other available options that might not have as good a safety profile — and this work looks at the impact on the environment.

Dr. Glatter: Ken, can you tell me about some of the recent research regarding the environmental effects related to the use of Penthrox, and also its pharmacologic utility and mechanism of action?

Dr. Milne: There was a really interesting study published this year by Martindale in the Emergency Medicine Journal. It took a different approach to the questions of whether we could be using this drug and why we should be using it. Sergey and I have already talked about the potential benefits and the potential harms, and I mentioned opioids and some of the concerns about them. For this drug, if we’re using it in the prehospital setting in this little green whistle, the potential benefits look really good, and we haven’t seen any of the potential harms come through in the literature.

This was another line of evidence for why this might be a good drug: the environmental impact of this low-dose methoxyflurane. They compared it with nitrous oxide and asked, “Well, what about the life-cycle impact on the environment of using this, the overall cradle-to-grave environmental impact?”

Obviously, Sergey and I are interested in patient care, and we treat patients one at a time. But we have a larger responsibility to social determinants of health, like our environment. If you look at the overall cradle-to-grave environmental impact of this drug, it was better than for nitrous oxide when looking specifically at climate-change impact. That might be another reason, another line of argument, that could be put forward in the United States to say, “We want to have a healthy environment and a healthy option for patients.”
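
To make “cradle-to-grave” concrete, here is a toy sketch of how a life-cycle comparison adds up stage-level emissions per administration. Every emission factor below is a made-up placeholder for illustration, not a figure from the Martindale study.

```python
# Toy cradle-to-grave comparison of per-administration climate impact.
# All emission factors are hypothetical placeholders, NOT values from
# the Martindale life-cycle assessment.
EMISSIONS_KG_CO2E = {
    "low-dose methoxyflurane (Penthrox)": {
        "manufacture": 0.4, "transport": 0.1, "use": 0.2, "disposal": 0.1,
    },
    "nitrous oxide (50/50 mix)": {
        # N2O is itself a potent greenhouse gas, so the use phase dominates.
        "manufacture": 0.3, "transport": 0.2, "use": 15.0, "disposal": 0.5,
    },
}

for agent, stages in EMISSIONS_KG_CO2E.items():
    total = sum(stages.values())
    print(f"{agent}: {total:.1f} kg CO2e cradle-to-grave")
```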

I’ll let Sergey speak to mechanisms of action and those types of things.

Dr. Motov: As a volatile fluorinated hydrocarbon general anesthetic, it causes a generalized, diffuse cortical depression, and there are no particular channels, receptors, or enzymes we need to worry much about. In short, it’s an inhaled gas used to put patients to sleep.

Over the past 30 or 40 years — and I’ll focus on the past decade — there have been numerous studies in different countries (outside of the United States, of course), including the recent study that Ken just cited, comparing Penthrox, or the green whistle, with either placebo or active comparators, which included parenteral opioids, oral opioids, and NSAIDs, for managing predominantly acute traumatic injuries in pediatric and adult populations presenting to EDs in various regions of the world.

The recent systematic review by Fabbri, out of Italy, showed that for ultra–short-term pain — we’re talking about 5, 10, or 15 minutes — inhaled methoxyflurane was equal or even superior to standard of care, primarily parenteral opioids, and the safety profile was excellent. Interestingly, with respect to analgesia, geriatric patients seemed to respond more, in terms of change in pain score, than younger adults — we’re talking about ages 65 or older vs 18-64. Again, we need to make sure that we carefully select those elderly patients, excluding anyone with underlying renal or hepatic insufficiency.

To wrap this up: there is evidence clearly supporting its analgesic efficacy and safety, even in comparison with commonly used and traditionally accepted analgesic modalities for managing acute pain.

US Military Use and Implications for Civilian Practice

Dr. Glatter: Do you think that methoxyflurane’s use in the military will help propel its use in civilian clinical settings in the US, and possibly convince the FDA to take a closer look at it? The military is currently using it with deployed combat personnel on an ongoing basis.

Dr. Motov: I’m excited that the Department of Defense in the United States has taken the lead, and they’re being very progressive. We have adapted military data to the civilian environment before; intranasal opioids and intranasal ketamine spread partly through doctors who came out of the military. The military is a kingdom within a kingdom; I don’t know their relationship with the FDA, but I support the military’s pharmacologic initiative, and we should honor and disseminate their research once it becomes available.

For us nonmilitary folks, we still need to work with the FDA. We need to convince the FDA to let us study the drug, and then we need to build the evidence within the United States so that the FDA will start looking at this favorably. It wouldn’t hurt: any piece of evidence will add to the existing body of literature we need for this medication to become available to us.

Safety Considerations and Aerosolization Concerns

Dr. Glatter: Its safety in children is well established in Australia and throughout the world. I think it deserves a careful look, and the evidence you’ve both presented argues for its use prehospital but also in the hospital. I guess there was concern in the hospital about underventilation, with healthcare workers being exposed to the fumes and then getting headaches, dizziness, and so forth. I don’t know whether that’s been borne out, Ken, in any of your experience in Canada.

Dr. Milne: We currently don’t have it in our shop. It’s being used in British Columbia right now in the prehospital setting, and as far as I know it’s used only prehospital; I’m not aware of anybody using it in their department.

Dr. Motov: I can attest to it, if I may, because I have familiarized myself with the device. I was actually able to hold it in my hands; I have not used it yet, but I have handled the prototype. The way it’s set up, there is an activated charcoal chamber that sits right on top of the device and serves as a scavenger for exhaled air containing methoxyflurane vapor. In theory, and apparently in practice, it significantly reduces occupational exposure, although the available data lack specifics.

Although most of the researchers did not measure the concentration of methoxyflurane in the ambient air of the ED treatment rooms, I believe the additional data sources clearly state that it is at or below any detectable level that would cause harm. Once again, we need to honor the contraindications: we need to make sure that pregnant women are not exposed to it.

Dr. Milne: In 2024, we also need to be concerned about aerosolizing procedures and aerosolizing treatments, and take that into account, because we should be considering all the potential benefits and all the potential harms. Going through the COVID-19 pandemic, there was concern about transmission and whether it was droplet or aerosolized.

There was an observational study published in 2022 by Trimmel in BMC Emergency Medicine, from Austria, showing similar results: it seemed to work well, and no potential harms were picked up. They had to stop the study early because of COVID-19.

We always need to focus on the potential benefits and the potential harms. Where does the science land? Where do the data lie? Then we move forward from that and make informed decisions.

Final Thoughts

Dr. Glatter: Are there any key takeaways you’d like to share with our audience?

Dr. Milne: One of the takeaways from this whole conversation is that science is iterative and science changes. When new evidence becomes available, and we’ve seen it accumulate around the world, we as scientists, researchers, and people committed to great patient care should revisit our positions on this. Since there is a prohibition against this medication, I think it’s time to reassess that stance and see whether it still holds today.

Dr. Motov: I wholeheartedly agree with this. Thank you, Ken, for bringing this up. Good point.

Dr. Glatter: This has been a really informative discussion. I think our audience will certainly embrace this. Thank you very much for your time; it’s much appreciated.
 

Dr. Glatter is an assistant professor of emergency medicine at Zucker School of Medicine at Hofstra/Northwell in Hempstead, New York. He is a medical adviser for Medscape and hosts the Hot Topics in EM series. Dr. Milne is an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM). Dr. Motov is professor of emergency medicine and director of research in the Department of Emergency Medicine at Maimonides Medical Center in Brooklyn, New York. He is passionate about safe and effective pain management in the emergency department, and has numerous publications on the subject of opioid alternatives in pain management. Dr. Glatter, Dr. Milne, and Dr. Motov had no conflicts of interest to disclose.

A version of this article appeared on Medscape.com.


PCP Compensation, Part 2

Article Type
Changed
Fri, 05/10/2024 - 11:15

In my last column, I began to explore the factors affecting the compensation of primary care providers (PCPs). I described two apparent economic paradoxes. First, while most healthcare systems consider their primary care segments loss leaders, they continue to seek and hire more PCPs. Second, while PCPs are in short supply, most of them feel that they are underpaid; supply and demand doesn’t seem to be making them more valuable in the economic sense. The explanations for these nonintuitive observations are, first, that healthcare systems need the volume of patients stored in the practices of even unprofitable primary care physicians to feed the high-profit specialties in their businesses; and second, that there is a limit to how large a gap between revenue and overhead the systems can accept for their primary care practices. Not surprisingly, this means that system administrators must continue to nudge those PCP practices closer toward profitability, usually by demanding higher productivity.

As I did in my last letter, I will continue to lean on a discussion of PCP compensation by a large international management consulting firm that I found on the internet. I am not condoning the consultant’s advice, but merely using it as a scaffolding on which to hang the rather squishy topics of time, clinical quality, and patient satisfaction. I intend only to ask questions, and I promise no answers.

First, let me make it clear that I am defining PCPs as providers who are on a performance-based pathway, which is by far the most prevalent model. A fixed-salary arrangement hasn’t made sense to me since I was a 17-year-old lifeguard paid by the hour for sitting by a pool. Had I been paid by the rescue, I would have finished the summer empty-handed. A fixed salary provided me a sense of security, but it offered no path for advancement and was boring as hell. The primary care provider I am talking about has an interest in developing relationships with his/her patients, building a practice, and offering some degree of continuity. In other words, I am not considering providers working in walk-in clinics as PCPs.
 

Size Matters

My high-powered management consultant recommends to his healthcare system clients that they emphasize the panel-size component as they craft their compensation packages for PCPs, maybe even to the point of giving it more weight than the productivity piece. This, of course, makes perfect business sense if the primary value of a PCP to the system lies in the patients he/she brings into the system.
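
To see how such a weighting might play out, here is a toy model of a performance-based compensation formula in which panel size carries real weight alongside productivity. The structure and every number in it are hypothetical illustrations, not anything taken from the consultant’s materials.

```python
# Toy performance-based PCP compensation formula. All weights and
# example values are hypothetical illustrations.
def pcp_comp(panel_size: int, annual_wrvus: float,
             base: float = 120_000.0,
             per_patient: float = 25.0,        # weight on panel size
             per_wrvu: float = 10.0) -> float:  # weight on productivity
    return base + per_patient * panel_size + per_wrvu * annual_wrvus

# A hypothetical PCP with a 2000-patient panel and 4800 annual wRVUs:
print(f"${pcp_comp(panel_size=2000, annual_wrvus=4800):,.0f}")  # $218,000
```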

What does this emphasis on size mean for you as a provider? If your boss is following my consultant’s advice, then you would want to grow your panel size to improve your compensation. You could do this with a marketing plan that makes you more popular. I can hear you muttering that you never wanted to be a contestant in a popularity contest, although I must say that, historically, this was a fact of life in any community when new providers came to town.

A provider can choose his/her own definition of popularity. You can let it be known that you are a liberal prescription writer and fill your practice with drug-seeking patients. Or you could promote customer-friendly schedules and behaviors in your office staff. And, of course, you can simply exude an aura of caring, which has always been an effective practice-building tool.

On the other hand, you may believe that you have more patients than you can handle. You may fear that growing your practice runs the risk of putting the quality of your patients’ care and your own physical and mental health at risk.

Theoretically, you could keep your panel size unchanged and increase your productivity to enhance your value and therefore your compensation. In the next part of this miniseries we’ll look at the stumbling blocks that can make increasing productivity difficult.
 

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].


Intermittent Fasting + HIIT: Fitness Fad or Fix?

Article Type
Changed
Thu, 05/09/2024 - 13:35

Let’s be honest: Although as physicians we have the power of the prescription pad, so much of health, in the end, comes down to lifestyle. Of course, taking a pill is often way easier than changing your longstanding habits. And what’s worse, doesn’t it always seem like the lifestyle stuff that is good for your health is unpleasant?

Two recent lifestyle interventions that I have tried and find really not enjoyable are time-restricted eating (also known as intermittent fasting) and high-intensity interval training, or HIIT. The former leaves me hangry for half the day; the latter is, well, it’s just really hard compared with my usual jog.

But given the rule of unpleasant lifestyle changes, I knew as soon as I saw this recent study what the result would be. What if we combined time-restricted eating with high-intensity interval training?

I’m referring to this study, appearing in PLOS ONE from Ranya Ameur and colleagues: a small trial that enrolled otherwise healthy women with a BMI > 30 and randomized them to one of three conditions.

First was time-restricted eating (TRE). Women in this group could eat whatever they wanted, but only from 8 a.m. to 4 p.m. daily.

Second: high-intensity functional training. This is a variant of high-intensity interval training that focuses a bit more on resistance exercise than on pure cardiovascular stuff but has the same vibe of doing brief bursts of intensive activity followed by a cool-down period.

Third: a combination of the two. Sounds rough to me.

The study was otherwise straightforward. At baseline, researchers collected data on body composition and dietary intake, and measured blood pressure, glucose, insulin, and lipid biomarkers. A 12-week intervention period followed, after which all of this stuff was measured again.

Now, you may have noticed that there is no control group in this study. We’ll come back to that — a few times.
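
To see why that matters, here is a minimal sketch of the pre/post pattern the study relies on: measure at baseline, intervene for 12 weeks, measure again, and report within-group change. All values below are hypothetical, chosen only to echo the magnitudes discussed later.

```python
# Minimal sketch of a pre/post (uncontrolled) analysis.
# Values are hypothetical, NOT data from the trial.
baseline = {"weight_kg": 92.0, "ldl_mg_dl": 140.0}
week_12  = {"weight_kg": 83.0, "ldl_mg_dl": 85.0}

for metric in baseline:
    pct = 100.0 * (week_12[metric] - baseline[metric]) / baseline[metric]
    print(f"{metric}: {pct:+.1f}%")
# Without a control group, none of these changes can be attributed
# to the intervention alone.
```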

Let me walk you through some of the outcomes here.

First off, body composition metrics. All three groups lost weight — on average, around 10% of body weight, which, for a 12-week intervention, is fairly impressive. BMI and waist circumference went down as well, and, interestingly, much of the weight loss here was in fat mass, not fat-free mass.

Most interventions that lead to weight loss — and I’m including some of the newer drugs here — lead to both fat and muscle loss. That might not be as bad as it sounds; the truth is that muscle mass increases as fat increases because of the simple fact that if you’re carrying more weight when you walk around, your leg muscles get bigger. But to preserve muscle mass in the face of fat loss is sort of a Goldilocks finding, and, based on these results, there’s a suggestion that the high-intensity functional training helps to do just that.

The dietary intake findings were really surprising to me. Across the board, caloric intake decreased. It’s no surprise that time-restricted eating reduces calorie intake. That has been shown many times before and is probably the main reason it induces weight loss — less time to eat means you eat less.

But why would high-intensity functional training lead to lower caloric intake? Most people, myself included, get hungry after they exercise. In fact, one of the reasons it’s hard to lose weight with exercise alone is that we end up eating more calories to make up for what we lost during the exercise. This calorie reduction could be a unique effect of this type of exercise, but honestly it could also be something called the Hawthorne effect. Women in the study kept a food diary to track their intake, and the act of doing that might actually make you eat less. It makes it a little more annoying to snack if you know you have to write it down. This is a situation where I would kill for a control group.

The lipid findings are also pretty striking, with around a 40% reduction in LDL across the board, and evidence of synergistic effects of combined TRE and high-intensity training on total cholesterol and triglycerides. This is like a statin level of effect — pretty impressive. Again, my heart pines for a control group, though.

Same story with glucose and insulin measures: an impressive reduction in fasting glucose and good evidence that the combination of time-restricted eating and high-intensity functional training reduces insulin levels and HOMA-IR as well.
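
For reference, HOMA-IR (the homeostatic model assessment of insulin resistance) is computed from the two fasting values the researchers measured. A quick sketch of the standard formula, with hypothetical example values, follows.

```python
# HOMA-IR from fasting glucose and fasting insulin.
# Standard formula in conventional US units; divide by 22.5 instead
# of 405 if glucose is measured in mmol/L. Example values are hypothetical.
def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

print(round(homa_ir(95.0, 10.0), 2))  # 2.35 for a hypothetical participant
```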

Really the only thing that wasn’t very impressive was the change in blood pressure, with only modest decreases across the board.

Okay, so let’s take a breath after this high-intensity cerebral workout and put this all together. This was a small study, lacking a control group, but with large effect sizes in very relevant clinical areas. It confirms what we know about time-restricted eating — that it makes you eat fewer calories — and introduces the potential that vigorous exercise can not only magnify the benefits of time-restricted eating but maybe even mitigate some of the risks, like the risk for muscle loss. And of course, it comports with my central hypothesis, which is that the more unpleasant a lifestyle intervention is, the better it is for you. No pain, no gain, right?

Of course, I am being overly dogmatic. There are plenty of caveats. Wrestling bears is quite unpleasant and almost certainly bad for you. And there are even some pleasant things that are pretty good for you — like coffee and sex. And there are even people who find time-restricted eating and high-intensity training pleasurable. They are called masochists.

I’m joking. The truth is that any lifestyle change is hard, but with persistence the changes become habits and, eventually, those habits do become pleasurable. Or, at least, much less painful. The trick is getting over the hump of change. If only there were a pill for that.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships. This transcript has been edited for clarity.

A version of this article appeared on Medscape.com.


Remembering the Dead in Unity and Peace

Article Type
Changed
Mon, 05/13/2024 - 14:42

Soldiers’ graves are the greatest preachers of peace.

Albert Schweitzer 1

From the window of my room in the house where I grew up, I could see the American flag flying over Fort Sam Houston National Cemetery. I would ride my bicycle around the paths that divided the grassy sections of graves to the blocks where my father and grandfather were buried. I would stand before the gravesites in a state combining prayer, processing, and remembrance. Carved into my grandfather’s headstone were the 2 world wars he fought in and on my father’s, the 3 conflicts in which he served. I would walk up to their headstones and trace the emblems of belief: the engraved Star of David that marked my grandfather’s grave and the simple cross for my father.

My visits and writing about them may strike some readers as morbid. However, for me, the experience and memories are calming and peaceful, like the cemetery. There was something incredibly comforting about the uniformity of the headstones standing out for miles, mirroring the ranks of soldiers in the wars they commemorated. Yet, as with the men and women who fought each conflict, every grave told a succinct Hemingway-like story of their military career etched in stone. I know now that discrimination in the military segregated even the burial of service members.2 It appeared to my younger self that at least compared to civilian cemeteries with their massive monuments to the wealthy and powerful, there was an egalitarian effect: my master sergeant grandfather’s plot was indistinguishable from that of my colonel father.

Memorial Day and military cemeteries have a shared history. While Veterans Day honors all who have worn the uniform, living and dead, Memorial Day, as its name suggests, remembers those who have died in a broadly conceived line of duty. To reflect the more solemn character of the holiday, the original name, Decoration Day, was changed to emphasize the reverence of remembrance.3 The first widespread observance of Memorial Day commemorated those who perished in the Civil War, which remains the conflict with the highest number of casualties in American history. The first national commemoration occurred at Arlington National Cemetery, when 5000 volunteers decorated 20,000 Union and Confederate graves in an act of solidarity and reconciliation. The practice struck a chord in a country beleaguered by war and division.2

National cemeteries also emerged from the grief and gratitude that marked the Civil War. President Abraham Lincoln, who gave us the famous US Department of Veterans Affairs (VA) mission motto, inaugurated national cemeteries as well. At the beginning of the Civil War, only Union soldiers who sacrificed their lives to end slavery were entitled to burial in them. Reflective of the rift that divided the country, Confederate soldiers contended that such divisiveness should not continue unto death and were granted the right to be buried beside those they fought against, united in death and memory.4

Today, the country is more divided than ever: more than a few observers of American culture, including the makers of the popular new film Civil War, believe we are on the brink of another civil war.5 While we take their warning seriously, there are still signs of unity amongst the people, like those that followed the war between the states. Recently, in that same national cemetery where I first contemplated these themes, justice, delayed too long, was not entirely denied. A ceremony was held to dedicate 17 headstones honoring the memories of Black World War I Army soldiers who were court-martialed and hanged in the wake of the Houston riots of 1917. As a sign of their dishonor, their headstones listed only their dates and names—nothing of their military service. At the urging of their descendants, the US Army reopened the files and found the verdicts to have been racially motivated. The Army set aside the convictions, granted the soldiers honorable discharges for their service in life, and replaced their headstones with ones that enshrine that respect in death.6

Some reading this column may, like me, have had the profound privilege of participating in a burial at a national cemetery. We recall the stirring mix of pride and loss when the honor guard hands the perfectly folded flag to the bereaved family member and bids farewell to their comrade with a salute. Yet, not all families have this privilege. One of the saddest experiences I recall is when I was in a leadership position at a VA facility and unable to help impoverished families who were denied VA burial benefits or payments to transport their deceased veteran closer to home. That sorrow often turned to thankful relief when a veterans service organization or other community group offered to pay the funerary expenses. Fortunately, like eligibility for VA health care, the criteria for burial benefits have steadily expanded to encompass spouses, adult children, and others who served.7

In a similar display of altruism this Memorial Day, veterans service organizations, Boy Scouts, and volunteers will place a flag on every grave to show that some memories are stronger than death. If you have never seen it, I encourage you to visit a VA or a national cemetery this holiday or, even better, volunteer to place flags. Either way, spend a few moments thankfully remembering that we can all engage in those uniquely American Memorial Day pastimes of barbecues and baseball games because so many served and died to protect our way of life. The epigraph at the beginning of this column is attributed to Albert Schweitzer, the physician-theologian of reverence for life. The news today is full of war and rumors of war.8 Let us all hope that the message is heard around the world so there is no need to build more national cemeteries to remember our veterans.

References

1. Cohen R. On Omaha Beach today, where’s the comradeship? The New York Times. June 5, 2004. Accessed April 26, 2024. https://www.nytimes.com/2004/06/05/world/on-omaha-beach-today-where-s-the-comradeship.html

2. Stillwell B. How Decoration Day became Memorial Day. Military.com. May 12, 2020. Accessed April 26, 2024. https://www.military.com/holidays/memorial-day/how-decoration-day-became-memorial-day.html

3. The history of Memorial Day. PBS. Accessed April 26, 2024. https://www.pbs.org/national-memorial-day-concert/memorial-day/history/

4. US Department of Veterans Affairs, National Cemetery Administration. Facts: NCA history and development. Updated October 18, 2023. Accessed April 26, 2024. https://www.cem.va.gov/facts/NCA_History_and_Development_1.asp

5. Lerer L. How the movie ‘Civil War’ echoes real political anxieties. The New York Times. April 21, 2024. Accessed April 26, 2024. https://www.nytimes.com/2024/04/21/us/politics/civil-war-movie-politics.html

6. VA’s National Cemetery Administration dedicates new headstones to honor Black soldiers, correcting 1917 injustice. News release. US Department of Veterans Affairs. February 22, 2024. Accessed April 26, 2024. https://news.va.gov/press-room/va-headstones-black-soldiers-1917-injustice/

7. US Department of Veterans Affairs, National Cemetery Administration. Burial benefits. Updated September 27, 2023. Accessed April 26, 2024. https://www.cem.va.gov/burial_benefits/

8. Racker M. Why so many politicians are talking about World War III. Time. November 20, 2023. Accessed April 29, 2024. https://time.com/6336897/israel-war-gaza-world-war-iii/

Author and Disclosure Information

Cynthia M.A. Geppert, MD, MA, PhD, MPH, MSBE

Correspondence: Cynthia Geppert ([email protected])

Disclaimer: The opinions expressed herein are those of the author and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Federal Practitioner. 2024;41(5)a:134-135.

Soldiers’ graves are the greatest preachers of peace.

Albert Schweitzer 1

Soldiers’ graves are the greatest preachers of peace.

Albert Schweitzer1

From the window of my room in the house where I grew up, I could see the American flag flying over Fort Sam Houston National Cemetery. I would ride my bicycle along the paths that divided the grassy sections of graves to the blocks where my father and grandfather were buried. I would stand before the gravesites in a state combining prayer, processing, and remembrance. Carved into my grandfather’s headstone were the 2 world wars he fought in; on my father’s, the 3 conflicts in which he served. I would walk up to their headstones and trace the emblems of belief: the engraved Star of David that marked my grandfather’s grave and the simple cross for my father.

My visits, and writing about them, may strike some readers as morbid. For me, however, the experience and the memories are calming and peaceful, like the cemetery itself. There was something incredibly comforting about the uniformity of the headstones stretching for miles, mirroring the ranks of soldiers in the wars they commemorated. Yet, as with the men and women who fought each conflict, every grave told a succinct, Hemingway-like story of a military career etched in stone. I know now that discrimination in the military segregated even the burial of service members.2 Still, it appeared to my younger self that, at least compared with civilian cemeteries and their massive monuments to the wealthy and powerful, there was an egalitarian effect: my master sergeant grandfather’s plot was indistinguishable from that of my colonel father.

Memorial Day and military cemeteries have a shared history. While Veterans Day honors all who have worn the uniform, living and dead, Memorial Day, as its name suggests, remembers those who have died in a broadly conceived line of duty. To underscore the holiday’s more solemn character, the original name, Decoration Day, was changed to emphasize the reverence of remembrance.3 The first widespread observance of Memorial Day commemorated those who perished in the Civil War, which remains the conflict with the highest number of casualties in American history. The first national commemoration occurred at Arlington National Cemetery, where 5000 volunteers decorated 20,000 Union and Confederate graves in an act of solidarity and reconciliation. The practice struck a chord in a country beleaguered by war and division.2

National cemeteries also emerged from the grief and gratitude that marked the Civil War. President Abraham Lincoln, who gave us the famous US Department of Veterans Affairs (VA) mission motto, also inaugurated national cemeteries. At the beginning of the Civil War, only Union soldiers who sacrificed their lives to end slavery were entitled to burial in them. Reflecting the rift that divided the country, Confederate soldiers contended that such divisiveness should not continue unto death, and they were eventually granted the right to be buried beside those they had fought against, united in death and memory.4

Today, the country seems more divided than ever: more than a few observers of American culture believe we are on the brink of another civil war, a fear echoed in the popular new film Civil War.5 While we should take their warning seriously, there are still signs of unity among the people, like those that followed the war between the states. Recently, in that same national cemetery where I first contemplated these themes, justice, though delayed too long, was not entirely denied. A ceremony was held to dedicate 17 headstones honoring the memories of Black World War I Army soldiers who were court-martialed and hanged in the wake of the Houston riots of 1917. As a sign of their dishonor, their headstones had listed only their names and dates, nothing of their military service. At the urging of their descendants, the US Army reopened the files and found the verdicts to have been racially motivated. The Army set aside the convictions, granted the soldiers honorable discharges in recognition of their service in life, and replaced their headstones with ones that enshrine that respect in death.6

Some reading this column may, like me, have had the profound privilege of participating in a burial at a national cemetery. We recall the stirring mix of pride and loss when the honor guard hands the perfectly folded flag to the bereaved family member and bids farewell to their comrade with a salute. Yet not all families have this privilege. One of the saddest experiences I recall from my time in a leadership position at a VA facility was being unable to help impoverished families who were denied VA burial benefits or payments to transport their deceased veteran closer to home. That sorrow often turned to thankful relief when a veterans service organization or other community group offered to pay the funerary expenses. Fortunately, like eligibility for VA health care, the criteria for burial benefits have steadily expanded to encompass spouses, adult children, and others who served.7

In a similar display of altruism this Memorial Day, veterans service organizations, Boy Scouts, and other volunteers will place a flag on every grave to show that some memories are stronger than death. If you have never seen it, I encourage you to visit a VA or other national cemetery this holiday or, even better, to volunteer to place flags. Either way, spend a few moments thankfully remembering that we can all engage in those uniquely American Memorial Day pastimes of barbecues and baseball games because so many served and died to protect our way of life. The epigraph at the beginning of this column is attributed to Albert Schweitzer, the physician-theologian of reverence for life. The news today is full of war and rumors of war.8 Let us all hope that his message is heard around the world, so there is no need to build more national cemeteries to remember our veterans.

References

1. Cohen R. On Omaha Beach today, where’s the comradeship? The New York Times. June 5, 2004. Accessed April 26, 2024. https://www.nytimes.com/2004/06/05/world/on-omaha-beach-today-where-s-the-comradeship.html

2. Stillwell B. How ‘Decoration Day’ became Memorial Day. Military.com. Published May 12, 2020. Accessed April 26, 2024. https://www.military.com/holidays/memorial-day/how-decoration-day-became-memorial-day.html

3. The history of Memorial Day. PBS. Accessed April 26, 2024. https://www.pbs.org/national-memorial-day-concert/memorial-day/history/

4. US Department of Veterans Affairs, National Cemetery Administration. Facts: NCA history and development. Updated October 18, 2023. Accessed April 26, 2024. https://www.cem.va.gov/facts/NCA_History_and_Development_1.asp

5. Lerer L. How the movie ‘Civil War’ echoes real political anxieties. The New York Times. April 21, 2024. Accessed April 26, 2024. https://www.nytimes.com/2024/04/21/us/politics/civil-war-movie-politics.html

6. VA’s National Cemetery Administration dedicates new headstones to honor Black soldiers, correcting 1917 injustice. News release. US Department of Veterans Affairs. Published February 22, 2024. Accessed April 26, 2024. https://news.va.gov/press-room/va-headstones-black-soldiers-1917-injustice/

7. US Department of Veterans Affairs, National Cemetery Administration. Burial benefits. Updated September 27, 2023. Accessed April 26, 2024. https://www.cem.va.gov/burial_benefits/

8. Racker M. Why so many politicians are talking about World War III. Time. November 20, 2023. Accessed April 29, 2024. https://time.com/6336897/israel-war-gaza-world-war-iii/


Artificial Intelligence in GI and Hepatology

Article Type
Changed
Fri, 05/03/2024 - 15:33

 

Dear colleagues,

Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.

In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Yazan A. Al Ajlouni assess its role in hepatology. We hope these pieces will help inform your discussions about incorporating or researching AI in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.

Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.

Artificial Intelligence in Gastrointestinal Endoscopy

BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD

The last few decades have seen exponential growth in interest in artificial intelligence (AI) and in the adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has likewise seen tremendous uptake and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications includes detection- and diagnosis-based tools as well as therapeutic assistance tools. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.

Approved applications for colorectal cancer

To improve colorectal cancer screening and surveillance outcomes, efforts have focused on procedural performance metrics, quality indicators, and tools that aid lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3
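
To make the two metrics concrete, here is a minimal sketch, in Python, of how ADR and APC can be computed from per-procedure records. The record structure and counts are hypothetical, and the definitions used (ADR as the share of colonoscopies with at least one adenoma, APC as the mean adenoma count per procedure) are the conventional ones, not taken from the trials cited above.

```python
# Illustrative sketch only: computing ADR and APC from per-procedure records.
from dataclasses import dataclass

@dataclass
class Colonoscopy:
    adenomas_found: int  # count of histologically confirmed adenomas (hypothetical field)

def adr(procedures: list[Colonoscopy]) -> float:
    """Adenoma detection rate: fraction of procedures with at least one adenoma."""
    return sum(p.adenomas_found > 0 for p in procedures) / len(procedures)

def apc(procedures: list[Colonoscopy]) -> float:
    """Adenomas per colonoscopy: mean adenoma count across procedures."""
    return sum(p.adenomas_found for p in procedures) / len(procedures)

# Toy cohort of 5 screening colonoscopies (invented numbers)
cohort = [Colonoscopy(0), Colonoscopy(2), Colonoscopy(1), Colonoscopy(0), Colonoscopy(3)]
print(f"ADR = {adr(cohort):.0%}, APC = {apc(cohort):.2f}")  # ADR = 60%, APC = 1.20
```

Note that the two metrics can diverge: a colonoscopist who finds many adenomas in a few patients raises APC without moving ADR, which is one reason both are reported.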

Ultimately, these data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared, including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo), all of which have shown increased ADR, increased APC, and/or reduced adenoma miss rates in randomized trials.5

Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and the real-world literature.6 In a recent study, no improvement in ADR was seen after implementation of a CADe system for colorectal cancer screening, among both higher- and lower-ADR performers. Looking at change over time after implementation, CADe had no positive effect in any group, divergent from the early RCT data. In a more recent multicenter, community-based RCT, CADe again did not yield a statistically significant difference in the number of adenomas detected.7 The differences between some of these more recent “real-world” studies and the majority of RCT data raise important questions about the potential for bias (due to unblinding) in prospective trials, as well as the role of the human-AI interaction.
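
For readers who want to see what “statistically significant difference” means operationally here, the sketch below runs a standard two-proportion z-test on invented arm-level ADR counts. The numbers are made up for illustration, and this test is only one common choice; it is not necessarily the analysis used in the trials cited.

```python
# Hypothetical comparison of ADR between a CADe arm and a control arm.
from statsmodels.stats.proportion import proportions_ztest

detected = [231, 210]   # procedures with >= 1 adenoma (CADe arm, control arm) -- invented
totals   = [500, 500]   # procedures per arm -- invented

z, p = proportions_ztest(count=detected, nobs=totals)
adr_cade, adr_ctrl = detected[0] / totals[0], detected[1] / totals[1]
print(f"ADR CADe = {adr_cade:.1%} vs control = {adr_ctrl:.1%}; z = {z:.2f}, p = {p:.3f}")
# With these made-up counts the difference (46.2% vs 42.0%) is not significant at p < .05,
# which is the shape of the null result the community-based trial reported.
```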

Importantly, both cohorts in these RCTs met adequate ADR benchmarks, though it remains unclear whether a higher ADR necessarily translates into better patient outcomes; is higher always better? In addition, an important consideration in evaluating any AI/CADe system is that such systems often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This is an interesting dilemma and raises questions about the enduring relevance of studies conducted with an outdated version of a CADe system.

Additional questions remain unanswered regarding the ideal ADR for implementation, the preferred patient populations for screening (especially younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though more data are required to define a best-practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation.

Innovative applications for alternative gastrointestinal conditions

Given the fervor and excitement, as well as the outcomes, associated with AI-based colorectal screening, it is not surprising that these techniques have been extended to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, they represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.

Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia and the depth of invasion of gastric cancer, as well as to detect lesions on small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed for complex procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI may have on clinical practice is also an exciting prospect that warrants further investigation.

Artificial intelligence adoption in clinical practice

Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “non-believers,” and innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the positive RCT data. AI uptake will likely follow the technology predictions of Amara’s Law: individuals tend to overestimate the short-term impact of new technologies while underestimating their long-term effects. In the end, more widespread adoption in community practice and larger-scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.

Conclusions

Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution to occur based on provider feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.

Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.

References

1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.

2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. Apr 2022. doi: 10.1136/gutjnl-2021-324471.

3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.

4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.

5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.

6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.

7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.

8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.

9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.

10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.

11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.

12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.

13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.

The Promise and Challenges of AI in Hepatology

BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL

In the dynamic realm of medicine, artificial intelligence (AI) is emerging as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to diseases of the liver and related organs, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet the path to fully realizing AI’s potential in hepatology is beset by data, ethical, and integration challenges.

The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.

Similarly, deep learning (DL), a branch of AI, has attracted significant global interest, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling earlier detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNNs). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.

AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.
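
To illustrate the kind of model such work describes, here is a deliberately small, hypothetical sketch of a 1-dimensional convolutional network that maps a single-lead EKG trace to a probability of cirrhosis-associated change. The architecture, lead count, and sampling rate are invented for illustration; the published AI-Cirrhosis-ECG model (Ahn et al., in the Sources below) is considerably more elaborate.

```python
# Minimal, hypothetical 1-D CNN for EKG-based liver disease screening (illustration only).
import torch
import torch.nn as nn

class EkgLiverNet(nn.Module):
    def __init__(self, n_leads: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 16, kernel_size=7, padding=3), nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one summary per channel
        )
        self.head = nn.Linear(32, 1)  # single logit: cirrhosis-associated change vs not

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_leads, n_samples), e.g. a 10 s trace at 500 Hz -> 5000 samples
        return self.head(self.features(x).squeeze(-1))

model = EkgLiverNet()
fake_ekg = torch.randn(2, 1, 5000)      # two synthetic traces, not real patient data
probs = torch.sigmoid(model(fake_ekg))  # probabilities in (0, 1)
print(probs.shape)                      # torch.Size([2, 1])
```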

Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data and using models characterized by very large numbers of parameters. A notable instance of generative AI employing LLMs is ChatGPT (Generative Pretrained Transformer). By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet realizing this potential requires overcoming data-quality and interpretability challenges and ensuring that AI outputs are accessible and actionable for clinicians and patients.
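
As a toy illustration of this idea, the sketch below asks an off-the-shelf open language model for a plain-language patient explanation. The model named is a small public demonstration model, not a clinically validated one, and any real deployment would require clinician review of every generated output.

```python
# Illustrative only: drafting patient-education text with a small open LLM.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # demo model, not clinical grade

prompt = (
    "Explain to a patient, in plain language, what cirrhosis is and "
    "why follow-up appointments matter: "
)
result = generator(prompt, max_new_tokens=80, num_return_sequences=1)
print(result[0]["generated_text"])  # a draft for clinician review, never a final answer
```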

Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice and necessitates transparent algorithmic processes and stringent ethical standards. Algorithmic biases, patient privacy, and the impact of AI-driven decisions underscore the need for cautious deployment. Developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps toward ethically leveraging AI in patient care.

In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.

We predict a trajectory of increased use and adoption of AI in hepatology; it is likely to meet the tests of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s Law and the five stages of the Gartner hype cycle. We believe we are still in the infancy of adopting AI technology in hepatology, a phase that may last 5 years before a peak of inflated expectations; the trough of disillusionment and slope of enlightenment may only be observed in the coming decades.

Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.

Sources

Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.

Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.

Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.

Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.

Publications
Topics
Sections

 

Dear colleagues,

Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.

In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Yazan A. Al Ajlouni assess its role in hepatology. We hope these pieces will help your discussions in incorporating or researching AI for use in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.

Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.

Artificial Intelligence in Gastrointestinal Endoscopy

BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD

The last few decades have seen an exponential increase and interest in the role of artificial intelligence (AI) and adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has similarly seen a tremendous uptake in acceptance and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications includes detection or diagnostic-based as well as therapeutic assistance tools. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to other more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.

Baylor College of Medicine
Dr. Nabil M. Mansour


Approved applications for colorectal cancer

In an attempt to improve colorectal cancer screening and outcomes related to screening and surveillance, efforts have been focused on procedural performance metrics, quality indicators, and tools to aid in lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3

Ultimately, this data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo), all of which have shown increased ADR and/or increased APC and/or reduced adenoma miss rates in randomized trials.5

Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and more real-world literature.6 In a recent study, no improvement was seen in ADR after implementation of a CADe system for colorectal cancer screening — including both higher and lower-ADR performers. Looking at change over time after implementation, CADe had no positive effect in any group over time, divergent from early RCT data. In a more recent multicenter, community-based RCT study, again CADe did not result in a statistically significant difference in the number of adenomas detected.7 The differences between some of these more recent “real-world” studies vs the majority of data from RCTs raise important questions regarding the potential of bias (due to unblinding) in prospective trials, as well as the role of the human-AI interaction.

Importantly for RCT data, both cohorts in these studies met adequate ADR benchmarks, though it remains unclear whether a truly increased ADR necessitates better patient outcomes — is higher always better? In addition, an important consideration with evaluating any AI/CADe system is that they often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This is an interesting dilemma and raises questions about the enduring relevance of studies conducted using an outdated version of a CADe system.

Additional unanswered questions regarding an ideal ADR for implementation, preferred patient populations for screening (especially for younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States remain. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though require more data to better define a best practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation measures.

 

 

Innovative applications for alternative gastrointestinal conditions

Given the fervor and excitement, as well as the outcomes associated with AI-based colorectal screening, it is not surprising these techniques have been expanded to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, these represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.

Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia, depth and invasion of gastric cancer, as well as lesion detection in small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed for complex procedures such as endoscopic submucosal dissection (ESD) or peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI can potentially have on clinical practice is also an exciting prospect that warrants further investigation.

Artificial intelligence adoption in clinical practice

Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “non-believers” as well as innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the RCT data showing positive results. It is likely that AI uptake will follow the technology predictions of Amara’s Law — i.e., individuals tend to overestimate the short-term impact of new technologies while underestimating long-term effects. In the end, more widespread adoption in community practice and larger scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.

Conclusions

Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution to occur based on provider feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.

Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.

References

1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.

2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. Apr 2022. doi: 10.1136/gutjnl-2021-324471.

3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.

4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.

5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.

6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.

7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.

8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.

9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.

10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.

11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.

12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.

13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.

 

 

The Promise and Challenges of AI in Hepatology

BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL

In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet, the path to fully realizing AI’s potential in hepatology is laced with data, ethical, and integration challenges.

The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.

Yale School of Medicine
Dr. Basile Njei

Similarly, deep learning (DL), a branch of AI, has attracted significant interest globally, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNN). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.

AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.

New York Medical College
Yazan A. Al-Ajlouni

Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data and using AI models characterized by many parameters. A notable instance of generative AI employing LLMs is ChatGPT (General Pretrained Transformers). By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet, realizing these potential demands requires overcoming data quality and interpretability challenges, and ensuring AI outputs are accessible and actionable for clinicians and patients.

Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice, necessitating transparent algorithmic processes and stringent ethical standards. Ethical considerations are central to AI’s integration into hepatology. Algorithmic biases, patient privacy, and the impact of AI-driven decisions underscore the need for cautious AI deployment. Developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps towards ethically leveraging AI in patient care.

In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.

We predict a trajectory of increased use and adoption of AI in hepatology. AI in hepatology is likely to meet the test of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s law and the five stages of the hype cycle. We believe that we are still in the infant stages of adopting AI technology in hepatology, and this phase may last 5 years before there is a peak of inflated expectations. The trough of disillusionment and slopes of enlightenment may only be observed in the next decades.

 

 

Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.

Sources

Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.

Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.

Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.

Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.

 

Dear colleagues,

Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.

In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Yazan A. Al Ajlouni assess its role in hepatology. We hope these pieces will help your discussions in incorporating or researching AI for use in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.

Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.

Artificial Intelligence in Gastrointestinal Endoscopy

BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD

The last few decades have seen an exponential increase and interest in the role of artificial intelligence (AI) and adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has similarly seen a tremendous uptake in acceptance and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications includes detection or diagnostic-based as well as therapeutic assistance tools. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to other more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.

Baylor College of Medicine
Dr. Nabil M. Mansour


Approved applications for colorectal cancer

In an attempt to improve colorectal cancer screening and outcomes related to screening and surveillance, efforts have been focused on procedural performance metrics, quality indicators, and tools to aid in lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3

Ultimately, this data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo), all of which have shown increased ADR and/or increased APC and/or reduced adenoma miss rates in randomized trials.5

Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and more real-world literature.6 In a recent study, no improvement was seen in ADR after implementation of a CADe system for colorectal cancer screening — including both higher and lower-ADR performers. Looking at change over time after implementation, CADe had no positive effect in any group over time, divergent from early RCT data. In a more recent multicenter, community-based RCT study, again CADe did not result in a statistically significant difference in the number of adenomas detected.7 The differences between some of these more recent “real-world” studies vs the majority of data from RCTs raise important questions regarding the potential of bias (due to unblinding) in prospective trials, as well as the role of the human-AI interaction.

Importantly for RCT data, both cohorts in these studies met adequate ADR benchmarks, though it remains unclear whether a truly increased ADR necessitates better patient outcomes — is higher always better? In addition, an important consideration with evaluating any AI/CADe system is that they often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This is an interesting dilemma and raises questions about the enduring relevance of studies conducted using an outdated version of a CADe system.

Additional unanswered questions regarding an ideal ADR for implementation, preferred patient populations for screening (especially for younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States remain. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though require more data to better define a best practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation measures.

 

 

Innovative applications for alternative gastrointestinal conditions

Given the fervor and excitement, as well as the outcomes associated with AI-based colorectal screening, it is not surprising these techniques have been expanded to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, these represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.

Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia, depth and invasion of gastric cancer, as well as lesion detection in small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed for complex procedures such as endoscopic submucosal dissection (ESD) or peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI can potentially have on clinical practice is also an exciting prospect that warrants further investigation.

Artificial intelligence adoption in clinical practice

Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “non-believers” as well as innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the RCT data showing positive results. It is likely that AI uptake will follow the technology predictions of Amara’s Law — i.e., individuals tend to overestimate the short-term impact of new technologies while underestimating long-term effects. In the end, more widespread adoption in community practice and larger scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.

Conclusions

Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution to occur based on provider feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.

Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.

References

1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.

2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. Apr 2022. doi: 10.1136/gutjnl-2021-324471.

3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.

4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.

5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.

6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.

7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.

8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.

9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.

10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.

11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.

12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.

13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.

The Promise and Challenges of AI in Hepatology

BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL

In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet, the path to fully realizing AI’s potential in hepatology is laced with data, ethical, and integration challenges.

The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.

Similarly, deep learning (DL), a branch of AI, has attracted significant interest globally, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNN). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.

AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.

Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data using models characterized by very large numbers of parameters. A notable instance of generative AI employing LLMs is ChatGPT (Generative Pretrained Transformer). By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet realizing this potential requires overcoming data quality and interpretability challenges and ensuring that AI outputs are accessible and actionable for clinicians and patients.

Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. Developing and training AI models requires extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice. Algorithmic bias, patient privacy, and the consequences of AI-driven decisions all underscore the need for cautious deployment; developing transparent, understandable algorithms and establishing ethical guidelines for their use are critical steps toward leveraging AI ethically in patient care.

In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.

We predict a trajectory of increased use and adoption of AI in hepatology, and AI in hepatology is likely to meet the tests of pervasiveness, improvement, and innovation. Adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s Law and the five stages of the hype cycle. We believe we are still in the infancy of AI adoption in hepatology; this phase may last 5 years before a peak of inflated expectations is reached, and the trough of disillusionment and slope of enlightenment may only be observed in the decades ahead.

Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.

Sources

Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.

Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.

Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.

Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec. doi: 10.7759/cureus.50601.


‘We Need to Rethink Our Options’: Lung Cancer Recurrence

Article Type
Changed
Mon, 04/29/2024 - 17:37

This transcript has been edited for clarity.

Hello. It’s Mark Kris reporting back after attending the New York Lung Cancer Foundation Summit here in New York. A large amount of discussion went on, but as usual, I was most interested in the perioperative space.

In previous videos, I’ve talked about the ongoing discussion of whether you should operate and give adjuvant therapy or give neoadjuvant therapy, and I’ve addressed that already. One thing I want to bring up, as we move off of that argument (which frankly doesn’t have an answer today, even with the data supporting neoadjuvant therapy), is the pattern of recurrence now that we have more successful systemic therapies, both targeted therapies and checkpoint inhibitors.

I was taught early on by my surgical mentors that the issue here was systemic control. While they could do very successful surgery and achieve high levels of local control, they could not control systemic disease. Sadly, the tools we had early on with chemotherapy were just not good enough. Now we have better tools to control systemic spread. In the past, the vast majority of recurrences were systemic; they’re now local.

What I think we need to do as a group of practitioners trying to deal with the problems getting in the way of curing our patients is look at what the issue is now. Frankly, the big issue now, as systemic therapy has controlled metastatic disease, is recurrence in the chest.

We give adjuvant osimertinib. Please remember what the numbers are. In the osimertinib arm, of the 11 recurrences reported in the European Society for Medical Oncology presentation a few years back, nine were in the chest or mediastinal nodes. In the arm that got no osimertinib afterward, there were 46 recurrences, and 32 of those 46 were in the chest, either the lung or mediastinal nodes. Roughly three quarters of the recurrences, then, are now in the chest. What’s the issue here?
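
As a quick sanity check on those numbers (using the counts as quoted in this talk, not figures taken independently from the trial publication), the arithmetic works out as follows:

```python
# Chest (local) recurrences as a share of all recurrences, using the
# counts quoted above; the figures are as stated in this transcript.
osi_chest, osi_total = 9, 11      # osimertinib arm
ctrl_chest, ctrl_total = 32, 46   # no-osimertinib arm

combined = (osi_chest + ctrl_chest) / (osi_total + ctrl_total)
print(f"osimertinib arm: {osi_chest / osi_total:.0%} in the chest")    # 82%
print(f"control arm:     {ctrl_chest / ctrl_total:.0%} in the chest")  # 70%
print(f"combined:        {combined:.0%} in the chest")                 # 72%
```

However you slice it, in both arms the large majority of first recurrences are local rather than distant, and that is the point.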

The issue is we need to find strategies to give better disease control in the chest. We have made inroads in controlling systemic disease with the targeted therapies in the epidermal growth factor receptor space, and very likely with the checkpoint inhibitors, too, as those data filter out. We need to think about how better to get local control.

I think rather than continue this argument of neoadjuvant vs adjuvant, we should move to what’s really hurting our patients. Again, the data I quoted were from the ADAURA trial, which was adjuvant therapy, and I’m sure the neoadjuvant data are going to show the same thing. We have better systemic therapy, but now more trouble in the chest.

How are we going to deal with that? I’d like to throw out one strategy, and that is to rethink the role of radiation in these patients. Again, if the problem is local, in the chest, lung, and lymph nodes, we have to think about local therapy. We’re not recommending it routinely for everybody, but now that we have better systemic control, we need to rethink our options. The obvious option to rethink is radiotherapy.

We should also use what we learned in the earlier trials, which is that there is harm in giving excessive radiation to the heart. If you avoid the heart, you avoid the harm. We have better planning strategies for stereotactic body radiotherapy and more traditional radiation, and of course, we have proton therapy as well.

As we continue to struggle with the idea of that patient with stage II or III disease, whether to give adjuvant vs neoadjuvant therapy, please remember to consider their risk in 2024. Their risk for first recurrence is in the chest.

What are we going to do to better control disease in the chest? We have a challenge. I’m sure we can meet it if we put our heads together.

Dr. Kris is professor of medicine at Weill Cornell Medical College, and attending physician, Thoracic Oncology Service, Memorial Sloan Kettering Cancer Center, New York. He disclosed ties with AstraZeneca, Roche/Genentech, Ariad Pharmaceuticals, Pfizer, and PUMA.

A version of this article appeared on Medscape.com.


GLP-1 Receptor Agonists: Which Drug for Which Patient?

Article Type
Changed
Thu, 04/25/2024 - 12:15

With all the excitement about GLP-1 agonists, I get many questions from providers about which antiobesity drug they should prescribe. I’ll tell you the methods that I use to determine which drug is best for which patient.

Of course, we want to make sure that we’re treating the right condition. If the patient has type 2 diabetes, we tend to give them medication that is indicated for type 2 diabetes. Many GLP-1 agonists are available in a diabetes version and a chronic weight management or obesity version. If a patient has diabetes and obesity, they can receive either one. If a patient has only diabetes but not obesity, they should be prescribed the diabetes version. For obesity without diabetes, we tend to stick with the drugs that are indicated for chronic weight management.
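
That triage logic is simple enough to write down explicitly. Here is a minimal illustrative sketch; the function name and return labels are mine, and this is a teaching aid, not a clinical decision tool:

```python
# Illustrative encoding of the "treat the right condition" rule above:
# a diabetes-labeled product for type 2 diabetes, a weight-management-
# labeled product for obesity, and either label when both are present.
def glp1_label_category(has_type2_diabetes: bool, has_obesity: bool) -> str:
    if has_type2_diabetes and has_obesity:
        return "either a diabetes or a chronic weight management version"
    if has_type2_diabetes:
        return "a diabetes version"
    if has_obesity:
        return "a chronic weight management version"
    return "no GLP-1 indication under this scheme"

print(glp1_label_category(has_type2_diabetes=False, has_obesity=True))
# -> a chronic weight management version
```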

Let’s go through them.

Exenatide. In chronological order of approval, the first GLP-1 agonist used for diabetes was exenatide, available first as twice-daily Byetta and later as once-weekly Bydureon. Both are still on the market but infrequently used. Some patients found these medications inconvenient: Byetta requires twice-daily injections, and painful injection-site nodules have been reported.

A diabetes drug in more common use is liraglutide (Victoza) for type 2 diabetes. It is a daily injection and comes in various doses. We always start low and titrate up according to tolerance and the desired effect on A1c.

Liraglutide. Victoza has an antiobesity counterpart called Saxenda. The Saxenda pen looks very similar to the Victoza pen. It is a daily GLP-1 agonist for chronic weight management. The SCALE trial demonstrated 8%-12% weight loss with Saxenda.

Those are the daily injections: Victoza for diabetes and Saxenda for weight loss.

Our patients are very excited about the advent of weekly injections for diabetes and weight management. Ozempic is very popular. It is a weekly GLP-1 agonist for type 2 diabetes. Many patients come in asking for Ozempic, and we must make sure that we’re moving them in the right direction depending on their condition.

Semaglutide. Ozempic comes in a few different doses. It is a weekly injection and has been found to be quite efficacious for treating diabetes. The drug’s weight loss counterpart is called Wegovy, which comes in a different pen. Both forms contain the compound semaglutide. While all of these GLP-1 agonists are indicated for type 2 diabetes or for weight management, Wegovy has a special indication that none of the others have. In March 2024, Wegovy received an indication to reduce cardiovascular risk in patients with a BMI ≥ 27 and established cardiovascular disease. This will really change the accessibility of this medication, because patients with heart conditions who are on Medicare are expected to have access to Wegovy.

Tirzepatide. Another weekly injection for the treatment of type 2 diabetes is called Mounjaro. Its counterpart for weight management is called Zepbound, which was found to produce about 20.9% weight loss over 72 weeks. These medications have similar side effects to differing degrees; the most often reported are nausea, stool changes, abdominal pain, and reflux. There are other potential side effects as well; I recommend that you read the individual prescribing information for each drug for more clarity.

It is important that we stay on label when using the GLP-1 receptor agonists, for many reasons. For one, it increases our patients’ access to the right medication for them, and it ensures that we are treating patients with the right drug according to the clinical trials, whose study populations define where safety and efficacy have been demonstrated. Prescribing a GLP-1 for a different population is off-label use.

Dr. Lofton, an obesity medicine specialist, is clinical associate professor of surgery and medicine at NYU Grossman School of Medicine, and director of the medical weight management program at NYU Langone Weight Management Center, New York. She disclosed ties to Novo Nordisk and Eli Lilly. This transcript has been edited for clarity.

A version of this article appeared on Medscape.com.


CRC Screening in Primary Care: The Blood Test Option

Article Type
Changed
Tue, 04/23/2024 - 16:06

Last year, I concluded a commentary for this news organization on colorectal cancer (CRC) screening guidelines by stating that between stool-based tests, flexible sigmoidoscopy, and colonoscopy, “the best screening test is the test that gets done.” But should that maxim apply to the new blood-based screening test, Guardant Health Shield? This proprietary test, which costs $895 and is not generally covered by insurance, identifies alterations in cell-free DNA that are characteristic of CRC.

Shield’s test characteristics were recently evaluated in a prospective study of more than 10,000 adults aged 45-84 at average risk for CRC. The test had an 87.5% sensitivity for stage I, II, or III colorectal cancer but only a 13% sensitivity for advanced precancerous lesions. Test specificity was 89.6%, meaning that about 1 in 10 participants without CRC or advanced precancerous lesions on colonoscopy had a false-positive result.
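
To put those test characteristics in concrete terms, here is a back-of-the-envelope yield calculation. The sensitivity and specificity come from the study above; the 0.5% cancer prevalence is an assumed figure for an average-risk screening population, chosen only for illustration:

```python
# Expected results of blood-based CRC screening per 100,000 average-risk
# adults. Sensitivity and specificity are from the study quoted above;
# the prevalence is an assumption for illustration, not a study figure.
sensitivity = 0.875   # stage I-III CRC
specificity = 0.896
prevalence = 0.005    # assumed CRC prevalence among screenees

n = 100_000
with_crc = n * prevalence
without_crc = n - with_crc

true_pos = with_crc * sensitivity
false_pos = without_crc * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)

print(f"true positives:  {true_pos:,.0f}")   # ~438
print(f"false positives: {false_pos:,.0f}")  # ~10,348
print(f"PPV:             {ppv:.1%}")         # ~4.1%
```

Under that assumption, the large majority of positive results would be false positives, which is why the diagnostic colonoscopy follow-up discussed below is so important.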

Although the Shield blood test has a higher rate of false positives than the traditional fecal immunochemical test (FIT) and lower sensitivity and specificity than a multitarget stool DNA (FIT-DNA) test designed to improve on Cologuard, it meets the previously established criteria set forth by the Centers for Medicare & Medicaid Services (CMS) to be covered for Medicare beneficiaries at 3-year intervals, pending FDA approval. If public and private payers start covering Shield alongside other CRC screening tests, it presents an opportunity for primary care physicians to reach the approximately 3 in 10 adults between ages 45 and 75 who are not being routinely screened.

A big concern, however, is that the availability of a blood test may cause patients who would have otherwise been screened with colonoscopy or stool tests to switch to the blood test. A cost-effectiveness analysis found that offering a blood test to patients who decline screening colonoscopy saves additional lives, but at the cost of more than $377,000 per life-year gained. Another study relying on three microsimulation models previously utilized by the US Preventive Services Task Force (USPSTF) found that annual FIT results in more life-years gained at substantially lower cost than blood-based screening every 3 years “even when uptake of blood-based screening was 20 percentage points higher than uptake of FIT.” As a result, a multidisciplinary expert panel concluded that blood-based screening should not substitute for established CRC screening tests, but instead be offered only to patients who decline those tests.

In practice, this will increase the complexity of the CRC screening conversations we have with patients. We will need to be clear that the blood test is not yet endorsed by the USPSTF or any major guideline group and is a second-line test that will miss most precancerous polyps. As with the stool tests, it is essential to emphasize that a positive result must be followed by diagnostic colonoscopy. To amend the cancer screening maxim I mentioned before: the blood test is not the best test for CRC, but it’s probably better than no test at all.

Dr. Lin is a family physician and associate director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania. He blogs at Common Sense Family Doctor.

A version of this article appeared on Medscape.com.


Are Carbs Really the Enemy?

Article Type
Changed
Thu, 04/25/2024 - 12:15

Recent headlines scream that we have an obesity problem and that carbs are the culprit. That leads me to ask: How did we get to blaming carbs as the enemy in the war against obesity?

First, a quick review of the history of diet and macronutrient content.

A long time ago, prehistoric humans foraged and hunted for food. Protein and fat were procured from animal meat, which was very important for encephalization, or evolutionary increase in the complexity or relative size of the brain. Most of the requirements for protein and iron were satisfied by hunting and eating land animals as well as consuming marine life that washed up on shore.

Carbohydrates, in the form of plant foods, provided another source of energy for prehistoric hunter-gatherers, offsetting the high protein content of the rest of their diet. These foods, however, were available only during spring and summer.

Then, about 10,000 years ago, plant and animal agriculture began, and humans saw a permanent shift in the macronutrient content of our daily intake, making it more consistent and stable. Initially, the changes in nutrient characteristics were subtle, as wild foods gave way to cultivated foods with the Agricultural Revolution in the mid-17th century. The pace quickened less than 200 years ago with the Industrial Revolution, which brought semiprocessed and ultraprocessed foods.

This change in food intake altered human physiology, with major changes in our digestive, immune, and neural physiology and an increase in chronic disease prevalence. The last 50 years have seen an increase in obesity in the United States, along with increases in chronic diseases such as type 2 diabetes, which leads to cardiovascular disease and certain cancers.

Back to Carbohydrates: Do We Need Them? How Much? What Kind?

The increase in saturated fat, refined carbohydrates, and sugars in the food we eat represents a major change and is arguably the smoking gun of the obesity epidemic. Unfortunately, ultraprocessed foods have become a staple of the standard American, or Western, diet.

Ultraprocessed foods such as cakes, cookies, crackers, sugary breakfast cereals, pizza, potato chips, soft drinks, and ice cream are eons away from our prehistoric diet of wild game, nuts, fruits, and berries, on which our digestive, immune, and nervous systems evolved. The pace at which ultraprocessed foods have entered our diet outstrips the time necessary for our digestive systems and genes to adapt to these foods. In this context, they are indeed pathogenic.

So when did humans consume an “optimal” diet? This is hard to say. During the period of brain evolution, we needed protein and iron and succumbed to infections and trauma. Into the early 1900s, we continued to succumb to infection, until the discovery of antibiotics. Soon thereafter, industrialization and processed foods led to weight gain and the chronic diseases of the cardiovascular system and type 2 diabetes.

Carbohydrates provide calories and fiber and some micronutrients, which are needed for energy, metabolism, and bowel and immune health. But how much do we need? 

Currently in the United States, the percentage of total food energy derived from the three major macronutrients is: carbohydrates, 51.8%; fat, 32.8%; and protein, 15.4%. Current advice for a healthy diet to lower risk for cardiovascular disease is to limit fat intake to 30% of total energy, protein to 15%, and to increase complex carbohydrates to 55%-60% of total energy. But we also need to qualify this in terms of the quality of the macronutrient, particularly carbohydrates. 
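
For readers who want to check where a given day’s intake falls relative to these percentages, the conversion from grams to percent of energy uses the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). A minimal sketch; the example gram amounts are invented for illustration:

```python
# Convert a day's macronutrient grams into percent of total energy
# using the standard Atwater factors. The example intake is made up.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def energy_split(grams: dict) -> dict:
    kcal = {k: g * KCAL_PER_GRAM[k] for k, g in grams.items()}
    total = sum(kcal.values())
    return {k: round(100 * v / total, 1) for k, v in kcal.items()}

print(energy_split({"carbohydrate": 300, "protein": 90, "fat": 85}))
# -> {'carbohydrate': 51.6, 'protein': 15.5, 'fat': 32.9},
#    close to the current US averages quoted above
```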

In addition to quality, the macronutrient content of the diet has varied considerably since prehistoric times, when dietary protein intake was high at 19%-35% of energy, at the expense of carbohydrate (22%-40% of energy).

If our genes haven’t kept up with industrialization, then why do we need so many carbohydrates to equate to 55%-60% of energy? Is it possible that we are confusing what is available with what we actually need? What do I mean by this?

We certainly have changed the landscape of the world through agriculture, which has allowed us to procreate and feed ourselves, and industrialization has certainly increased the availability of cheap, accessible food. Protein in the form of meat, fish, and fowl is harder to get in industrialized nations, as are fruits and vegetables. These were the foods of our ancestors. It may be that a “healthy” diet comes to be defined as the one that is available.

For instance, the Mediterranean diet is somewhat higher in fat content, at 40%-50% fat (mostly monounsaturated and polyunsaturated), similar in protein content, and lower in carbohydrate content than the typical Western diet. The Dietary Approaches to Stop Hypertension (DASH) diet is lower in fat at 25% of total calories, higher in carbohydrates at 55%, and lower in protein; but this diet was generated in the United States and is therefore more Western.

We need high-quality protein for organ and muscle function, high-quality unsaturated and monounsaturated fats for brain and cellular functions, and high-quality complex carbohydrates for energy and gut health, as well as micronutrients for many cellular functions. A ketogenic diet is not sustainable in the long term for these reasons, chiefly the need for some carbohydrates for gut health and for micronutrients.

How much carbohydrate is needed should take into consideration energy expenditure as well as micronutrient and fiber intake. Protein and fat can contribute to energy production, but not as readily as carbohydrates, which can quickly restore glycogen in the muscle and liver. What’s interesting is that our ancestors were able to hunt and run away from danger on small amounts of carbohydrates from plants and berries plus the protein and fat from animals and fish — but the Olympics weren’t a thing then!

It may be another 200,000 years before our genes catch up to ultraprocessed foods and the simple carbohydrates and sugars contained in these products. Evidence suggests that ultraprocessed foods cause inflammation in organs like the liver, adipose tissue, the heart, and even the brain. In the brain, this inflammation may be what’s causing us to defend a higher body weight set point in this environment of easily obtained highly palatable ultraprocessed foods. 

Let’s not wait until our genes catch up and our bodies tolerate junk food without disease progression. It could be like waiting for Godot!

Dr. Apovian is professor of medicine, Harvard Medical School, and codirector, Center for Weight Management and Wellness, Brigham and Women’s Hospital, Boston, Massachusetts. She disclosed ties to Altimmune, CinFina Pharma, Cowen and Company, EPG Communication Holdings, Form Health, Gelesis, and L-Nutra.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

 

Recent headlines scream that we have an obesity problem and that carbs are the culprit for the problem. That leads me to ask: How did we get to blaming carbs as the enemy in the war against obesity?

First, a quick review of the history of diet and macronutrient content.

A long time ago, prehistoric humans foraged and hunted for food. Protein and fat were procured from animal meat, which was very important for encephalization, or evolutionary increase in the complexity or relative size of the brain. Most of the requirements for protein and iron were satisfied by hunting and eating land animals as well as consuming marine life that washed up on shore.

Carbohydrates in the form of plant foods served as the only sources of energy available to prehistoric hunter-gatherers, which offset the high protein content of the rest of their diet. These were only available during spring and summer.

Then, about 10,000 years ago, plant and animal agriculture began, and humans saw a permanent shift in the macronutrient content of our daily intake so that it was more consistent and stable. Initially, the nutrient characteristic changes were subtle, going from wild food to cultivated food with the Agricultural Revolution in the mid-17th century. Then, it changed even more rapidly less than 200 years ago with the Industrial Revolution, resulting in semiprocessed and ultraprocessed foods.

This change in food intake altered human physiology, with major changes in our digestive, immune, and neural physiology and an increase in chronic disease prevalence. The last 50 years has seen an increase in obesity in the United States, along with increases in chronic disease such as type 2 diabetes, which leads cardiovascular disease and certain cancers. 
 

Back to Carbohydrates: Do We Need Them? How Much? What Kind?

The increase in the macronutrient content of the food we eat containing saturated fat and refined carbohydrates and sugars represents a major change and is arguably the smoking gun of the obesity epidemic. Unfortunately, ultraprocessed foods have become a staple of the standard American or Western diet. 

Recent headlines scream that we have an obesity problem and that carbs are the culprit. That leads me to ask: How did we get to blaming carbs as the enemy in the war against obesity?

First, a quick review of the history of diet and macronutrient content.

A long time ago, prehistoric humans foraged and hunted for food. Protein and fat were procured from animal meat, which was very important for encephalization, the evolutionary increase in the complexity and relative size of the brain. Most of the requirements for protein and iron were satisfied by hunting and eating land animals as well as by consuming marine life that washed up on shore.

Carbohydrates, in the form of plant foods, served as the other main source of energy available to prehistoric hunter-gatherers, offsetting the high protein content of the rest of their diet. These foods were available only during spring and summer.

Then, about 10,000 years ago, plant and animal agriculture began, and the macronutrient content of the human diet shifted permanently, becoming more consistent and stable. At first the changes were subtle, as wild foods gave way to cultivated ones; they accelerated with the Agricultural Revolution of the mid-17th century and then changed even more rapidly less than 200 years ago with the Industrial Revolution, which brought semiprocessed and ultraprocessed foods.

This change in food intake altered human physiology, with major changes in our digestive, immune, and neural physiology and an increase in chronic disease prevalence. The last 50 years have seen a rise in obesity in the United States, along with increases in chronic diseases such as type 2 diabetes, which leads to cardiovascular disease, and certain cancers.
 

Back to Carbohydrates: Do We Need Them? How Much? What Kind?

The rising share of saturated fat and refined carbohydrates and sugars in the food we eat represents a major change and is arguably the smoking gun of the obesity epidemic. Unfortunately, ultraprocessed foods have become a staple of the standard American, or Western, diet.

Ultraprocessed foods such as cakes, cookies, crackers, sugary breakfast cereals, pizza, potato chips, soft drinks, and ice cream are eons away from the prehistoric diet of wild game, nuts, fruits, and berries on which our digestive, immune, and nervous systems evolved. The pace at which ultraprocessed foods have entered our diet has outstripped the time needed for our digestive systems and genes to adapt to them. In this context, they are indeed pathogenic.

So when did humans consume an “optimal” diet? This is hard to say. During the period of brain evolution, we needed protein and iron and routinely succumbed to infection and trauma. In the early 1900s, we continued to succumb to infection until the discovery of antibiotics. Soon thereafter, industrialization and processed foods led to weight gain and to chronic diseases such as cardiovascular disease and type 2 diabetes.

Carbohydrates provide calories, fiber, and some micronutrients, which are needed for energy, metabolism, and bowel and immune health. But how much do we need?

Currently in the United States, the percentage of total food energy derived from the three major macronutrients is carbohydrate, 51.8%; fat, 32.8%; and protein, 15.4%. Current advice for a healthy diet that lowers cardiovascular risk is to limit fat to 30% of total energy and protein to 15%, and to increase complex carbohydrates to 55%-60% of total energy. But we also need to qualify this in terms of the quality of the macronutrient, particularly carbohydrate.
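
To put those percentages in concrete terms, here is a minimal sketch in Python (purely illustrative, not from the original article) that converts percent-of-energy targets into grams per day, assuming the standard Atwater factors of 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat.

    # Convert percent-of-energy macronutrient targets into grams per day,
    # assuming Atwater factors: carbohydrate 4 kcal/g, protein 4 kcal/g, fat 9 kcal/g.
    KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

    def grams_per_day(total_kcal, percent_of_energy):
        """Grams per day of each macronutrient, given its share of total energy."""
        return {
            nutrient: round(total_kcal * pct / 100 / KCAL_PER_GRAM[nutrient])
            for nutrient, pct in percent_of_energy.items()
        }

    # Example: a 2,000-kcal diet at 55% carbohydrate, 30% fat, 15% protein.
    print(grams_per_day(2000, {"carbohydrate": 55, "fat": 30, "protein": 15}))
    # {'carbohydrate': 275, 'fat': 67, 'protein': 75}

On a 2,000-kcal diet, for example, the advised 55% of energy from carbohydrate works out to roughly 275 g/day.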

Quality aside, the macronutrient composition of the diet has varied considerably since prehistoric times, when dietary protein intake was high at 19%-35% of energy, at the expense of carbohydrate (22%-40% of energy).

If our genes haven’t kept up with industrialization, then why do we need carbohydrates to supply as much as 55%-60% of our energy? Is it possible that we are confusing what is available with what we actually need? What do I mean by this?

We have certainly changed the landscape of the world through agriculture, which has allowed us to procreate and feed ourselves, and industrialization has certainly increased the availability of cheap food. Protein in the form of meat, fish, and fowl is harder to come by in industrialized nations, as are fruits and vegetables, yet these whole foods were the staples of our ancestors. It may be that the diet we consider healthy is simply the one that is available.

For instance, the Mediterranean diet is somewhat higher in fat, at 40%-50% of calories (mostly monounsaturated and polyunsaturated), similar in protein, and lower in carbohydrate than the typical Western diet. The Dietary Approaches to Stop Hypertension (DASH) diet is lower in fat at 25% of total calories, higher in carbohydrate at 55%, and lower in protein; but it was developed in the United States and is therefore more Western.

We need high-quality protein for organ and muscle function, high-quality unsaturated and monounsaturated fats for brain and cellular function, and high-quality complex carbohydrates for energy and gut health, as well as micronutrients for many cellular functions. A ketogenic diet is not sustainable over the long term for these reasons, chiefly the need for some carbohydrates to support gut health and supply micronutrients.

How much carbohydrate we need should take into account energy expenditure as well as micronutrient and fiber intake. Protein and fat can contribute to energy production, but not as readily as carbohydrates, which can quickly restore glycogen in muscle and the liver. What’s interesting is that our ancestors were able to hunt and to run from danger on small amounts of carbohydrate from plants and berries plus the protein and fat from animals and fish, although the Olympics weren’t a thing then!

It may be another 200,000 years before our genes catch up to ultraprocessed foods and the simple carbohydrates and sugars they contain. Evidence suggests that ultraprocessed foods cause inflammation in the liver, adipose tissue, the heart, and even the brain. In the brain, this inflammation may be what causes us to defend a higher body weight set point in an environment of easily obtained, highly palatable ultraprocessed foods.

Let’s not wait until our genes catch up and our bodies tolerate junk food without disease progression. It could be like waiting for Godot!

Dr. Apovian is professor of medicine, Harvard Medical School, and codirector, Center for Weight Management and Wellness, Brigham and Women’s Hospital, Boston, Massachusetts. She disclosed ties to Altimmune, CinFina Pharma, Cowen and Company, EPG Communication Holdings, Form Health, Gelesis, and L-Nutra.

A version of this article appeared on Medscape.com.
