Is AI Use Causing Endoscopists to Lose Their Skills?


Routine use of artificial intelligence (AI) may lead to a loss of skills among clinicians who perform colonoscopies, thereby affecting patient outcomes, a large observational study suggested.

“The extent and consistency of the adenoma detection rate (ADR) drop after long-term AI use were not expected,” study authors Krzysztof Budzyń, MD, and Marcin Romańczyk, MD, of the Academy of Silesia, Katowice, Poland, told GI & Hepatology News. “We thought there might be a small effect, but the 6% absolute decrease — observed in several centers and among most endoscopists — points to a genuine change in behavior. This was especially notable because all participants were very experienced, with more than 2000 colonoscopies each.”

Another unexpected result, they said, “was that the decrease was stronger in centers with higher starting ADRs and in certain patient groups, such as women under 60. We had assumed experienced clinicians would be less affected, but our results show that even highly skilled practitioners can be influenced.”

The study was published online in The Lancet Gastroenterology & Hepatology.

 

ADR Reduced After AI Use

To assess how endoscopists who regularly used AI performed colonoscopy when it was not in use, researchers conducted a retrospective, observational study at four endoscopy centers in Poland taking part in the ACCEPT trial.

These centers introduced AI tools for polyp detection at the end of 2021, after which colonoscopies were randomly assigned to be done with or without AI assistance.

The researchers assessed colonoscopy quality by comparing two different phases: 3 months before and 3 months after AI implementation. All diagnostic colonoscopies were included, except for those involving intensive anticoagulant use, pregnancy, or a history of colorectal resection or inflammatory bowel disease.

The primary outcome was the change in the ADR of standard, non-AI-assisted colonoscopy before and after AI exposure.

Between September 2021 and March 2022, a total of 2177 colonoscopies were conducted, including 1443 without AI use and 734 with AI. The current analysis focused on the 795 patients who underwent non-AI-assisted colonoscopy before the introduction of AI and the 648 who underwent non-AI-assisted colonoscopy after.

Participants’ median age was 61 years, and 59% were women. The colonoscopies were performed by 19 experienced endoscopists who had conducted over 2000 colonoscopies each.

The ADR of standard colonoscopy decreased significantly from 28.4% (226 of 795) before the introduction of AI to 22.4% (145 of 648) after, corresponding to a 20% relative and 6% absolute reduction in the ADR.
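
For readers who want to verify the arithmetic, the absolute and relative figures follow directly from the counts reported above; here is a minimal Python sketch using only those counts.

```python
# Reproduce the reported ADR change from the raw counts in the article.
adr_before = 226 / 795   # standard-colonoscopy ADR before AI introduction (~28.4%)
adr_after = 145 / 648    # standard-colonoscopy ADR after AI exposure (~22.4%)

absolute_drop = adr_before - adr_after      # difference in percentage points
relative_drop = absolute_drop / adr_before  # drop as a share of the baseline ADR

print(f"Absolute reduction: {absolute_drop:.1%}")  # ~6 percentage points
print(f"Relative reduction: {relative_drop:.1%}")  # ~21%, reported as roughly 20%
```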

The ADR for AI-assisted colonoscopies was 25.3% (186 of 734).

The number of adenomas per colonoscopy (APC) in patients with at least one adenoma detected did not change significantly between the periods before and after AI exposure, with a mean of 1.91 before vs 1.92 after. Similarly, the mean advanced APC was comparable between the two periods (0.062 vs 0.063).

The mean advanced APC on standard colonoscopy in patients with at least one adenoma detected was 0.22 before AI exposure and 0.28 after AI exposure.

Colorectal cancers were detected in 6 (0.8%) of 795 colonoscopies before AI exposure and in 8 (1.2%) of 648 after AI exposure.

In multivariable logistic regression analysis, exposure to AI (odds ratio [OR], 0.69), male patient sex (OR, 1.78), and patient age of at least 60 years (OR, 3.60) were independent factors significantly associated with ADR.
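
The odds ratios above come from a multivariable logistic regression. As a rough illustration of the general approach (not the authors' actual analysis, code, or data), the sketch below fits the same kind of model to simulated per-procedure records; the sample size, baseline intercept, and variable names are assumptions.

```python
# Illustrative only: simulated records, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1443  # roughly the number of non-AI-assisted colonoscopies analyzed (assumed here)

df = pd.DataFrame({
    "post_ai": rng.integers(0, 2, n),    # 1 = performed after AI was introduced
    "male": rng.integers(0, 2, n),       # 1 = male patient
    "age_ge_60": rng.integers(0, 2, n),  # 1 = patient aged 60 years or older
})

# Simulate adenoma detection with effects in the direction the study reports.
log_odds = -1.6 + (np.log(0.69) * df.post_ai
                   + np.log(1.78) * df.male
                   + np.log(3.60) * df.age_ge_60)
df["adenoma_detected"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = smf.logit("adenoma_detected ~ post_ai + male + age_ge_60", data=df).fit(disp=False)
# Exponentiated coefficients; the covariate entries should land near 0.69, 1.78, and 3.60.
print(np.exp(model.params))
```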

In all centers, the ADR for standard, non-AI-assisted colonoscopy was reduced after AI exposure, although the magnitude of ADR reduction varied greatly between centers, according to the authors.

“Clinicians should be aware that while AI can boost detection rates, prolonged reliance may subtly affect their performance when the technology is not available,” Budzyń and Romańczyk said. “This does not mean AI should be avoided — rather, it highlights the need for conscious engagement with the task, even when AI is assisting. Monitoring one’s own detection rates in both AI-assisted and non-AI-assisted procedures can help identify changes early.”

“Endoscopists should view AI as a collaborative partner, not a replacement for their vigilance and judgment,” they concluded. “Integrating AI effectively means using it to complement, not substitute, core observational and diagnostic skills. In short, enjoy the benefits of AI, but keep your skills sharp — your patients depend on both.”

Omer Ahmed, MD, of University College London, London, England, offered a similar message in a related editorial. The study “compels us to carefully consider the effect of AI integration into routine endoscopic practice,” he wrote. “Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy.”

 

‘Certainly a Signal’

Commenting on the study for GI & Hepatology News, Rajiv Bhuta, MD, assistant professor of clinical gastroenterology and hepatology at Temple University and a gastroenterologist at Temple University Hospital, both in Philadelphia, said, “On the face of it, these findings would seem to correlate with all our lived experiences as humans. Any skill or task that we give to a machine will inherently ‘de-skill’ or weaken our ability to perform it.”

“The only way to miss a polyp is either due to lack of attention/recognition of a polyp in the field of view or a lack of fold exposure and cleansing,” said Bhuta, who was not involved in the study. “For AI to specifically de-skill polyp detection, it would mean the AI is conditioning physicians to pay less active attention during the procedure, similar to the way a driver may pay less attention in a car that has self-driving capabilities.”

That said, he noted that this is a small retrospective observational study with a short timeframe and an average of fewer than 100 colonoscopies per physician.

“My own ADR may vary by 8% or more by random chance in such a small dataset,” he said. “It’s hard to draw any real conclusions, but it is certainly a signal.”
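
Bhuta's point about chance variation can be put in rough numbers with a binomial standard error; in the sketch below, the per-physician volume and underlying detection rate are illustrative assumptions chosen to match the “fewer than 100 colonoscopies per physician” noted above.

```python
import math

true_adr = 0.25   # assumed underlying per-physician detection rate (illustrative)
n = 75            # assumed colonoscopies per physician in one study phase (illustrative)

se = math.sqrt(true_adr * (1 - true_adr) / n)  # binomial standard error of the observed ADR
half_ci = 1.96 * se                            # half-width of an approximate 95% CI

print(f"Standard error: {se:.1%}")             # ~5 percentage points
print(f"95% CI half-width: {half_ci:.1%}")     # ~10 points either way, consistent with his estimate
```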

The issue of de-skilling goes beyond gastroenterology and medicine, Bhuta noted. “We have invented millions of machines that have ‘de-skilled’ us in thousands of small ways, and mostly, we have benefited as a society. However, we’ve never had a machine that can de-skill our attention, our creativity, and our reason.”

“The question is not whether AI will de-skill us but when, where, and how do we set the boundaries of what we want a machine to do for us,” he said. “What is lost and what is gained by AI taking over these roles, and is that an acceptable trade-off?”

The study was funded by the European Commission and the Japan Society for the Promotion of Science. Budzyń, Romańczyk, and Bhuta declared having no competing interests. Ahmed declared receiving medical consultancy fees from Olympus, Odin Vision, Medtronic, and Norgine.

A version of this article appeared on Medscape.com.

New Guidelines for Pregnancy and IBD Aim to Quell Fears


The first-ever global guidelines for pregnancy and inflammatory bowel disease (IBD) recommend continuing biologics and low-risk medications through pregnancy and lactation in women with IBD, suggesting this approach will not harm the fetus.

The guidelines also recommend that all women with IBD receive preconception counseling and be followed as high-risk pregnancies.

“Management of chronic illness in pregnant women has always been defined by fear of harming the fetus,” said Uma Mahadevan, MD, AGAF, director of the Colitis and Crohn’s Disease Center at the University of California San Francisco and chair of the Global Consensus Consortium that developed the guidelines. 

As a result, pregnant women are excluded from clinical trials of experimental therapies for IBD. And when a new therapy achieves regulatory approval, there are no human pregnancy safety data, only animal data. To fill this gap, the PIANO study, of which Mahadevan is principal investigator, looked at the safety of IBD medications in pregnancy and short- and long-term outcomes of the children.

“With our ongoing work in pregnancy in the patient with IBD, we realized that inflammation in the mother is the leading cause of poor outcome for the infant,” she told GI & Hepatology News.

“We also have a better understanding of placental transfer of biologic agents” and the lack of exposure to the fetus during the first trimester, “a key period of organogenesis,” she added. 

Final recommendations were published simultaneously in six international journals, namely, Clinical Gastroenterology and Hepatology, American Journal of Gastroenterology, Gut, Inflammatory Bowel Diseases, Journal of Crohn’s and Colitis, and Alimentary Pharmacology and Therapeutics.

 

Surprising, Novel Findings

Limited provider knowledge led to varied practices in caring for women with IBD who become pregnant, according to the consensus authors. Practices are affected by local dogma, available resources, individual interpretation of the literature, and fear of harming the fetus. 

“The variations in guidelines by different societies and countries reflect this and lead to confusion for physicians and patients alike,” the authors of the guidelines wrote. 

Therefore, the Global Consensus Consortium — a group of 39 IBD experts, including teratologists and maternal-fetal medicine specialists, and seven patient advocates from six continents — convened to review and assess current data and come to an agreement on best practices. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) process was used when sufficient published data were available, and the Research and Development process when expert opinion was needed to guide consistent practice.

“Some of the findings were expected, but others were novel,” said Mahadevan. 

Recommendations that might surprise clinicians include GRADE statement 9, which suggests that pregnant women with IBD take low-dose aspirin by 12 to 16 weeks’ gestation to prevent preterm preeclampsia. “This is based on the ASPRE study, showing that women at risk of preeclampsia can lower their risk by taking low-dose aspirin,” with no risk for flare, Mahadevan said.

In addition, GRADE statements 17-20 recommend/suggest that women continue their biologic throughout pregnancy without stopping. “North America has always recommended continuing during the third trimester, while Europe only recently has come to this,” Mahadevan said. “However, there was always some looseness about stopping at week X, Y, or Z. Now, we do recommend continuing the dose on schedule with no holding.”

The consortium also recommended continuing medications considered low risk, such as 5-aminosalicylates, sulfasalazine, thiopurines, and all monoclonal antibodies, during preconception, pregnancy, and lactation.

However, small-molecule drugs such as S1P receptor modulators and JAK inhibitors should be avoided for at least 1 month, and in some cases 3 months, before attempting conception, unless there is no alternative for the health of the mother. They should also be avoided during lactation.

GRADE statement 33, which suggests that the live rotavirus vaccine may be given to children with in utero exposure to biologics, is also new, Mahadevan noted. “All prior recommendations were that no live vaccine should be given in the first 6 months or longer if infants were exposed to biologics in utero, but based on a prospective Canadian study, there is no harm when given to these infants.”

Another novel recommendation is that women with IBD taking any monoclonal antibody, including the newer interleukin-23 inhibitors, may breastfeed, even though there are no clinical trial data at this point. The recommendation to continue these agents through pregnancy and lactation is based on placental physiology, as well as on the physiology of monoclonal antibody transfer in breast milk, according to the consortium.

Furthermore, the authors noted, there was no increase in infant infections at 4 months or 12 months if they were exposed to a biologic or thiopurine (or both) during pregnancy.

Overall, the consortium recommended that all pregnancies in women with IBD be considered high risk for complications. This is because many parts of the world, including the US, are “resource-limited,” Mahadevan explained. Since maternal-fetal medicine specialists are not widely available, the consortium suggested that all these patients be followed with increased monitoring and surveillance based on available resources.

In addition to the guidelines, patient videos in seven languages, a professional slide deck in English and Spanish, and a video on the global consensus are all available at https://pianostudy.org/.

This study was funded by The Leona B. and Harry H. Helmsley Charitable Trust.

Mahadevan reported being a consultant for AbbVie, Bristol Myers Squibb, Boehringer Ingelheim, Celltrion, Enveda, Gilead, Janssen, Lilly, Merck, Pfizer, Protagonist, Roivant, and Takeda.

A version of this article appeared on Medscape.com.

GLP-1 Use After Bariatric Surgery on the Rise


The proportion of patients taking a GLP-1 weight-loss drug following bariatric surgery increased substantially in recent years, although the timing of initiation after surgery varied widely, a large retrospective cohort study showed.

GLP-1 initiation was also more common among women, those who underwent sleeve gastrectomy, and those with lower postoperative weight loss as measured by BMI.

“Some patients do not lose as much weight as expected, or they regain weight after a few years. In such cases, GLP-1 therapies are emerging as an important option for weight management,” said principal investigator Hemalkumar Mehta, PhD, associate professor at Johns Hopkins Bloomberg School of Public Health in Baltimore. 

“We also noted many personal stories circulating on social media in which patients shared their experiences using GLP-1 after bariatric surgery,” he told GI & Hepatology News.

But when the researchers reviewed the scientific literature, they found no published evidence on GLP-1 use in this setting and little or no data on outcomes with the newer drugs such as semaglutide and tirzepatide. “This gap motivated us to conduct the current study,” said Mehta. The study was published in JAMA Surgery.

The researchers analyzed data from a national multicenter database of electronic health records of approximately 113 million US adults to characterize the use of and factors associated with GLP-1 initiation after bariatric surgery.

Among 112,858 individuals undergoing bariatric surgery during the study period, the mean age was 45.2 years, and 78.9% were women.

By self-reported race, 1.1% of patients were Asian, 22.1% were Black or African American, 64.2% were White, and 12.6% reported other races (American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander, or unknown).

A total of 15,749 individuals (14%) initiated GLP-1s post-surgery, with 3391 (21.5%) beginning within 2 years of surgery and the remainder initiating during postsurgical years 3-4 (32.3%), 5-6 (25.2%), or later (21%).

Notably, GLP-1 use rose sharply over time, from 1.7% in the January 2015 to December 2019 cohort to 12.6% in the June 2020 to May 2025 cohort.

 

Differences Between Users and Nonusers

Those who initiated GLP-1s differed significantly from those who did not: GLP-1 users vs nonusers were younger (mean age, 44.9 years vs 45.2 years), and use was more common among women vs men (15.1% vs 9.7%), among Black or African American vs White patients (15.8% vs 13.5%), and among those who underwent sleeve gastrectomy vs Roux-en-Y gastric bypass (14.9% vs 12.1%).

Looked at another way, women (adjusted hazard ratio [aHR], 1.61), those undergoing sleeve gastrectomy (aHR, 1.42), and those with type 2 diabetes (aHR, 1.34) were more likely to initiate GLP-1s than their counterparts.

The overall median presurgical BMI was 42. When the researchers analyzed obesity class based on BMI, they found that the likelihood of GLP-1 use was 1.73 times as high among patients with class 1 obesity (BMI, 30.0-34.9), 2.19 times as high among those with class 2 obesity (BMI, 35.0-39.9), and 2.69 times as high among those with class 3 obesity (BMI ≥ 40) as among overweight patients (BMI, 25.0-29.9).

The median post-surgery BMI for GLP-1 users at drug initiation was 36.7. Each one-unit increase in postsurgical BMI was associated with an 8% increase in the likelihood of GLP-1 initiation (aHR, 1.08).
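
To make these figures concrete, the BMI cutoffs above define the obesity classes, and the per-unit hazard ratio compounds multiplicatively across BMI units; the short sketch below illustrates both, with the five-unit comparison chosen purely as a hypothetical example.

```python
def bmi_class(bmi: float) -> str:
    """Map a BMI value to the weight class used in the article."""
    if bmi < 25.0:
        return "normal weight or underweight"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "class 1 obesity"
    if bmi < 40.0:
        return "class 2 obesity"
    return "class 3 obesity"

per_unit_ahr = 1.08   # reported aHR per one-unit increase in postsurgical BMI
bmi_gap = 5           # hypothetical BMI difference between two patients

print(bmi_class(36.7))  # median postsurgical BMI at GLP-1 initiation -> class 2 obesity
print(f"aHR across {bmi_gap} BMI units: {per_unit_ahr ** bmi_gap:.2f}")  # ~1.47
```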

“Importantly, our study did not specifically evaluate the effectiveness of GLP-1 therapy on weight loss after surgery,” Mehta noted. That issue and others, such as optimal timing for initiating GLP-1s, are currently under investigation.

In a related editorial, Kate Lauer, MD, of the University of Wisconsin-Madison, and colleagues noted that the study had several limitations. It relied on data predating the US FDA approvals of semaglutide and tirzepatide, currently the two most prescribed GLP-1s, potentially limiting its applicability to current practice.

Furthermore, the prescribing data did not capture dose, titration schedules, or adherence, which are “critical for understanding treatment efficacy,” they wrote. “Nonetheless, the findings highlight two important trends: (1) GLP-1s are being increasingly used as an adjunct after bariatric surgery, and (2) there is substantial variability in the timing of their initiation.”

 

‘Logical’ to Use GLP-1s Post Surgery

Commenting on the study findings for GI & Hepatology News, Louis Aronne, MD, director of the Comprehensive Weight Control Center at Weill Cornell Medicine in New York City, who was not involved in the study, said, “I think it is perfectly logical to use GLP-1s in patients who have had bariatric surgery.”

In this study, weight loss in those who took GLP-1s was about 12% (from a median BMI of 42 pre-surgery to 36.7 when a GLP-1 was initiated), which is significantly less than average, Aronne noted. “The patients still had Class 2 obesity.”

“Obesity is the same as other metabolic diseases,” he added. “We have to use common sense and good medical judgment when treating patients. If surgery isn’t completely effective and weight loss is inadequate, I would recommend medications.”

Of note, his team has found that lower doses of GLP-1s are required in those who have had surgery than in those who have not. “My opinion is that patients who have undergone bariatric surgery seem to be more sensitive to the medications than the average patient, but this hasn’t been carefully studied.”

To prepare patients for the possible use of GLP-1s post-surgery, he suggested telling those with a very high BMI that “they may need medication in addition to the procedure in order to get the best result.”

Mehta added, “Ultimately, the decision to start GLP-1s after surgery is shared between patients and clinicians. Given the amount of media coverage on GLP-1 therapies, it is not surprising that more patients are initiating these discussions with their doctors.”

Mehta is supported by the US National Institute on Aging and reported receiving grants from the institute for this study; no other funding was reported. Lauer reported receiving grants from the US National Institutes of Health.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

The proportion of patients taking a GLP-1 weight-loss drug following bariatric surgery increased substantially in recent years, although the timing of initiation after surgery varied widely, a large retrospective cohort study showed.

GLP-1 initiation was also more common among women, those who underwent sleeve gastrectomy, and those with lower postoperative weight loss as measured by BMI.

“Some patients do not lose as much weight as expected, or they regain weight after a few years. In such cases, GLP-1 therapies are emerging as an important option for weight management,” said principal investigator Hemalkumar Mehta, PhD, associate professor at Johns Hopkins Bloomberg School of Public Health in Baltimore. 

“We also noted many personal stories circulating on social media in which patients shared their experiences using GLP-1 after bariatric surgery,” he told GI & Hepatology News

But when the researchers reviewed the scientific literature, they found no published evidence on GLP-1 use in this setting and little or no data on outcomes with the newer drugs such as semaglutide and tirzepatide. “This gap motivated us to conduct the current study,” said Mehta. The study was published in JAMA Surgery.

The researchers analyzed data from a national multicenter database of electronic health records of approximately 113 million US adults to characterize the use of and factors associated with GLP-1 initiation after bariatric surgery.

Among 112,858 individuals undergoing bariatric surgery during the study period, the mean age was 45.2 years, and 78.9% were women.

By self-report race, 1.1% were Asian, 22.1% were Black or African American, 64.2% were White individuals, and 12.6% reported belonging to other races (American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander, or unknown).

A total of 15,749 individuals (14%) initiated GLP-1s post-surgery, with 3391 (21.5%) beginning within 2 years of surgery and the remainder initiating during postsurgical years 3-4 (32.3%), 5-6 (25.2%), or later (21%).

Notably, the proportion of GLP-1 use increased more in the more recent cohort, from 1.7% in the January 2015-December 2019 cohort to 12.6% from June 2020 to May 2025.

 

Differences Between Users and Nonusers

Those who initiated GLP-1s differed significantly from those who did not: GLP-1 users vs nonusers were younger (mean age, 44.9 years vs 45.2 years), and use was more common among women vs men (15.1% vs 9.7%), among Black or African American vs White patients (15.8% vs 13.5%), and among those who underwent sleeve gastrectomy vs Roux-en-Y gastric bypass (14.9% vs 12.1%).

Looked at another way, women (adjusted hazard ratio [aHR], 1.61), those undergoing sleeve gastrectomy (aHR, 1.42), and those with type 2 diabetes (aHR, 1.34) were more likely to initiate GLP-1s than their counterparts.

The overall median presurgical BMI was 42. On analyzing obesity classification based on BMI, the researchers found that the chances of GLP-1 use were 1.73 times higher among class 1 obesity patients (BMI, 30.0-34.9), 2.19 times higher among class 2 obesity patients (BMI, 35.0-39.9), and 2.69 times higher among patients with class 3 obesity (BMI ≥ 40) than among overweight patients (BMI, 25.0-29.9).

The median post-surgery BMI for GLP-1 users at drug initiation was 36.7. Each one-unit increase in postsurgical BMI was associated with an 8% increase in the likelihood of GLP-1 initiation (aHR, 1.08).

“Importantly, our study did not specifically evaluate the effectiveness of GLP-1 therapy on weight loss after surgery,” Mehta noted. That issue and others, such as optimal timing for initiating GLP-1s, are currently under investigation.

In a related editorial, Kate Lauer, MD, of the University of Wisconsin-Madison and colleagues noted that the study had several limitations. It relied on data prior to the USFDA approvals of semaglutide and tirzepatide, the two most prescribed GLP-1s currently, potentially limiting its applicability to current practice.

Furthermore, the prescribing data did not capture dose, titration schedules, or adherence, which are “critical for understanding treatment efficacy,” they wrote. “Nonetheless, the findings highlight two important trends: (1) GLP-1s are being increasingly used as an adjunct after bariatric surgery, and (2) there is substantial variability in the timing of their initiation.”

 

‘Logical’ to Use GLP-1s Post Surgery

Commenting on the study findings for GI & Hepatology News, Louis Aronne, MD, director of the Comprehensive Weight Control Center at Weill Cornell Medicine in New York City, who was not involved in the study, said, “I think it is perfectly logical to use GLP-1s in patients who have had bariatric surgery.”

In this study, weight loss in those who took GLP-1s was about 12% (from a median BMI of 42 pre-surgery to 36.7 when a GLP-1 was initiated), which is significantly less than average, Aronne noted. “The patients still had Class 2 obesity.”

“Obesity is the same as other metabolic diseases,” he added. “We have to use common sense and good medical judgment when treating patients. If surgery isn’t completely effective and weight loss is inadequate, I would recommend medications.”

Of note, his team has found that lower doses of GLP-1s are required in those who have had surgery than in those who have not. “My opinion is that patients who have undergone bariatric surgery seem to be more sensitive to the medications than the average patient, but this hasn’t been carefully studied.”

To prepare patients for the possible use of GLP1s post-surgery, he suggested telling those with very high BMI that “they may need medication in addition to the procedure in order to get the best result.”

Mehta added, “Ultimately, the decision to start GLP-1s after surgery is shared between patients and clinicians. Given the amount of media coverage on GLP-1 therapies, it is not surprising that more patients are initiating these discussions with their doctors.”

Mehta is supported by the US National Institute on Aging and reported receiving grants from the institute for this study; no other funding was reported. Lauer reported receiving grants from the US National Institutes of Health.

A version of this article first appeared on Medscape.com.



Wegovy Approved for MASH With Fibrosis, No Cirrhosis


The FDA has granted accelerated approval to Novo Nordisk’s Wegovy for the treatment of metabolic dysfunction-associated steatohepatitis (MASH) in adults with moderate-to-advanced fibrosis but without cirrhosis.

The once-weekly 2.4 mg semaglutide subcutaneous injection is given in conjunction with a reduced calorie diet and increased physical activity.

Among people living with overweight or obesity globally, 1 in 3 also have MASH.

The accelerated approval was based on part-one results from the ongoing two-part, phase-3 ESSENCE trial, in which Wegovy demonstrated a significant improvement in liver fibrosis with no worsening of steatohepatitis, as well as resolution of steatohepatitis with no worsening of liver fibrosis, compared with placebo at week 72. Those results were published online in April in The New England Journal of Medicine.

For the trial, 800 participants were randomly assigned to either Wegovy (534 participants) or placebo (266 participants) in addition to lifestyle changes. The mean age was 56 years and the mean BMI was 34. Most patients were White individuals (67.5%) and women (57.1%), and 55.9% of the patients had type 2 diabetes; 250 patients (31.3%) had stage II fibrosis and 550 (68.8%) had stage III fibrosis. Participants were on stable doses of lipid-lowering, glucose-management, and weight-loss medications.

At week 72, the first primary endpoint showed 63% of the 534 people treated with Wegovy achieved resolution of steatohepatitis and no worsening of liver fibrosis compared with 34% of 266 individuals treated with placebo — a statistically significant difference.

The second primary endpoint showed 37% of people treated with Wegovy achieved improvement in liver fibrosis and no worsening of steatohepatitis compared with 22% of those treated with placebo, also a significant difference.

A confirmatory secondary endpoint at week 72 showed 33% of patients treated with Wegovy achieved both resolution of steatohepatitis and improvement in liver fibrosis compared with 16% of those treated with placebo — a statistically significant difference of 17 percentage points in response rate.

In addition, 83.5% of the patients in the semaglutide group maintained the target dose of 2.4 mg until week 72.

Wegovy is also indicated, along with diet and physical activity, to reduce the risk for major cardiovascular events in adults with known heart disease and with either obesity or overweight. It is also indicated for adults and children aged 12 years or older with obesity, and some adults with overweight who also have weight-related medical problems, to help them lose excess body weight and keep the weight off.

 

What’s Next for Wegovy?

In February 2025, Novo Nordisk filed for regulatory approval in the EU, followed by regulatory submission in Japan in May 2025. Also in May, the FDA accepted a filing application for oral semaglutide 25 mg.

Furthermore, “There’s an expected readout of part 2 of ESSENCE in 2029, which aims to demonstrate treatment with Wegovy lowers the risk of liver-related clinical events, compared to placebo, in patients with MASH and F2 or F3 fibrosis at week 240,” a Novo Nordisk spokesperson told GI & Hepatology News.

Although the company has the technology to produce semaglutide as a pill or tablet, she said, “the US launch of oral semaglutide for obesity will be contingent on portfolio prioritization and manufacturing capacity.” The company has not yet submitted the 50 mg oral semaglutide to regulatory authorities.

“The oral form requires more active pharmaceutical ingredient (API),” she noted. “Given that we have a fixed amount of API, the injectable form enables us to treat more patients. We are currently expanding our oral and injectable production capacities globally with the aim of serving as many patients as possible. It requires time to build, install, validate, and ramp-up these production processes.”

 

A version of this article appeared on Medscape.com.


Cirrhosis Mortality Prediction Boosted by Machine Learning


Among hospitalized patients with cirrhosis, a machine learning (ML) model enhanced mortality prediction compared with traditional methods and was consistent across country income levels in a large global study.

“This highly inclusive, representative, and globally derived model has been externally validated,” Jasmohan Bajaj, MD, AGAF, professor of medicine at Virginia Commonwealth University in Richmond, Virginia, told GI & Hepatology News. “This gives us a crystal ball. It helps hospital teams, transplant centers, gastroenterology and intensive care unit services triage and prioritize patients more effectively.”




The study supporting the model, which Bajaj said “could be used at this stage,” was published online in Gastroenterology. The model is available for download at https://silveys.shinyapps.io/app_cleared/.

 

CLEARED Cohort Analyzed

Wide variations across the world regarding available resources, outpatient services, reasons for admission, and etiologies of cirrhosis can influence patient outcomes, according to Bajaj and colleagues. Therefore, they sought to use ML approaches to improve prognostication for all countries.

They analyzed admission-day data from the prospective Chronic Liver Disease Evolution And Registry for Events and Decompensation (CLEARED) consortium, which includes inpatients with cirrhosis enrolled from six continents. The analysis compared ML approaches with logistic regression to predict inpatient mortality.

The researchers performed internal validation (75/25 split) and subdivided the cohort by World Bank income status: low/low-middle (L-LMIC), upper middle (UMIC), and high (HIC). They determined that the ML model with the best area under the curve (AUC) would be externally validated in a US veteran cirrhosis inpatient population.

The CLEARED cohort included 7239 cirrhosis inpatients (mean age, 56 years; 64% men; median MELD-Na, 25) from 115 centers globally; 22.5% of centers belonged to L-LMICs, 41% to UMICs, and 34% to HICs.

A total of 808 patients (11.1%) died in the hospital.

Random-Forest analysis showed the best AUC (0.815), with good calibration. This was significantly better than the parametric logistic regression (AUC, 0.774) and LASSO (AUC, 0.787) models.

Random-Forest also outperformed logistic regression regardless of country income level: HIC (AUC, 0.806), UMIC (AUC, 0.867), and L-LMIC (AUC, 0.768).
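
As a rough illustration of the modeling comparison described above, the sketch below fits a random forest against plain and L1-penalized logistic regression on a synthetic dataset and scores each by AUC on a held-out 25% split. The data, feature names, and hyperparameters are placeholders, not the CLEARED variables or the authors' actual pipeline.

```python
# Minimal sketch of the model comparison described above: a random forest
# versus plain and L1-penalized logistic regression, judged by ROC AUC on a
# held-out 25% split. The synthetic data is a stand-in for admission-day
# covariates; it is not the study dataset or the authors' code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for admission-day covariates and inpatient mortality (about 11% positive).
X, y = make_classification(n_samples=7239, n_features=48, n_informative=15,
                           weights=[0.89, 0.11], random_state=0)

# 75/25 internal validation split, mirroring the design described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "logistic regression": LogisticRegression(max_iter=5000),
    "L1 (LASSO-style) logistic": LogisticRegression(penalty="l1", C=0.1,
                                                    solver="liblinear"),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

Inspecting `feature_importances_` on the fitted random forest is one simple way to surface influential variables, loosely analogous to the top-15 variable ranking reported next.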

Of the top 15 important variables selected by the Random-Forest model, admission for acute kidney injury, hepatic encephalopathy, high MELD-Na and white blood cell count, and not being in a high-income country were the most predictive of mortality.

In contrast, higher albumin, hemoglobin, diuretic use on admission, viral etiology, and being in a high-income country were most protective.

The Random-Forest model was validated in 28,670 veterans (mean age, 67 years; 96% men; median MELD-Na, 15), with an inpatient mortality of 4% (1158 patients).

The final Random-Forest model, using 48 of the 67 original covariates, attained a strong AUC of 0.859. A refit version using only the top 15 variables achieved a comparable AUC of 0.851.

 

Clinical Relevance

“Cirrhosis and resultant organ failures remain a dynamic and multidisciplinary problem,” Bajaj noted. “Machine learning techniques are one part of a multi-faceted management strategy that is required in this population.”

If patients fall into the high-risk category, he said, “careful consultation with patients, families, and clinical teams is needed before providing information, including where this model was derived from. The results of these discussions could be instructive regarding decisions for transfer, more aggressive monitoring/ICU transfer, palliative care or transplant assessments.”

Meena B. Bansal, MD, system chief, Division of Liver Diseases, Mount Sinai Health System in New York City, called the tool “very promising.” However, she told GI & Hepatology News, “it was validated on a VA [Veterans Affairs] cohort, which is a bit different than the cohort of patients seen at Mount Sinai. Therefore, validation in more academic tertiary care medical centers with high volume liver transplant would be helpful.”


 

Furthermore, said Bansal, who was not involved in the study, “they excluded those receiving a liver transplant, and while only a small number, this is an important limitation.”

Nevertheless, she added, “Artificial intelligence has great potential in predictive risk models and will likely be a tool that assists for risk stratification, clinical management, and hopefully improved clinical outcomes.”

This study was partly supported by a VA Merit review to Bajaj and the National Center for Advancing Translational Sciences, National Institutes of Health. No conflicts of interest were reported by any author.

A version of this article appeared on Medscape.com.


Sleep Changes in IBD Could Signal Inflammation, Flareups


Changes in sleep metrics detected with wearable technology could serve as an inflammation marker and potentially predict inflammatory bowel disease (IBD) flareups, regardless of whether a patient has symptoms, an observational study suggested.

Sleep data from 101 study participants over a mean duration of about 228 days revealed that altered sleep architecture was only apparent when inflammation was present — symptoms alone did not impact sleep cycles or signal inflammation.

“We thought symptoms might have an impact on sleep, but interestingly, our data showed that measurable changes like reduced rapid eye movement (REM) sleep and increased light sleep only occurred during periods of active inflammation,” Robert Hirten, MD, associate professor of Medicine (Gastroenterology), and Artificial Intelligence and Human Health, at the Icahn School of Medicine at Mount Sinai, New York City, told GI & Hepatology News.




“It was also interesting to see distinct patterns in sleep metrics begin to shift over the 45 days before a flare, suggesting the potential for sleep to serve as an early indicator of disease activity,” he added.

“Sleep is often overlooked in the management of IBD, but it may provide valuable insights into a patient’s underlying disease state,” he said. “While sleep monitoring isn’t yet a standard part of IBD care, this study highlights its potential as a noninvasive window into disease activity, and a promising area for future clinical integration.”

The study was published online in Clinical Gastroenterology and Hepatology.

 

Less REM Sleep, More Light Sleep

Researchers assessed the impact of inflammation and symptoms on sleep architecture in IBD by analyzing data from 101 individuals who answered daily disease activity surveys and wore a wearable device.

The mean age of participants was 41 years and 65.3% were women. Sixty-three participants (62.4%) had Crohn’s disease (CD) and 38 (37.6%) had ulcerative colitis (UC).

Forty participants (39.6%) used an Apple Watch; 50 (49.5%) used a Fitbit; and 11 (10.9%) used an Oura ring. Sleep architecture, sleep efficiency, and total hours asleep were collected from the devices. Participants were encouraged to wear their devices for at least 4 days per week and 8 hours per day and were not required to wear them at night. Participants provided data by linking their devices to ehive, Mount Sinai’s custom app.

Daily clinical disease activity was assessed using the UC or CD Patient Reported Outcome-2 survey. Participants were asked to answer at least four daily surveys each week.

Associations between sleep metrics and periods of symptomatic and inflammatory flares, and combinations of symptomatic and inflammatory activity, were compared to periods of symptomatic and inflammatory remission.

Furthermore, researchers explored the rate of change in sleep metrics for 45 days before and after inflammatory and symptomatic flares.

Participants contributed a mean of 228.16 nights of wearable data. During active inflammation, they spent a lower percentage of sleep time in REM sleep (20% vs 21.59%) and a greater percentage in light sleep (62.23% vs 59.95%) than during inflammatory remission. No differences were observed in the mean percentage of time in deep sleep, sleep efficiency, or total time asleep.
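
As a back-of-the-envelope illustration of this kind of comparison, the sketch below groups nightly sleep-stage percentages by inflammation status and applies a simple nonparametric test. The column names and example values are hypothetical, and the study's own statistical approach may well differ.

```python
# Illustrative comparison of nightly sleep-stage percentages between periods of
# active inflammation and inflammatory remission. The DataFrame columns
# ("rem_pct", "light_pct", "inflamed") are hypothetical placeholders, not the
# study's dataset or analysis code.
import pandas as pd
from scipy.stats import mannwhitneyu

# One row per participant-night of wearable data (placeholder example values).
nights = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "rem_pct":     [19.5, 22.0, 20.1, 21.8, 20.4, 21.2],
    "light_pct":   [63.0, 59.5, 62.5, 60.2, 61.8, 60.0],
    "inflamed":    [True, False, True, False, True, False],
})

for metric in ["rem_pct", "light_pct"]:
    flare = nights.loc[nights["inflamed"], metric]
    remission = nights.loc[~nights["inflamed"], metric]
    stat, p = mannwhitneyu(flare, remission)
    print(f"{metric}: inflamed mean {flare.mean():.1f}% vs "
          f"remission mean {remission.mean():.1f}% (Mann-Whitney p = {p:.2f})")
```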

During symptomatic flares, there were no differences in the percentage of sleep time in REM sleep, deep sleep, light sleep, or sleep efficiency compared with periods of inflammatory remission. However, participants slept less overall during symptomatic flares compared with during symptomatic remission.

During asymptomatic but inflamed periods, participants spent a lower percentage of time in REM sleep and more time in light sleep than during asymptomatic and uninflamed periods; however, there were no differences in sleep efficiency or total time asleep.

Similarly, participants had more light sleep and less REM sleep during symptomatic and inflammatory flares than during asymptomatic and uninflamed periods, but there were no differences in the percentage of time spent in deep sleep, sleep efficiency, or total time asleep.

Symptomatic flares alone, without inflammation, did not impact sleep metrics, the researchers concluded. However, periods with active inflammation were associated with a significantly smaller percentage of sleep time in REM sleep and a greater percentage of sleep time in light sleep.

The team also performed longitudinal mapping of sleep patterns before, during, and after disease exacerbations by analyzing sleep data for 6 weeks before and 6 weeks after flare episodes.

They found that sleep disturbances significantly worsen leading up to inflammatory flares and improve afterward, suggesting that sleep changes may signal upcoming increased disease activity. Evaluating the intersection of inflammatory and symptomatic flares, altered sleep architecture was only evident when inflammation was present.

“These findings raise important questions about whether intervening on sleep can actually impact inflammation or disease trajectory in IBD,” Hirten said. “Next steps include studying whether targeted sleep interventions can improve both sleep and IBD outcomes.”

While this research is still in the early stages, he said, “it suggests that sleep may have a relationship with inflammatory activity in IBD. For patients, it reinforces the value of paying attention to sleep changes.”

The findings also show the potential of wearable devices to guide more personalized monitoring, he added. “More work is needed before sleep metrics can be used routinely in clinical decision-making.”

 

Validates the Use of Wearables

Commenting on the study for GI & Hepatology News, Michael Mintz, MD, a gastroenterologist at Weill Cornell Medicine and NewYork-Presbyterian in New York City, observed, “Gastrointestinal symptoms often do not correlate with objective disease activity in IBD, creating a diagnostic challenge for gastroenterologists. Burdensome, expensive, and/or invasive testing, such as colonoscopies, stool tests, or imaging, are frequently required to monitor disease activity.” 

“This study is a first step in objectively monitoring inflammation in a patient-centric way that does not create undue burden to our patients,” he said. “It also provides longitudinal data that suggests changes in sleep patterns can pre-date disease flares, which ideally can lead to earlier intervention to prevent disease complications.”

Like Hirten, he noted that clinical decisions, such as changing IBD therapy, should not be based on the results of this study. “Rather this provides validation that wearable technology can provide useful objective data that correlates with disease activity.”

Furthermore, he said, it is not clear whether analyzing sleep data is a cost-effective way of monitoring IBD disease activity, or whether that data should be used alone or in combination with other objective disease markers, to influence clinical decision-making.

“This study provides proof of concept that there is a relationship between sleep characteristics and objective inflammation, but further studies are needed,” he said. “I am hopeful that this technology will give us another tool that we can use in clinical practice to monitor disease activity and improve outcomes in a way that is comfortable and convenient for our patients.”

This study was supported by a grant to Hirten from the US National Institutes of Health. Hirten reported receiving consulting fees from Bristol Myers Squibb and AbbVie; stock options from Salvo Health; and research support from Janssen, Intralytix, EnLiSense, and the Crohn’s and Colitis Foundation. Mintz declared no competing interests.

A version of this article appeared on Medscape.com.


Can Nonresponders to Antiobesity Medicines Be Predicted?


Emerging research indicates that phenotype-based testing may help identify which biologic process is driving an individual’s obesity, enabling clinicians to better tailor antiobesity medication (AOM) to the patient.

Currently, patient response to AOMs varies widely, with some patients responding robustly to AOMs and others responding weakly or not at all.

For example, trials of the GLP-1 semaglutide found that 32%-39.6% of people are “super responders,” achieving weight loss in excess of 20%, and a subgroup of 10.2%-16.7% of individuals are nonresponders. Similar variability was found with other AOMs, including the GLP-1 liraglutide and tirzepatide, a dual GLP-1/glucose-dependent insulinotropic polypeptide receptor agonist.

Studies of semaglutide suggest that people with obesity and type 2 diabetes (T2D) lose less weight on the drug than those without T2D, and men tend to lose less weight than women.

However, little else is known about predictors of response rates for various AOMs, and medication selection is typically based on patient or physician preference, comorbidities, medication interactions, and insurance coverage.

Although definitions of a “nonresponder” vary, the Endocrine Society’s latest guideline, which many clinicians follow, states that an AOM is considered effective if patients lose more than 5% of their body weight within 3 months.

Can nonresponders and lower responders be identified and helped? Yes, but it’s complicated.

“Treating obesity effectively means recognizing that not all patients respond the same way to the same treatment, and that’s not a failure; it’s a signal,” said Andres Acosta, MD, PhD, an obesity expert at Mayo Clinic, Rochester, Minnesota, and a cofounder of Phenomix Sciences, a biotech company in Menlo Park, California.

Dr. Andres Acosta



“Obesity is not a single disease. It’s a complex, multifactorial condition driven by diverse biological pathways,” he told GI & Hepatology News. “Semaglutide and other GLP-1s primarily act by reducing appetite and slowing gastric emptying, but not all patients have obesity that is primarily driven by appetite dysregulation.”

 

Phenotype-Based Profiling

Figuring out what drives an individual’s obesity is where a phenotype-based profiling test could possibly help.

Acosta and colleagues previously used a variety of validated studies and questionnaires to identify four phenotypes that represent distinct biologic drivers of obesity: hungry brain (abnormal satiation), emotional hunger (hedonic eating), hungry gut (abnormal satiety), and slow burn (decreased metabolic rate). In their pragmatic clinical trial, phenotype-guided AOM selection was associated with 1.75-fold greater weight loss after 12 months than the standard approach to drug selection, with mean weight loss of 15.9% and 9%, respectively.

“If a patient’s obesity isn’t primarily rooted in the mechanisms targeted by a particular drug, their response will naturally be limited,” Acosta said. “It’s not that they’re failing the medication; the medication simply isn’t the right match for their biology.”

For their new study, published online in Cell Metabolism, Acosta and colleagues built on their previous research by analyzing the genetic and nongenetic factors that influenced calories needed to reach satiation (Calories to Satiation [CTS]) in adults with obesity. They then used machine learning techniques to develop a CTS gene risk score (CTS-GRS) that could be measured by a DNA saliva test.
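
The article does not describe the underlying model, but a gene risk score of this kind is typically a weighted sum of risk-allele counts across selected variants. The sketch below is purely illustrative; the variant names and weights are hypothetical placeholders, not the CTS-GRS itself.

```python
# Illustrative sketch of a weighted gene risk score (GRS).
# Variant IDs and weights are hypothetical placeholders, not the CTS-GRS.
weights = {"rs_hypothetical_1": 0.42, "rs_hypothetical_2": -0.18, "rs_hypothetical_3": 0.27}

def gene_risk_score(genotypes: dict[str, int]) -> float:
    """Sum each variant's weight times the number of risk alleles carried (0, 1, or 2)."""
    return sum(w * genotypes.get(snp, 0) for snp, w in weights.items())

# One person's risk-allele counts at the same hypothetical variants
person = {"rs_hypothetical_1": 2, "rs_hypothetical_2": 1, "rs_hypothetical_3": 0}
print(f"GRS = {gene_risk_score(person):.2f}")  # in the article's framing, higher scores map to the hungry brain phenotype
```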

The study included 717 adults with obesity (mean age, 41; 75% women) with marked variability in satiation, ranging from 140 to 2166 kcals to reach satiation.

CTS was assessed through an ad libitum meal, combined with physiological and behavioral evaluations, including calorimetry, imaging, blood sampling, and gastric emptying tests. The largest contributors to CTS variability were sex and genetic factors, while other anthropometric measurements played lesser roles.

Various analyses and assessments of participants’ CTS-GRS scores showed that individuals with a high CTS-GRS, or hungry brain phenotype, experienced significantly greater weight loss when treated with phentermine/topiramate than those with a low CTS-GRS, or hungry gut, phenotype. After 52 weeks of treatment, individuals with the hungry brain phenotype lost an average of 17.4% of their body weight compared with 11.2% in those with the hungry gut phenotype.

An analysis of a separate 16-week study showed that patients with the hungry gut phenotype responded better to the GLP-1 liraglutide, losing 6.4% total body weight, compared to 3.3% for those with the hungry brain phenotype.

Overall, the CTS-GRS test predicted drug response with up to 84% accuracy (area under the curve, 0.76 in men and 0.84 in women). The authors acknowledged that these results need to be replicated prospectively and in more diverse populations to validate the test’s predictive ability.
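
For context, the area under the receiver operating characteristic curve (AUC) quoted above measures how well a continuous score separates responders from nonresponders, where 0.5 is chance and 1.0 is perfect discrimination. A minimal sketch with synthetic data, assuming scikit-learn is available:

```python
# Minimal sketch of computing ROC AUC for a predictive score; data are synthetic, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
responded = rng.integers(0, 2, size=200)                 # 1 = responder, 0 = nonresponder (synthetic labels)
score = responded * 0.8 + rng.normal(0, 1.0, size=200)   # synthetic score loosely associated with response

print(f"ROC AUC = {roc_auc_score(responded, score):.2f}")  # 0.5 = no discrimination, 1.0 = perfect
```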

“This kind of phenotype-based profiling allows us to predict which patients are more likely to respond and who might need a different intervention,” Acosta said. “It’s a critical step toward eliminating trial-and-error in obesity treatment.”

The test (MyPhenome test) is used at more than 80 healthcare clinics in the United States, according to Phenomix Sciences, which manufactures it. A company spokesperson said the test does not require FDA approval because it is used to predict obesity phenotypes to help inform treatment, but not to identify specific medications or other interventions. “If it were to do the latter,” the spokesperson said, “it would be considered a ‘companion diagnostic’ and subject to the FDA clearance process.”

 

What to Do if an AOM Isn’t Working?

It’s one thing to predict whether an individual might do better on one drug vs another, but what should clinicians do meanwhile to optimize weight loss for their patients who may be struggling on a particular drug?

“Efforts to predict the response to GLP-1 therapy have been a hot topic,” noted Sriram Machineni, MD, associate professor at Montefiore Medical Center, Bronx, New York, and founding director of the Fleischer Institute Medical Weight Center at Montefiore Einstein. Although the current study showed that genetic testing could predict responders, he agreed with Acosta that the results need to be replicated prospectively.

“In the absence of a validated tool for predicting response to specific medications, we use a prioritization process for trialing medications,” Machineni told GI & Hepatology News. “The prioritization is based on the suitability of the side-effect profile to the specific patient, including contraindications; benefits independent of weight loss, such as cardiovascular protection for semaglutide; average efficacy; and financial accessibility for patients.”

Predicting responders isn’t straightforward, said Robert Kushner, MD, professor of medicine and medical education at the Feinberg School of Medicine at Northwestern University and medical director of the Wellness Institute at Northwestern Memorial Hospital in Chicago.

Dr. Robert Kushner



“Despite looking at baseline demographic data such as race, ethnicity, age, weight, and BMI, we are unable to predict who will lose more or less weight,” he told GI & Hepatology News. The one exception is that women generally lose more weight than men. “However, even among females, we cannot discern which females will lose more weight than other females,” he said.

If an individual is not showing sufficient weight loss on a particular medication, “we first explore potential reasons that can be addressed, such as the patient is not taking the medication or is skipping doses,” Kushner said. If need be, they discuss changing to a different drug to improve compliance. He also stresses the importance of making lifestyle changes in diet and physical activity for patients taking AOMs.

Patients who do not lose at least 5% of their weight within 3 months are often unlikely to respond well to that medication even if they remain on it. “So, early response rates determine longer-term success,” Kushner said.

Acosta said that if a patient isn’t responding to one class of medication, he pivots to a treatment better aligned with their phenotype. “That could mean switching from a GLP-1 to a medication like [naltrexone/bupropion] or trying a new method altogether,” he said. “The key is that the treatment decision is rooted in the patient’s biology, not just a reaction to short-term results. We also emphasize the importance of long-term follow-up and support.”

The goal isn’t just weight loss but also improved health and quality of life, Acosta said. “Whether through medication, surgery, or behavior change, what matters most is tailoring the care plan to each individual’s unique biology and needs.”

The new study received support from the Mayo Clinic Clinical Research Trials Unit, Vivus Inc., and Phenomix Sciences. Acosta is supported by a National Institutes of Health grant.

Acosta is a co-founder and inventor of intellectual property licensed to Phenomix Sciences Inc.; has served as a consultant for Rhythm Pharmaceuticals, Gila Therapeutics, Amgen, General Mills, Boehringer Ingelheim, Currax Pharmaceuticals, Nestlé, Bausch Health, and Rare Diseases; and has received research support or had contracts with Vivus Inc., Satiogen Pharmaceuticals, Boehringer Ingelheim, and Rhythm Pharmaceuticals. Machineni has been involved in semaglutide and tirzepatide clinical trials and has been a consultant to Novo Nordisk, Eli Lilly and Company, and Rhythm Pharmaceuticals. Kushner is on the scientific advisory board for Novo Nordisk.

A version of this article appeared on Medscape.com.


Novel Gene Risk Score Predicts Outcomes After RYGB Surgery

Article Type
Changed
Wed, 06/11/2025 - 09:46

SAN DIEGO – A novel gene risk score, informed by machine learning, predicted weight-loss outcomes after Roux-en-Y gastric bypass (RYGB) surgery, a new analysis showed.

The findings suggested that the MyPhenome test (Phenomix Sciences) can help clinicians identify the patients most likely to benefit from bariatric procedures and those at greater risk for long-term weight regain after surgery.

“Patients with both a high genetic risk score and rare mutations in the leptin-melanocortin pathway (LMP) had significantly worse outcomes, maintaining only 4.9% total body weight loss [TBWL] over 15 years compared to up to 24.8% in other genetic groups,” Phenomix Sciences Co-founder Andres Acosta, MD, PhD, told GI & Hepatology News.

Dr. Andres Acosta



The study included details on the score’s development and predictive capability. It was presented at Digestive Disease Week® (DDW) 2025.

‘More Precise Bariatric Care’

The researchers recently developed a machine learning-assisted gene risk score for calories to satiation (CTSGRS), which mainly involves genes in the LMP. To assess the role of the score with or without LMP gene variants on weight loss and weight recurrence after RYGB, they identified 707 patients with a history of bariatric procedures from the Mayo Clinic Biobank. Patients with duodenal switch, revisional procedures, or who used antiobesity medications or became pregnant during follow-up were excluded.

To make predictions for 442 of the patients, the team first collected anthropometric data up to 15 years after RYGB. They then used a two-step approach: first, assessing for monogenic variants in the LMP and classifying participants as carriers (LMP+) or noncarriers (LMP-); second, assigning the gene risk score (CTSGRS+ or CTSGRS-).

The result was four groups: LMP+/CTSGRS+, LMP+/CTSGRS-, LMP-/CTSGRS+, and LMP-/CTSGRS-. Multiple regression analysis was used to analyze TBWL percentage (TBWL%) between the groups at different timepoints, adjusting for baseline weight, age, and gender.
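
The exact modeling code is not reported; the sketch below shows one common way such an adjusted group comparison is set up, using synthetic placeholder data and the statsmodels formula interface.

```python
# Hedged sketch (not the authors' code): regression of percent total body weight loss (TBWL%)
# on genotype group, adjusted for baseline weight, age, and gender. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
groups = ["LMP+/CTSGRS+", "LMP+/CTSGRS-", "LMP-/CTSGRS+", "LMP-/CTSGRS-"]
df = pd.DataFrame({
    "group": rng.choice(groups, size=n),
    "baseline_weight_kg": rng.normal(125, 20, size=n),
    "age": rng.integers(25, 70, size=n),
    "gender": rng.choice(["F", "M"], size=n),
})
# Synthetic outcome: the LMP+/CTSGRS+ group loses less weight, matching the direction reported in the article
df["tbwl_pct"] = -22 + 15 * (df["group"] == "LMP+/CTSGRS+") + rng.normal(0, 5, size=n)

model = smf.ols("tbwl_pct ~ C(group) + baseline_weight_kg + age + C(gender)", data=df).fit()
print(model.summary())  # coefficients on the C(group) terms estimate adjusted between-group differences
```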

At the 10-year follow-up, the LMP+/CTSGRS+ group demonstrated significantly greater weight recurrence (regain), measured as a percentage of total body weight, than the other groups.

At 15 years post-RYGB, the mean TBWL% for LMP+/CTSGRS+ was -4.9 vs -20.3 for LMP+/CTSGRS-, -18.0 for LMP-/CTSGRS+, and -24.8 for LMP-/CTSGRS-.

Further analyses showed that the LMP+/CTSGRS+ group had significantly less weight loss than LMP+/CTSGRS- and LMP-/CTSGRS- groups.

Based on the findings, the authors wrote, “Genotyping patients could improve the implementation of individualized weight-loss interventions, enhance weight-loss outcomes, and/or may explain one of the etiological factors associated with weight recurrence after RYGB.”

Acosta noted, “We’re actively expanding our research to include more diverse populations by age, sex, and race. This includes ongoing analysis to understand whether certain demographic or physiological characteristics affect how the test performs, particularly in the context of bariatric surgery.”

The team also is investigating the benefits of phenotyping for obesity comorbidities such as heart disease and diabetes, he said, and exploring whether early interventions in high-risk patients can prevent long-term weight regain and improve outcomes.

In addition, Acosta said, the team recently launched “the first prospective, placebo-controlled clinical trial using the MyPhenome test to predict response to semaglutide.” That study is based on earlier findings showing that patients identified with a Hungry Gut phenotype lost nearly twice as much weight on semaglutide compared with those who tested negative.

Overall, he concluded, “These findings open the door to more precise bariatric care. When we understand a patient’s biological drivers of obesity, we can make better decisions about the right procedure, follow-up, and long-term support. This moves us away from a one-size-fits-all model to care rooted in each patient’s unique biology.”

 

Potentially Paradigm-Shifting

Onur Kutlu, MD, associate professor of surgery and director of the Metabolic Surgery and Metabolic Health Program at the Miller School of Medicine, University of Miami, in Miami, Florida, commented on the study for GI & Hepatology News. “By integrating polygenic risk scores into predictive models, the authors offer an innovative method for identifying patients at elevated risk for weight regain following RYGB.”

“Their findings support the hypothesis that genetic predisposition — particularly involving energy homeostasis pathways — may underlie differential postoperative trajectories,” he said. “This approach has the potential to shift the paradigm from reactive to proactive management of weight recurrence.”

Because current options for treating weight regain are “suboptimal,” he said, “prevention becomes paramount. Preoperative identification of high-risk individuals could inform surgical decision-making, enable earlier interventions, and facilitate personalized postoperative monitoring and support.”

“If validated in larger, prospective cohorts, genetic risk stratification could enhance the precision of bariatric care and improve long-term outcomes,” he added. “Future studies should aim to validate these genetic models across diverse populations and explore how integration of behavioral, psychological, and genetic data may further refine patient selection and care pathways.”

The study was funded by Mayo Clinic and Phenomix Sciences. Gila Therapeutics and Phenomix Sciences licensed Acosta’s research technologies from the University of Florida and Mayo Clinic. Acosta declared receiving consultant fees in the past 5 years from Rhythm Pharmaceuticals, Gila Therapeutics, Amgen, General Mills, BI, Currax, Nestle, Phenomix Sciences, Bausch Health, and RareDiseases, as well as funding support from the National Institutes of Health, Vivus Pharmaceuticals, Novo Nordisk, Apollo Endosurgery, Satiogen Pharmaceuticals, Spatz Medical, and Rhythm Pharmaceuticals. Kutlu declared having no conflicts of interest.

A version of this article appeared on Medscape.com.


Walnuts Cut Gut Permeability in Obesity

Article Type
Changed
Thu, 06/05/2025 - 09:53

Walnut consumption modified the fecal microbiota and metabolome, improved insulin response, and reduced gut permeability in adults with obesity, a small study showed.

“Less than 10% of adults are meeting their fiber needs each day, and walnuts are a source of dietary fiber, which helps nourish the gut microbiota,” study coauthor Hannah Holscher, PhD, RD, associate professor of nutrition at the University of Illinois at Urbana-Champaign, told GI & Hepatology News.

Hannah Holscher



Holscher and her colleagues previously conducted a study on the effects of walnut consumption on the human intestinal microbiota “and found interesting results,” she said. Among 18 healthy men and women with a mean age of 53 years, “walnuts enriched intestinal microorganisms, including Roseburia that provide important gut-health promoting attributes, like short-chain fatty acid production. We also saw lower proinflammatory secondary bile acid concentrations in individuals that ate walnuts.”

The current study, presented at NUTRITION 2025 in Orlando, Florida, found similar benefits among 30 adults with obesity but without diabetes or gastrointestinal disease.

 

Walnut Halves, Walnut Oil, Corn Oil — Compared

The researchers aimed to determine the impact of walnut consumption on the gut microbiome, serum and fecal bile acid profiles, systemic inflammation, and oral glucose tolerance to a mixed-meal challenge.

Participants were enrolled in a randomized, controlled, crossover, complete feeding trial with three 3-week conditions, each identical except for walnut halves (WH), walnut oil (WO), or corn oil (CO) in the diet. A 3-week washout separated each condition.

“This was a fully controlled dietary feeding intervention,” Holscher said. “We provided their breakfast, lunch, snacks and dinners — all of their foods and beverages during the three dietary intervention periods that lasted for 3 weeks each. Their base diet consisted of typical American foods that you would find in a grocery store in central Illinois.”

Fecal samples were collected on days 18-20. On day 20, participants underwent a 6-hour mixed-meal tolerance test (75 g glucose + treatment) with a fasting blood draw followed by blood sampling every 30 minutes.

The fecal microbiome and microbiota were assessed using metagenomic and amplicon sequencing, respectively. Fecal microbial metabolites were quantified using gas chromatography-mass spectrometry.

Blood glucose, insulin, and inflammatory biomarkers (interleukin-6, tumor necrosis factor-alpha, C-reactive protein, and lipopolysaccharide-binding protein) were quantified. Fecal and circulating bile acids were measured via liquid chromatography tandem mass spectrometry.

Gut permeability was assessed by quantifying 24-hour urinary excretion of orally ingested sucralose and erythritol on day 21.

Linear mixed-effects models and repeated measures ANOVA were used for the statistical analysis.
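
The authors’ analysis code is not shown; as a rough illustration, a linear mixed-effects model for a three-condition crossover design can be fit with a fixed effect for diet and a random intercept for each participant. Everything below (outcome values, effect sizes) is synthetic.

```python
# Hedged sketch of a mixed-effects model for a crossover feeding trial; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
participants = np.repeat(np.arange(30), 3)              # 30 adults, each completing all 3 diet conditions
condition = np.tile(["WH", "WO", "CO"], 30)
subject_effect = np.repeat(rng.normal(0, 1.0, 30), 3)   # between-person variability (random intercept)
outcome = 2.0 + 1.8 * (condition == "WH") + subject_effect + rng.normal(0, 0.5, 90)

df = pd.DataFrame({"participant": participants, "condition": condition, "outcome": outcome})
mixed = smf.mixedlm("outcome ~ C(condition, Treatment('CO'))", data=df, groups=df["participant"]).fit()
print(mixed.summary())  # fixed-effect terms estimate WH and WO relative to the CO reference diet
```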

The team found that Roseburia spp. were greatest following WH (3.9%) vs WO (1.6%) and CO (1.9%); Lachnospiraceae UCG-001 and UCG-004 were also greatest with WH vs WO and CO.

WH fecal isobutyrate concentrations (5.41 µmol/g) were lower than WO (7.17 µmol/g) and CO (7.77 µmol/g). Similarly, fecal isovalerate concentrations were lowest with WH (7.84 µmol/g) vs WO (10.3 µmol/g) and CO (11.6 µmol/g).

In contrast, indoles were highest in WH (36.8 µmol/g) vs WO (6.78 µmol/g) and CO (8.67 µmol/g).

No differences in glucose concentrations were seen among groups. The 2-hour area under the curve (AUC) for insulin was lower with WH (469 µIU/mL/min) and WO (494 µIU/mL/min) vs CO (604 µIU/mL/min).

The 4-hour AUC for glycolithocholic acid was lower with WH vs WO and CO. Furthermore, sucralose recovery was lowest following WH (10.5) vs WO (14.3) and CO (14.6).
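
The AUC values above summarize concentration-time curves and are typically obtained by trapezoidal integration of the sampled concentrations. A minimal sketch with made-up numbers (not study data):

```python
# Trapezoidal-rule AUC for a sampled concentration-time curve; values are made up.
import numpy as np

time_min = np.array([0, 30, 60, 90, 120])          # 2-hour sampling window
insulin = np.array([8.0, 95.0, 70.0, 45.0, 30.0])  # synthetic insulin concentrations (µIU/mL)

# AUC = sum of trapezoid areas between consecutive samples (concentration x time units)
auc_total = np.sum(0.5 * (insulin[1:] + insulin[:-1]) * np.diff(time_min))
auc_incremental = np.sum(0.5 * ((insulin[1:] - insulin[0]) + (insulin[:-1] - insulin[0])) * np.diff(time_min))
print(f"total AUC = {auc_total:.0f}; incremental (above-baseline) AUC = {auc_incremental:.0f}")
```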

“Our current efforts are focused on understanding connections between plasma bile acids and glycemic control (ie, blood glucose and insulin concentrations),” Holscher said. “We are also interested in studying individualized or personalized responses, since people had different magnitudes of responses.”

In addition, she said, “as the gut microbiome is one of the factors that can underpin the physiological response to the diet, we are interested in determining if there are microbial signatures that are predictive of glycemic control.”

Because the research is still in the early stages, at this point, Holscher simply encourages people to eat a variety of fruits, vegetables, whole grains, legumes and nuts to meet their daily fiber recommendations and support their gut microbiome.

This study was funded by a USDA NIFA grant. No competing interests were reported.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Walnut consumption modified the fecal microbiota and metabolome, improved insulin response, and reduced gut permeability in adults with obesity, a small study showed.

“Less than 10% of adults are meeting their fiber needs each day, and walnuts are a source of dietary fiber, which helps nourish the gut microbiota,” study coauthor Hannah Holscher, PhD, RD, associate professor of nutrition at the University of Illinois at Urbana-Champaign, told GI & Hepatology News.

Hannah Holscher



Holscher and her colleagues previously conducted a study on the effects of walnut consumption on the human intestinal microbiota “and found interesting results,” she said. Among 18 healthy men and women with a mean age of 53 years, “walnuts enriched intestinal microorganisms, including Roseburia that provide important gut-health promoting attributes, like short-chain fatty acid production. We also saw lower proinflammatory secondary bile acid concentrations in individuals that ate walnuts.”

The current study, presented at NUTRITION 2025 in Orlando, Florida, found similar benefits among 30 adults with obesity but without diabetes or gastrointestinal disease.

 

Walnut Halves, Walnut Oil, Corn Oil — Compared

The researchers aimed to determine the impact of walnut consumption on the gut microbiome, serum and fecal bile acid profiles, systemic inflammation, and oral glucose tolerance to a mixed-meal challenge.

Participants were enrolled in a randomized, controlled, crossover, complete feeding trial with three 3-week conditions, each identical except for walnut halves (WH), walnut oil (WO), or corn oil (CO) in the diet. A 3-week washout separated each condition.

“This was a fully controlled dietary feeding intervention,” Holscher said. “We provided their breakfast, lunch, snacks and dinners — all of their foods and beverages during the three dietary intervention periods that lasted for 3 weeks each. Their base diet consisted of typical American foods that you would find in a grocery store in central Illinois.”

Fecal samples were collected on days 18-20. On day 20, participants underwent a 6-hour mixed-meal tolerance test (75 g glucose + treatment) with a fasting blood draw followed by blood sampling every 30 minutes.

The fecal microbiome and microbiota were assessed using metagenomic and amplicon sequencing, respectively. Fecal microbial metabolites were quantified using gas chromatography-mass spectrometry.

Blood glucose, insulin, and inflammatory biomarkers (interleukin-6, tumor necrosis factor-alpha, C-reactive protein, and lipopolysaccharide-binding protein) were quantified. Fecal and circulating bile acids were measured via liquid chromatography tandem mass spectrometry.

Gut permeability was assessed by quantifying 24-hour urinary excretion of orally ingested sucralose and erythritol on day 21.

Linear mixed-effects models and repeated measures ANOVA were used for the statistical analysis.

The team found that Roseburia spp were greatest following WH (3.9%) vs WO (1.6) and CO (1.9); Lachnospiraceae UCG-001 and UCG-004 were also greatest with WH vs WO and CO.

WH fecal isobutyrate concentrations (5.41 µmol/g) were lower than WO (7.17 µmol/g) and CO (7.77). Similarly, fecal isovalerate concentrations were lowest with WH (7.84 µmol/g) vs WO (10.3µmol/g) and CO (11.6 µmol/g).

In contrast, indoles were highest in WH (36.8 µmol/g) vs WO (6.78 µmol/g) and CO (8.67µmol/g).

No differences in glucose concentrations were seen among groups. The 2-hour area under the curve (AUC) for insulin was lower with WH (469 µIU/mL/min) and WO (494) vs CO (604 µIU/mL/min).

The 4-hour AUC for glycolithocholic acid was lower with WH vs WO and CO. Furthermore, sucralose recovery was lowest following WH (10.5) vs WO (14.3) and CO (14.6).

“Our current efforts are focused on understanding connections between plasma bile acids and glycemic control (ie, blood glucose and insulin concentrations),” Holscher said. “We are also interested in studying individualized or personalized responses, since people had different magnitudes of responses.”

In addition, she said, “as the gut microbiome is one of the factors that can underpin the physiological response to the diet, we are interested in determining if there are microbial signatures that are predictive of glycemic control.”

Because the research is still in its early stages, Holscher simply encourages people to eat a variety of fruits, vegetables, whole grains, legumes, and nuts to meet their daily fiber recommendations and support their gut microbiome.

This study was funded by a USDA NIFA grant. No competing interests were reported.

A version of this article appeared on Medscape.com.

Patient Navigation Boosts Follow-Up Colonoscopy Completion

Article Type
Changed
Fri, 04/11/2025 - 12:14

Patient navigation was more effective than usual care in increasing follow-up colonoscopy rates after an abnormal stool test result, a new randomized controlled trial revealed.

The intervention led to a significant 13-percentage-point increase in follow-up colonoscopy completion at 1 year, compared with usual care (55.1% vs 42.1%), according to the study, which was published online in Annals of Internal Medicine.

 

Dr. Gloria Coronado

“Patients with an abnormal fecal test results have about a 1 in 20 chance of having colorectal cancer found, and many more will be found to have advanced adenomas that can be removed to prevent cancer,” Gloria Coronado, PhD, of Kaiser Permanente Center for Health Research, Portland, Oregon, and University of Arizona Cancer Center, Tucson, said in an interview.

“It is critical that these patients get a follow-up colonoscopy,” she said. “Patient navigation can accomplish this goal.”

 

‘Highly Effective’ Intervention

Researchers compared the effectiveness of a patient navigation program with that of usual care outreach in increasing follow-up colonoscopy completion after an abnormal stool test. They also developed a risk-prediction model that calculated a patient’s probability of obtaining a follow-up colonoscopy without navigation to determine if the addition of this intervention had a greater impact on those determined to be less likely to follow through.
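For illustration, a model of this kind could be as simple as a logistic regression fitted to historical follow-up data and used to score each new patient's probability of completing colonoscopy without help. The sketch below is a hedged example only; the covariates, data source, and modeling choices the investigators actually used are not detailed in this article.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed historical cohort: one row per patient with baseline characteristics and
# a 0/1 indicator for completing follow-up colonoscopy without navigation.
hist = pd.read_csv("historical_followup_cohort.csv")  # hypothetical file and columns

# Simple logistic model of the probability of completing follow-up colonoscopy.
fit = smf.logit("completed ~ age + C(sex) + C(language) + C(insurance)", data=hist).fit()

# Score new patients with an abnormal stool test; lower predicted probabilities
# could flag patients expected to benefit most from navigation.
new_patients = pd.read_csv("new_abnormal_fit_patients.csv")  # hypothetical file
new_patients["p_complete_no_nav"] = fit.predict(new_patients)
print(new_patients[["p_complete_no_nav"]].describe())
```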

The study included 967 patients from a community health center in Washington State who received an abnormal fecal test result within the prior month. The mean age of participants was 61 years, approximately 45% were women and 77% were White, and 18% preferred a Spanish-language intervention. In total, 479 patients received the intervention and 488 received usual care.

The intervention was delivered by a patient navigator who mailed introductory letters, sent text messages, and made live phone calls. In the calls, the navigators addressed the topics of barrier assessment and resolution, bowel preparation instruction and reminders, colonoscopy check-in, and understanding colonoscopy results and retesting intervals.

Patients in the usual-care group were contacted by a referral coordinator to schedule a follow-up colonoscopy appointment. If they couldn’t be reached initially, up to two follow-up attempts were made at 30 and 45 days after the referral date.

Patient navigation resulted in a significant 13-percentage-point increase in follow-up colonoscopy completion, and those in this group completed a colonoscopy 27 days sooner than those in the usual care group (mean, 229 days vs 256 days).

Contrary to the authors’ expectation, the effectiveness of the intervention did not vary by patients’ predicted likelihood of obtaining a colonoscopy without navigation.

Notably, 20.3% of patients were unreachable or lost to follow-up, and 29.7% did not receive navigation. Among the 479 patients assigned to navigation, 79 (16.5%) declined participation and 56 (11.7%) were never reached.

The study was primarily conducted during the height of the COVID-19 pandemic, which created additional systemic and individual barriers to completing colonoscopies.

Nevertheless, the authors wrote, “our findings suggest that patient navigation is highly effective for patients eligible for colonoscopy.”

“Most patients who were reached were contacted with six or fewer phone attempts,” Coronado noted. “Further efforts are needed to determine how to reach and motivate patients [who did not participate] to get a follow-up colonoscopy.”

Coronado and colleagues are exploring ways to leverage artificial intelligence and virtual approaches to augment patient navigation programs — for example, by using a virtual navigator or low-cost automated tools to provide education to build patient confidence in getting a colonoscopy.

 

‘A Promising Tool’

“Colonoscopy completion after positive stool-based testing is critical to mitigating the impact of colon cancer,” commented Rajiv Bhuta, MD, assistant professor of clinical gastroenterology & hepatology, Lewis Katz School of Medicine, Temple University, Philadelphia, who was not involved in the study. “While prior studies assessing navigation have demonstrated improvements, none were as large enrollment-wise or as generalizable as the current study.”

Dr. Rajiv Bhuta

Still, Bhuta said in an interview that the study could have provided more detail about coordination and communication with local gastrointestinal practices.

“Local ordering and prescribing practices vary and can significantly impact compliance rates. Were colonoscopies completed via an open access pathway or were the patients required to see a gastroenterologist first? How long was the average wait time for colonoscopy once scheduled? What were the local policies on requiring an escort after the procedure?”

He also noted that some aspects of the study, such as access to reduced-cost specialty care and free ride-share services, may limit generalizability to settings without such resources.

He added: “Although patient navigators for cancer treatment have mandated reimbursement, there is no current reimbursement for navigators for abnormal screening tests, another barrier to widespread implementation.”

Bhuta said that the dropout rate in the study mirrors that of his own real-world practice, which serves a high-risk, low-resource community. “I would specifically like to see research that provides behavioral insights on why patients respond positively to navigation — whether it is due to reminders, emotional support, or logistical assistance. Is it systemic barriers or patient disinterest or both that drives noncompliance?”

Despite these uncertainties and the need to refine implementation logistics, Bhuta concluded, “this strategy is a promising tool to reduce disparities and improve colorectal cancer outcomes. Clinicians should advocate for or implement structured follow-up systems, particularly in high-risk populations.”

The study was funded by the US National Cancer Institute. Coronado received a grant/contract from Guardant Health. Bhuta declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com.
