Hypothyroidism Linked to Gut Microbiome Disturbances

Article Type
Changed
Mon, 07/28/2025 - 16:18

People with hypothyroidism show a significantly higher prevalence of small intestinal bacterial overgrowth (SIBO) and distinct gut bacterial profiles compared with those without the thyroid condition, according to the results of a study.

“[The research] supports the idea that improving gut health could have far-reaching effects beyond digestion, possibly even helping to prevent autoimmune diseases, such as Hashimoto thyroiditis,” said senior author Ruchi Mathur, MD, director of the Diabetes Outpatient Treatment and Education Center and director of Clinical Operations of Medically Associated Science and Technology at Cedars-Sinai in Los Angeles, in a press statement for the study, which was presented at ENDO 2025: The Endocrine Society Annual Meeting.

“These findings open the door to new screening and prevention strategies,” Mathur added. “For example, doctors may begin to monitor thyroid health more closely in patients with SIBO, and vice versa.” 

With some small studies previously suggesting an association between the gut microbiome and hypothyroidism, Mathur and colleagues further explored the relationship in two analyses.

 

Assessing the Role of the Small Bowel

For the first, they evaluated data on 49 patients with Hashimoto thyroiditis (HT) and 323 controls without the condition from their REIMAGINE trial, which included small bowel fluid samples from upper endoscopies and DNA sequencing.

In the study, all patients with HT were treated with thyroid hormone replacement (levothyroxine); notably, there were therefore no significant differences between the two groups in thyroid-stimulating hormone (TSH) levels.

Despite the lack of those differences, patients with HT had a prevalence of SIBO more than twice that of the control group, independent of gender (33% vs 15%; odds ratio, 2.71; P = .005).

When each group was further subdivided by SIBO status, additional significant differences in microbial diversity were observed between those with and without HT, Mathur told GI & Hepatology News.

“Interestingly, we saw the small bowel microbiome was not only different in SIBO-positive patients, including higher gram negatives, which is to be expected, but that the presence or absence of hypothyroidism itself was associated with specific patterns of these gram-negative bacteria,” she explained.

“In addition, when we looked at hypothyroidism without SIBO present, there were also changes between groups, such as higher Neisseria in the hypothyroid group.” 

“All these findings are novel as this is the first paper to look specifically at the small bowel,” she added, noting that previous smaller studies have focused more on evaluation of stool samples.

“We believe the small bowel is the most metabolically active area of the intestine and plays an important role in metabolism,” Mathur noted. “Thus, the microbial changes here are likely more physiologically significant than the patterns seen in stool.”

 

Further Findings from a Large Population

In a separate analysis, the team evaluated data from the TriNetX database on the 10-year incidence of developing SIBO among 1.1 million subjects with hypothyroidism in the US compared with 1 million controls.

They found that people with hypothyroidism were approximately twice as likely to develop SIBO compared with those without hypothyroidism (relative risk [RR], 2.20).

Furthermore, those with HT, in particular, had an even higher risk, at 2.4 times the controls (RR, 2.40).

Treatment with levothyroxine decreased the risk of developing SIBO in hypothyroidism (RR, 0.33) and HT (RR, 0.78) vs those who did not receive treatment.

 

Mechanisms?

Notably, in the first analysis, differences in SIBO were observed even though patients treated for HT had TSH levels similar to those of the controls, Mathur said.

“This suggests that perhaps there are other factors aside from TSH levels and free T4 that are at play here,” she said. “Some people have theorized that perhaps delayed gut motility in hypothyroidism promotes the development of SIBO; however, there are many other factors within this complex interplay between the microbiome and the thyroid that could also be playing a role.” 

“For example, SIBO leads to inflammation and weakening of the gut barrier,” Mathur explained.

Furthermore, “levothyroxine absorption and cycling of the thyroid hormone occurs predominantly in the small bowel, [while the] microbiome plays a key role in the absorption of iron, selenium, iodine, and zinc, which are critical for thyroid function.” 

Overall, “further research is needed to understand how the mechanisms are affected during the development of SIBO and hypothyroidism,” Mathur said.

 

Assessment of Changes Over Time Anticipated

Commenting on the research, Gregory A. Brent, MD, senior executive academic vice-chair of the Department of Medicine and professor of medicine and physiology at the David Geffen School of Medicine at University of California, Los Angeles, said the study is indeed novel.

“This, to my knowledge, is the first investigation to link characteristics of the small bowel microbiome with hypothyroidism,” Brent told GI & Hepatology News.

While any clinical significance has yet to be determined, “the association of these small bowel microbiome changes with hypothyroidism may have implications for contributing to the onset of autoimmune hypothyroidism in susceptible populations as well as influences on levothyroxine absorption in hypothyroid patients on levothyroxine therapy,” Brent said.

With the SIBO differences observed even among treated patients with vs without HT, “it seems less likely that the microbiome changes are the result of reduced thyroid hormone signaling,” Brent noted.

Furthermore, a key piece of the puzzle will be to observe the microbiome changes over time, he added.

“These studies were at a single time point [and] longitudinal studies will be especially important to see how the association changes over time and are influenced by the treatment of hypothyroidism and of SIBO,” Brent said.

The authors and Brent had no disclosures to report.

A version of this article appeared on Medscape.com.

Does Tofacitinib Worsen Postoperative Complications in Acute Severe Ulcerative Colitis?

Article Type
Changed
Mon, 07/14/2025 - 11:41

A head-to-head comparison of the JAK inhibitor tofacitinib and the chimeric monoclonal antibody infliximab in the treatment of acute severe ulcerative colitis (ASUC) shows that, contrary to concerns, tofacitinib is not associated with worse postoperative complications and may in fact reduce the need for colectomy.

“Tofacitinib has shown efficacy in managing ASUC, but concerns about postoperative complications have limited its adoption,” reported the authors in research published in Clinical Gastroenterology and Hepatology. “This study shows that tofacitinib is safe and doesn’t impair wound healing or lead to more infections if the patient needs an urgent colectomy, which is unfortunately common in this population,” senior author Jeffrey A. Berinstein, MD, of the Division of Gastroenterology and Hepatology, Michigan Medicine, Ann Arbor, Michigan, told GI & Hepatology News.


Recent treatment advances for UC have provided significant benefits in reducing the severity of symptoms; however, about a quarter of patients go on to experience flares, and the fecal urgency, rectal bleeding, and severe abdominal pain of ASUC can require hospitalization.

The standard of care for these patients is rapid induction with intravenous (IV) corticosteroids. However, up to 30% of patients do not respond to this intervention, and although subsequent rescue therapy with cyclosporine or infliximab helps reduce the risk for urgent colectomy, many still fail treatment; ultimately, up to a third of patients with ASUC end up requiring a colectomy.

While JAK inhibitor therapies, including tofacitinib and upadacitinib, have recently emerged as potentially important treatment options in such cases, showing reductions in the risk for colectomy, concerns about the drugs’ downstream biologic effects have given many clinicians reservations about their use.

“Anecdotally, gastroenterologists and surgeons have expressed concern about JAK inhibitors leading to poor wound healing, as well as increasing both intraoperative and postoperative complications, despite limited data to support these claims,” the authors wrote.

To better understand those possible risks, first author Charlotte Larson, MD, of the Department of Internal Medicine, Michigan Medicine, and colleagues conducted a multicenter, retrospective, case-control study of 109 patients hospitalized with ASUC at two centers in the US and 14 in France.

Of the patients, 41 were treated with tofacitinib and 68 with infliximab prior to colectomy. 

Among patients treated with tofacitinib, five (12.2%) received infliximab and four (9.8%) received cyclosporine rescue immediately prior to receiving tofacitinib during the index admission. In the infliximab group, one (1.5%) received rescue cyclosporine.

In a univariate analysis, the tofacitinib-treated patients showed significantly lower overall rates of postoperative complications than infliximab-treated patients (31.7% vs 64.7%; odds ratio [OR], 0.33; P = .006).

The tofacitinib-treated group also had lower rates of serious postoperative complications (12% vs 28.9%; OR, 0.20; P = .016).

After adjusting for multivariate factors including age, inflammatory burden, nutrition status, 90-day cumulative corticosteroid exposure, and open surgery, there was a trend favoring tofacitinib but no statistically significant difference between the two treatments in terms of serious postoperative complications (P = .061).

However, a significantly lower rate of overall postoperative complications with tofacitinib was observed after the adjustment (OR, 0.38; P = .023).

Importantly, a subanalysis showed that the 63.4% of tofacitinib-treated patients receiving the standard FDA-approved induction dose of 10 mg twice daily did indeed have significantly lower rates than infliximab-treated patients in terms of serious postoperative complications (OR, 0.10; P = .031), as well as overall postoperative complications (OR, 0.23; P = .003), whereas neither outcome was significantly improved among the 36.6% of patients who received the higher-intensity thrice-daily tofacitinib dose (P = .3 and P = .4, respectively).

Further complicating the matter, in a previous case-control study that the research team conducted, it was the off-label, 10 mg thrice-daily dose of tofacitinib that performed favorably and was associated with a significantly lower risk for colectomy than the twice-daily dose (hazard ratio, 0.28; P = .018); the twice-daily dose was not protective.

Berinstein added that a hypothesis for the benefits overall, with either dose, is that tofacitinib’s anti-inflammatory properties are key.

“We believe that lowering inflammation as much as possible, with the colon less inflamed, could be providing the benefit in lowering complications rate in surgery,” he explained.

Regarding the dosing, “it’s a careful trade-off,” Berinstein added. “Obviously, we want to avoid the need for a colectomy in the first place, as it is a life-changing surgery, but we don’t want to increase the risk of infections.” 

In other findings, the tofacitinib group had no increased risk for postoperative venous thromboembolism (VTE), which is important because tofacitinib exposure has previously been associated with an increased risk for VTE independent of other prothrombotic factors common in patients with ASUC, including decreased ambulation, active inflammation, corticosteroid use, and major colorectal surgery.

“This observed absence of an increased VTE risk may alleviate some of the hypothetical postoperative safety concern attributed to JAK inhibitor therapy in this high-risk population,” the authors wrote.

Overall, the results underscore that “providers should feel comfortable using this medication if they need it and if they think it’s most likely to help their patients avoid colectomy,” Berinstein said.

“They should not give pause over concerns of postoperative complications because we didn’t show that,” he said.


Commenting on the study, Joseph D. Feuerstein, MD, AGAF, of the Department of Medicine and Division of Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, noted that, in general, in patients with ASUC who fail on IV steroids, “the main treatments are infliximab, cyclosporine, or a JAK inhibitor like tofacitinib or upadacitinib, [and] knowing that if someone needs surgery, the complication rates are similar and that pre-operative use is okay is reassuring.”

Regarding the protective effect observed in some circumstances, “I don’t put too much weight into that,” he noted. “[One] could speculate that it is somehow related to the faster half-life of the drug, and it might not sit around as long,” he said.

Feuerstein added that “the study design being retrospective is a limitation, but this is the best data we have to date.”

Berinstein and Feuerstein had no disclosures to report.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

A head-to-head comparison of the JAK inhibitor drug tofacitinib and chimeric monoclonal antibody infliximab in the treatment of acute severe ulcerative colitis (ASUC) shows that, contrary to concerns, tofacitinib is not associated with worse postoperative complications and in fact may reduce the risk of the need for colectomy.

“Tofacitinib has shown efficacy in managing ASUC, but concerns about postoperative complications have limited its adoption,” reported the authors in research published in Clinical Gastroenterology and Hepatology.“This study shows that tofacitinib is safe and doesn’t impair wound healing or lead to more infections if the patient needs an urgent colectomy, which is unfortunately common in this population,” senior author Jeffrey A. Berinstein, MD, of the Division of Gastroenterology and Hepatology, Michigan Medicine, Ann Arbor, Michigan, told GI & Hepatology News. 

Dr. Jeffrey A. Berinstein



Recent treatment advances for UC have provided significant benefits in reducing the severity of symptoms; however, about a quarter of patients go on to experience flares, with fecal urgency, rectal bleeding, and severe abdominal pain of ASUC potentially requiring hospitalization.

The standard of care for those patients is rapid induction with intravenous (IV) corticosteroids; however, up to 30% of patients don’t respond to those interventions, and even with subsequent treatment of cyclosporine and infliximab helping to reduce the risk for an urgent colectomy, patients often don’t respond, and ultimately, up to a third of patients with ASUC end up having to receive a colectomy.

While JAK inhibitor therapies, including tofacitinib and upadacitinib, have recently emerged as potentially important treatment options in such cases, showing reductions in the risk for colectomy, concerns about the drugs’ downstream biologic effects have given many clinicians reservations about their use.

“Anecdotally, gastroenterologists and surgeons have expressed concern about JAK inhibitors leading to poor wound healing, as well as increasing both intraoperative and postoperative complications, despite limited data to support these claims,” the authors wrote.

To better understand those possible risks, first author Charlotte Larson, MD, of the Department of Internal Medicine, Michigan Medicine, and colleagues conducted a multicenter, retrospective, case-control study of 109 patients hospitalized with ASUC at two centers in the US and 14 in France.

Of the patients, 41 were treated with tofacitinib and 68 with infliximab prior to colectomy. 

Among patients treated with tofacitinib, five (12.2%) received infliximab and four (9.8%) received cyclosporine rescue immediately prior to receiving tofacitinib during the index admission. In the infliximab group, one (1.5%) received rescue cyclosporine.

In a univariate analysis, the tofacitinib-treated patients showed significantly lower overall rates of postoperative complications than infliximab-treated patients (31.7% vs 64.7%; odds ratio [OR], 0.33; P = .006).

The tofacitinib-treated group also had lower rates of serious postoperative complications (12% vs 28.9; OR, 0.20; P = .016).

After adjusting for multivariate factors including age, inflammatory burden, nutrition status, 90-day cumulative corticosteroid exposure and open surgery, there was a trend favoring tofacitinib but no statistically significant difference between the two treatments in terms of serious postoperative complications (P = .061). 


A head-to-head comparison of the JAK inhibitor tofacitinib and the chimeric monoclonal antibody infliximab in the treatment of acute severe ulcerative colitis (ASUC) shows that, contrary to concerns, tofacitinib is not associated with worse postoperative complications and in fact may reduce the need for colectomy.

“Tofacitinib has shown efficacy in managing ASUC, but concerns about postoperative complications have limited its adoption,” the authors reported in research published in Clinical Gastroenterology and Hepatology. “This study shows that tofacitinib is safe and doesn’t impair wound healing or lead to more infections if the patient needs an urgent colectomy, which is unfortunately common in this population,” senior author Jeffrey A. Berinstein, MD, of the Division of Gastroenterology and Hepatology, Michigan Medicine, Ann Arbor, Michigan, told GI & Hepatology News. 

Dr. Jeffrey A. Berinstein



Recent treatment advances for UC have substantially reduced symptom severity; however, about a quarter of patients go on to experience flares, and the fecal urgency, rectal bleeding, and severe abdominal pain of ASUC can require hospitalization.

The standard of care for those patients is rapid induction with intravenous (IV) corticosteroids; however, up to 30% of patients don’t respond to that intervention. Even with subsequent cyclosporine or infliximab rescue therapy to reduce the risk for an urgent colectomy, many patients still don’t respond, and ultimately up to a third of patients with ASUC end up requiring colectomy.

While JAK inhibitor therapies, including tofacitinib and upadacitinib, have recently emerged as potentially important treatment options in such cases, showing reductions in the risk for colectomy, concerns about the drugs’ downstream biologic effects have given many clinicians reservations about their use.

“Anecdotally, gastroenterologists and surgeons have expressed concern about JAK inhibitors leading to poor wound healing, as well as increasing both intraoperative and postoperative complications, despite limited data to support these claims,” the authors wrote.

To better understand those possible risks, first author Charlotte Larson, MD, of the Department of Internal Medicine, Michigan Medicine, and colleagues conducted a multicenter, retrospective, case-control study of 109 patients hospitalized with ASUC at two centers in the US and 14 in France.

Of the patients, 41 were treated with tofacitinib and 68 with infliximab prior to colectomy. 

Among patients treated with tofacitinib, five (12.2%) received infliximab and four (9.8%) received cyclosporine rescue immediately prior to receiving tofacitinib during the index admission. In the infliximab group, one (1.5%) received rescue cyclosporine.

In a univariate analysis, the tofacitinib-treated patients showed significantly lower overall rates of postoperative complications than infliximab-treated patients (31.7% vs 64.7%; odds ratio [OR], 0.33; P = .006).
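For readers unfamiliar with the statistic, an odds ratio below 1 favors the first group. A minimal sketch of how an odds ratio is derived from a 2×2 table, using hypothetical counts rather than the study’s data (the raw counts were not reported here, and the published ORs reflect the authors’ own analyses):

```python
# Odds ratio from a 2x2 table. Illustrative only: the counts below are
# hypothetical and are NOT taken from the Larson/Berinstein study.

def odds_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """OR comparing group A with group B, from event counts and group sizes."""
    odds_a = events_a / (n_a - events_a)  # odds of the event in group A
    odds_b = events_b / (n_b - events_b)  # odds of the event in group B
    return odds_a / odds_b

# Hypothetical example: 10 of 40 patients with a complication in group A
# vs 40 of 70 in group B; OR < 1 favors group A.
print(round(odds_ratio(10, 40, 40, 70), 2))  # prints 0.25
```

Note that adjusted ORs, such as those reported after the multivariate analysis below, come from regression models and will generally differ from this simple unadjusted calculation.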

The tofacitinib-treated group also had lower rates of serious postoperative complications (12% vs 28.9%; OR, 0.20; P = .016).

After multivariate adjustment for age, inflammatory burden, nutrition status, 90-day cumulative corticosteroid exposure, and open surgery, there was a trend favoring tofacitinib but no statistically significant difference between the two treatments in serious postoperative complications (P = .061). 

However, a significantly lower rate of overall postoperative complications with tofacitinib was observed after the adjustment (odds ratio, 0.38; P = .023).

Importantly, a subanalysis showed that the 63.4% of tofacitinib-treated patients who received the standard FDA-approved induction dose of 10 mg twice daily did indeed have significantly lower rates than infliximab-treated patients of both serious postoperative complications (OR, 0.10; P = .031) and overall postoperative complications (OR, 0.23; P = .003), whereas neither outcome was significantly improved among the 36.6% of patients who received the higher-intensity thrice-daily tofacitinib dose (P = .3 and P = .4, respectively).

Further complicating the matter, in a previous case-control study by the same research team, it was the off-label 10 mg thrice-daily dose of tofacitinib that performed favorably, being associated with a significantly lower risk for colectomy (hazard ratio, 0.28; P = .018), whereas the twice-daily dose was not protective.

Berinstein added that one hypothesis for the overall benefit, at either dose, is that tofacitinib’s anti-inflammatory properties are key.

“We believe that lowering inflammation as much as possible, with the colon less inflamed, could be providing the benefit in lowering complications rate in surgery,” he explained.

Regarding the dosing, “it’s a careful trade-off,” Berinstein added. “Obviously, we want to avoid the need for a colectomy in the first place, as it is a life-changing surgery, but we don’t want to increase the risk of infections.” 

In other findings, the tofacitinib group had no increased risk for postoperative venous thromboembolism (VTE), which is important because tofacitinib exposure has previously been associated with an increased VTE risk independent of other prothrombotic factors common in patients with ASUC, including decreased ambulation, active inflammation, corticosteroid use, and major colorectal surgery.

“This observed absence of an increased VTE risk may alleviate some of the hypothetical postoperative safety concern attributed to JAK inhibitor therapy in this high-risk population,” the authors wrote.

Overall, the results underscore that “providers should feel comfortable using this medication if they need it and if they think it’s most likely to help their patients avoid colectomy,” Berinstein said.

“They should not give pause over concerns of postoperative complications because we didn’t show that,” he said.

Dr. Joseph D. Feuerstein



Commenting on the study, Joseph D. Feuerstein, MD, AGAF, of the Department of Medicine and Division of Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, noted that, in general, in patients with ASUC who fail on IV steroids, “the main treatments are infliximab, cyclosporine, or a JAK inhibitor like tofacitinib or upadacitinib, [and] knowing that if someone needs surgery, the complication rates are similar and that pre-operative use is okay is reassuring.”

Regarding the protective effect observed with some circumstances, “I don’t put too much weight into that,” he noted. “[One] could speculate that it is somehow related to faster half-life of the drug, and it might not sit around as long,” he said.

Feuerstein added that “the study design being retrospective is a limitation, but this is the best data we have to date.”

Berinstein and Feuerstein had no disclosures to report.

A version of this article appeared on Medscape.com.

Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Colorectal Cancer Screening Choices: Is Compliance Key?

Article Type
Changed
Tue, 05/20/2025 - 12:51

SAN DIEGO — Among the ever-expanding options for colorectal cancer (CRC) screening, precision-medicine blood tests are becoming more advanced and convenient than ever. Caveats abound, however, and when it comes to potentially life-saving screening, picking the optimal tool is critical.

Regarding tests, “perfect is not possible,” said William M. Grady, MD, AGAF, of the Fred Hutchinson Cancer Center, University of Washington School of Medicine, in Seattle, who took part in a debate on the pros and cons of key screening options at Digestive Disease Week® (DDW) 2025.

Dr. William M. Grady



“We have to remember that that’s the reality of colorectal cancer screening, and we need to meet our patients where they live,” said Grady, who argued on behalf of blood-based tests, including cell-free (cf) DNA (Shield, Guardant Health) and cfDNA plus protein biomarkers (Freenome).

A big point in their favor is their convenience and higher patient compliance: even a better test does not work if it doesn’t get done, he stressed.

He cited data showing suboptimal compliance rates with standard colonoscopy: Rates range from about 70% among non-Hispanic White individuals to 67% among Black individuals, 51% among Hispanic individuals, and just 26% among patients aged 45-50 years.

With troubling increases in CRC incidence among younger patients, “that’s a group we’re particularly concerned about,” Grady said.

Meanwhile, studies show compliance rates with blood-based tests are ≥ 80%, with similar rates across the racial and ethnic groups that have lower rates for conventional colonoscopy, he noted.

Importantly, in terms of performance in detecting CRC, blood-based tests stand up to other modalities, as demonstrated in a real-world study conducted by Grady and his colleagues showing a sensitivity of 83% for the cfDNA test, 74% for the fecal immunochemical test (FIT) stool test, and 92% for a multitarget stool DNA test compared with 95% for colonoscopy.

“What we can see is that the sensitivity of blood-based tests looks favorable and comparable to other tests,” he said.

Among the four options, cfDNA had the highest patient adherence rate (85%-86%) compared with colonoscopy (28%-42%), FIT (43%-65%), and multitarget stool DNA (48%-60%).

“The bottom line is that these tests decrease CRC mortality and incidence, and we know there’s a potential to improve compliance with colorectal cancer screening if we offer blood-based tests for average-risk people who refuse colonoscopy,” Grady said.

 

Blood-Based Tests: Caveats, Harms?

Arguing against blood-based tests in the debate, Robert E. Schoen, MD, MPH, professor of medicine and epidemiology, Division of Gastroenterology, Hepatology and Nutrition, at the University of Pittsburgh, in Pittsburgh, Pennsylvania, checked off some of the key caveats.

“While the overall sensitivity of blood-based tests may look favorable, these tests don’t detect early CRC well,” said Schoen. The sensitivity rates for stage 1 CRC are 64.7% with Guardant Health and 57.1% with Freenome.

Furthermore, their rates of detecting advanced adenomas are very low; the rate with Guardant Health is only about 13%, and with Freenome is even lower at 12.5%, he reported.

These rates are “similar to the false positive rate, with poor discrimination and accuracy for advanced adenomas,” Schoen said. “Without substantial detection of advanced adenomas, blood-based testing is inferior [to other options].”

Importantly, the low advanced adenoma rate translates to a lack of CRC prevention, which is key to reducing CRC mortality, he noted.

Essential to success with blood-based tests, as with stool tests, is follow-up colonoscopy when results are positive, but Schoen pointed out that this may or may not happen.

He cited FIT data showing that among 33,000 patients with abnormal stool tests, the rate of follow-up colonoscopy within a year, despite the concerning results, was a dismal 56%.
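The scale of that gap is easy to make concrete. A back-of-the-envelope calculation from the figures cited above (33,000 abnormal tests, 56% follow-up within a year); the arithmetic is ours, not the study’s:

```python
# Follow-up gap implied by the cited FIT data (illustrative arithmetic only).
patients_abnormal = 33_000   # abnormal stool tests, per the cited research
followup_rate = 0.56         # received colonoscopy within 1 year

followed = round(patients_abnormal * followup_rate)
missed = patients_abnormal - followed
print(followed, missed)  # 18480 followed up, 14520 not
```

In other words, roughly 14,500 patients with a concerning result never received the diagnostic colonoscopy that gives a positive noninvasive test its value.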

“We have a long way to go to make sure that people who get positive noninvasive tests get followed up,” he said.

In terms of the argument that blood-based screening is better than no screening at all, Schoen cited recent research that projected reductions in the risk for CRC incidence and mortality among 100,000 patients with each of the screening modalities.

Starting with standard colonoscopy performed every 10 years, the reductions in incidence and mortality would be 79% and 81%, respectively, followed by annual FIT, at 72% and 76%; multitarget DNA every 3 years, at 68% and 73%; and cfDNA (Shield), at 45% and 55%.
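Schoen’s substitution argument follows directly from those projections. A sketch of the arithmetic, using the cited mortality reductions (76% for annual FIT, 55% for cfDNA); the baseline figure below is hypothetical, chosen only to show the direction of the effect:

```python
# Direction of the "substitution" effect. Illustrative only: the baseline
# death count is hypothetical; the reduction percentages are from the
# projections cited in the text.
baseline_deaths = 1_000.0      # hypothetical CRC deaths per 100,000 unscreened
mort_reduction_fit = 0.76      # annual FIT, per the cited projection
mort_reduction_cfdna = 0.55    # cfDNA (Shield), per the cited projection

deaths_fit = baseline_deaths * (1 - mort_reduction_fit)      # ~240
deaths_cfdna = baseline_deaths * (1 - mort_reduction_cfdna)  # ~450
print(deaths_cfdna - deaths_fit)  # ~210 excess deaths if FIT users switch
```

Whatever the true baseline, the smaller mortality reduction with the blood test means that any patient who would otherwise complete FIT and switches to cfDNA adds expected CRC deaths, which is the substance of the claim that follows.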

Based on those rates, if patients originally opting for FIT were to shift to blood-based tests, “the rate of CRC deaths would increase,” Schoen noted.

The findings underscore that “blood testing is unfavorable as a ‘substitution test,’” he added. “In fact, widespread adoption of blood testing could increase CRC morbidity.”

“Is it better than nothing?” he asked. “Yes, but only if performance of a colonoscopy after a positive test is accomplished.”

 

What About FIT?

Arguing that stool-based testing with FIT is the ideal first-line CRC test, Jill Tinmouth, MD, PhD, a professor at the University of Toronto, Toronto, Ontario, Canada, pointed to its prominent role in organized screening programs, including in regions where limited resources preclude widespread first-line colonoscopy screening. In addition, it reserves colonoscopy for patients already prescreened as being at risk.

Dr. Jill Tinmouth

Data from one such program, reported by Kaiser Permanente of Northern California, showed that participation in CRC screening doubled from 40% to 80% over 10 years after initiating FIT screening. CRC mortality over the same period decreased by 50% from baseline, and incidence fell by as much as 75%.

Regarding follow-up colonoscopy, Tinmouth noted that studies of real-world FIT participation and adherence in the United Kingdom, the Netherlands, Taiwan, and California show follow-up colonoscopy rates of 88%, 85%, 70%, and 78%, respectively.

Meanwhile, a recent large comparison of biennial FIT (n = 26,719) vs one-time colonoscopy (n = 26,332) screening, the first study to directly compare the two, showed noninferiority, with nearly identical rates of CRC mortality at 10 years (0.22% colonoscopy vs 0.24% FIT) as well as CRC incidence (1.13% vs 1.22%, respectively).

“This study shows that in the context of organized screening, the benefits of FIT are the same as colonoscopy in the most important outcome of CRC — mortality,” Tinmouth said.

Furthermore, as with blood-based screening, FIT shows much more even participation across racial and ethnic groups than that observed with colonoscopy.

“FIT has clear and compelling advantages over colonoscopy,” she said. As well as better compliance among all groups, “it is less costly and also better for the environment [by using fewer resources],” she added.

 

Colonoscopy: ‘Best for First-Line Screening’

Making the case that standard colonoscopy should in fact be the first-line test, Swati G. Patel, MD, director of the Gastrointestinal Cancer Risk and Prevention Center at the University of Colorado Anschutz Medical Center, Aurora, Colorado, emphasized the robust, large population studies showing its benefits. Among them is a landmark national policy study showing a significant reduction in CRC incidence and mortality associated with first-line colonoscopy and adenoma removal.

Dr. Swati G. Patel

A multitude of other studies in different settings have also shown similar benefits across large populations, Patel added.

In terms of its key advantages over FIT, the once-a-decade screening requirement for average-risk patients is seen as highly favorable by many, as evidenced in clinical trial data showing that individuals highly value tests that are accurate and do not need to be completed frequently, she said. Research from various other trials of organized screening programs further showed patients crossing over from FIT to colonoscopy, including one study of more than 3500 patients comparing colonoscopy and FIT, which had approximately 40% adherence with FIT vs nearly 90% with colonoscopy.

Notably, as many as 25% of the patients in the FIT arm in that study crossed over to colonoscopy, presumably due to preference for the once-a-decade regimen, Patel said.

“Colonoscopy had a substantial and impressive long-term protective benefit both in terms of developing colon cancer and dying from colon cancer,” she said.

Regarding the head-to-head FIT and colonoscopy comparison that Tinmouth described, Patel noted that a supplemental table in the study’s appendix of patients who completed screening does reveal increasing separation between the two approaches, favoring colonoscopy, in terms of longer-term CRC incidence and mortality.

The collective findings underscore that “colonoscopy as a standalone test is uniquely cost-effective” in the face of the costs of colon cancer treatment, she said.

Instead of relying on biennial FIT testing, colonoscopy allows clinicians to immediately risk-stratify individuals who can benefit from closer surveillance and to relax surveillance for those determined to be low risk, she said.

Grady has served on the scientific advisory boards for Guardant Health and Freenome and has consulted for Karius. Schoen reported relationships with Guardant Health and grant/research support from Exact Sciences, Freenome, and Immunovia. Tinmouth had no disclosures to report. Patel disclosed relationships with Olympus America and Exact Sciences.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

SAN DIEGO — In the ever-expanding options for colorectal cancer (CRC) screening, blood tests using precision medicine are becoming more advanced and convenient than ever; however, caveats abound, and when it comes to potentially life-saving screening measures, picking the optimal screening tool is critical.

Regarding tests, “perfect is not possible,” said William M. Grady, MD, AGAF, of the Fred Hutchinson Cancer Center, University of Washington School of Medicine, in Seattle, who took part in a debate on the pros and cons of key screening options at Digestive Disease Week® (DDW) 2025.

Dr. William M. Grady



“We have to remember that that’s the reality of colorectal cancer screening, and we need to meet our patients where they live,” said Grady, who argued on behalf of blood-based tests, including cell-free (cf) DNA (Shield, Guardant Health) and cfDNA plus protein biomarkers (Freenome).

A big point in their favor is their convenience and higher patient compliance — better tests that don’t get done do not work, he stressed.

He cited data that showed suboptimal compliance rates with standard colonoscopy: Rates range from about 70% among non-Hispanic White individuals to 67% among Black individuals, 51% among Hispanic individuals, and the low rate of just 26% among patients aged between 45 and 50 years.

With troubling increases in CRC incidence among younger patients, “that’s a group we’re particularly concerned about,” Grady said.

Meanwhile, studies show compliance rates with blood-based tests are ≥ 80%, with similar rates seen among those racial and ethnic groups, with lower rates for conventional colonoscopy, he noted.

Importantly, in terms of performance in detecting CRC, blood-based tests stand up to other modalities, as demonstrated in a real-world study conducted by Grady and his colleagues showing a sensitivity of 83% for the cfDNA test, 74% for the fecal immunochemical test (FIT) stool test, and 92% for a multitarget stool DNA test compared with 95% for colonoscopy.

“What we can see is that the sensitivity of blood-based tests looks favorable and comparable to other tests,” he said.

Among the four options, cfDNA had a highest patient adherence rate (85%-86%) compared with colonoscopy (28%-42%), FIT (43%-65%), and multitarget stool DNA (48%-60%).

“The bottom line is that these tests decrease CRC mortality and incidence, and we know there’s a potential to improve compliance with colorectal cancer screening if we offer blood-based tests for average-risk people who refuse colonoscopy,” Grady said.

 

Blood-Based Tests: Caveats, Harms?

Arguing against blood-based tests in the debate, Robert E. Schoen, MD, MPH, professor of medicine and epidemiology, Division of Gastroenterology, Hepatology and Nutrition, at the University of Pittsburgh, in Pittsburgh, Pennsylvania, checked off some of the key caveats.

While the overall sensitivity of blood-based tests may look favorable, these tests don’t detect early CRC well,” said Schoen. The sensitivity rates for stage 1 CRC are 64.7% with Guardant Health and 57.1% with Freenome.

Furthermore, their rates of detecting advanced adenomas are very low; the rate with Guardant Health is only about 13%, and with Freenome is even lower at 12.5%, he reported.

These rates are “similar to the false positive rate, with poor discrimination and accuracy for advanced adenomas,” Schoen said. “Without substantial detection of advanced adenomas, blood-based testing is inferior [to other options].”

Importantly, the low advanced adenoma rate translates to a lack of CRC prevention, which is key to reducing CRC mortality, he noted.

Essential to success with blood-based biopsies, as well as with stool tests, is the need for a follow-up colonoscopy if results are positive, but Schoen pointed out that this may or may not happen.

He cited research from FIT data showing that among 33,000 patients with abnormal stool tests, the rate of follow-up colonoscopy within a year, despite the concerning results, was a dismal 56%.

“We have a long way to go to make sure that people who get positive noninvasive tests get followed up,” he said.

In terms of the argument that blood-based screening is better than no screening at all, Schoen cited recent research that projected reductions in the risk for CRC incidence and mortality among 100,000 patients with each of the screening modalities.

Starting with standard colonoscopy performed every 10 years, the reductions in incidence and mortality would be 79% and 81%, respectively, followed by annual FIT, at 72% and 76%; multitarget DNA every 3 years, at 68% and 73%; and cfDNA (Shield), at 45% and 55%.

Based on those rates, if patients originally opting for FIT were to shift to blood-based tests, “the rate of CRC deaths would increase,” Schoen noted.

The findings underscore that “blood testing is unfavorable as a ‘substitution test,’” he added. “In fact, widespread adoption of blood testing could increase CRC morbidity.”

“Is it better than nothing?” he asked. “Yes, but only if performance of a colonoscopy after a positive test is accomplished.”

 

What About FIT?

Arguing that stool-based testing, or FIT, is the ideal choice as a first-line CRC test Jill Tinmouth, MD, PhD, a professor at the University of Toronto, Ontario, Canada, pointed to its prominent role in organized screening programs, including regions where resources may limit the widespread utilization of routine first-line colonoscopy screening. In addition, it narrows colonoscopies to those that are already prescreened as being at risk.

Dr. Jill Tinmouth

Data from one such program, reported by Kaiser Permanente of Northern California, showed that participation in CRC screening doubled from 40% to 80% over 10 years after initiating FIT screening. CRC mortality over the same period decreased by 50% from baseline, and incidence fell by as much as 75%.

In follow-up colonoscopies, Tinmouth noted that collective research from studies reflecting real-world participation and adherence to FIT in populations in the United Kingdom, the Netherlands, Taiwan, and California show follow-up colonoscopy rates of 88%, 85%, 70%, and 78%, respectively.

Meanwhile, a recent large comparison of biennial FIT (n = 26,719) vs one-time colonoscopy (n = 26,332) screening, the first study to directly compare the two, showed noninferiority, with nearly identical rates of CRC mortality at 10 years (0.22% colonoscopy vs 0.24% FIT) as well as CRC incidence (1.13% vs 1.22%, respectively).

“This study shows that in the context of organized screening, the benefits of FIT are the same as colonoscopy in the most important outcome of CRC — mortality,” Tinmouth said.

Furthermore, as noted with blood-based screening, the higher participation with FIT shows a much more even racial/ethnic participation than that observed with colonoscopy.

“FIT has clear and compelling advantages over colonoscopy,” she said. As well as better compliance among all groups, “it is less costly and also better for the environment [by using fewer resources],” she added.

 

Colonoscopy: ‘Best for First-Line Screening’

Making the case that standard colonoscopy should in fact be the first-line test, Swati G. Patel, MD, director of the Gastrointestinal Cancer Risk and Prevention Center at the University of Colorado Anschutz Medical Center, Aurora, Colorado, emphasized the robust, large population studies showing its benefits. Among them is a landmark national policy study showing a significant reduction in CRC incidence and mortality associated with first-line colonoscopy and adenoma removal.

Dr. Swati G. Patel

A multitude of other studies in different settings have also shown similar benefits across large populations, Patel added.

In terms of its key advantages over FIT, the once-a-decade screening requirement for average-risk patients is seen as highly favorable by many, as evidenced in clinical trial data showing that individuals highly value tests that are accurate and do not need to be completed frequently, she said. Research from various other trials of organized screening programs further showed patients crossing over from FIT to colonoscopy, including one study of more than 3500 patients comparing colonoscopy and FIT, which had approximately 40% adherence with FIT vs nearly 90% with colonoscopy.

Notably, as many as 25% of the patients in the FIT arm in that study crossed over to colonoscopy, presumably due to preference for the once-a-decade regimen, Patel said.

“Colonoscopy had a substantial and impressive long-term protective benefit both in terms of developing colon cancer and dying from colon cancer,” she said.

Regarding the head-to-head FIT and colonoscopy comparison that Tinmouth described, Patel noted that a supplemental table in the study’s appendix of patients who completed screening does reveal increasing separation between the two approaches, favoring colonoscopy, in terms of longer-term CRC incidence and mortality.

The collective findings underscore that “colonoscopy as a standalone test is uniquely cost-effective,” in the face of costs related to colon cancer treatment.

Instead of relying on biennial tests with FIT, colonoscopy allows clinicians to immediately risk-stratify those individuals who can benefit from closer surveillance and really relax surveillance for those who are determined to be low risk, she said.

Grady had been on the scientific advisory boards for Guardant Health and Freenome and had consulted for Karius. Shoen reported relationships with Guardant Health and grant/research support from Exact Sciences, Freenome, and Immunovia. Tinmouth had no disclosures to report. Patel disclosed relationships with Olympus America and Exact Sciences.

A version of this article appeared on Medscape.com.

SAN DIEGO — In the ever-expanding options for colorectal cancer (CRC) screening, blood tests using precision medicine are becoming more advanced and convenient than ever; however, caveats abound, and when it comes to potentially life-saving screening measures, picking the optimal screening tool is critical.

Regarding tests, “perfect is not possible,” said William M. Grady, MD, AGAF, of the Fred Hutchinson Cancer Center, University of Washington School of Medicine, in Seattle, who took part in a debate on the pros and cons of key screening options at Digestive Disease Week® (DDW) 2025.

Dr. William M. Grady



“We have to remember that that’s the reality of colorectal cancer screening, and we need to meet our patients where they live,” said Grady, who argued on behalf of blood-based tests, including cell-free (cf) DNA (Shield, Guardant Health) and cfDNA plus protein biomarkers (Freenome).

A big point in their favor is their convenience and higher patient compliance — better tests that don’t get done do not work, he stressed.

He cited data that showed suboptimal compliance rates with standard colonoscopy: Rates range from about 70% among non-Hispanic White individuals to 67% among Black individuals, 51% among Hispanic individuals, and the low rate of just 26% among patients aged between 45 and 50 years.

With troubling increases in CRC incidence among younger patients, “that’s a group we’re particularly concerned about,” Grady said.

Meanwhile, studies show compliance rates with blood-based tests are ≥ 80%, with similar rates seen among those racial and ethnic groups, with lower rates for conventional colonoscopy, he noted.

Importantly, in terms of performance in detecting CRC, blood-based tests stand up to other modalities, as demonstrated in a real-world study conducted by Grady and his colleagues showing a sensitivity of 83% for the cfDNA test, 74% for the fecal immunochemical test (FIT) stool test, and 92% for a multitarget stool DNA test compared with 95% for colonoscopy.

“What we can see is that the sensitivity of blood-based tests looks favorable and comparable to other tests,” he said.

Among the four options, cfDNA had a highest patient adherence rate (85%-86%) compared with colonoscopy (28%-42%), FIT (43%-65%), and multitarget stool DNA (48%-60%).

“The bottom line is that these tests decrease CRC mortality and incidence, and we know there’s a potential to improve compliance with colorectal cancer screening if we offer blood-based tests for average-risk people who refuse colonoscopy,” Grady said.

 

Blood-Based Tests: Caveats, Harms?

Arguing against blood-based tests in the debate, Robert E. Schoen, MD, MPH, professor of medicine and epidemiology, Division of Gastroenterology, Hepatology and Nutrition, at the University of Pittsburgh, in Pittsburgh, Pennsylvania, checked off some of the key caveats.

While the overall sensitivity of blood-based tests may look favorable, these tests don’t detect early CRC well,” said Schoen. The sensitivity rates for stage 1 CRC are 64.7% with Guardant Health and 57.1% with Freenome.

Furthermore, their rates of detecting advanced adenomas are very low; the rate with Guardant Health is only about 13%, and with Freenome is even lower at 12.5%, he reported.

These rates are “similar to the false positive rate, with poor discrimination and accuracy for advanced adenomas,” Schoen said. “Without substantial detection of advanced adenomas, blood-based testing is inferior [to other options].”

Importantly, the low advanced adenoma rate translates to a lack of CRC prevention, which is key to reducing CRC mortality, he noted.

Essential to success with blood-based tests, as well as with stool tests, is a follow-up colonoscopy if results are positive, but Schoen pointed out that this may or may not happen.

He cited FIT data showing that among 33,000 patients with abnormal stool tests, the rate of follow-up colonoscopy within a year, despite the concerning results, was a dismal 56%.

“We have a long way to go to make sure that people who get positive noninvasive tests get followed up,” he said.

In terms of the argument that blood-based screening is better than no screening at all, Schoen cited recent research that projected reductions in the risk for CRC incidence and mortality among 100,000 patients with each of the screening modalities.

Starting with standard colonoscopy performed every 10 years, the reductions in incidence and mortality would be 79% and 81%, respectively, followed by annual FIT, at 72% and 76%; multitarget DNA every 3 years, at 68% and 73%; and cfDNA (Shield), at 45% and 55%.

Based on those rates, if patients originally opting for FIT were to shift to blood-based tests, “the rate of CRC deaths would increase,” Schoen noted.

The findings underscore that “blood testing is unfavorable as a ‘substitution test,’” he added. “In fact, widespread adoption of blood testing could increase CRC morbidity.”

“Is it better than nothing?” he asked. “Yes, but only if performance of a colonoscopy after a positive test is accomplished.”

 

What About FIT?

Arguing that stool-based testing, or FIT, is the ideal choice as a first-line CRC test, Jill Tinmouth, MD, PhD, a professor at the University of Toronto, Ontario, Canada, pointed to its prominent role in organized screening programs, including regions where limited resources preclude widespread first-line colonoscopy screening. In addition, FIT reserves colonoscopies for patients already identified as being at risk.


Data from one such program, reported by Kaiser Permanente of Northern California, showed that participation in CRC screening doubled from 40% to 80% over 10 years after initiating FIT screening. CRC mortality over the same period decreased by 50% from baseline, and incidence fell by as much as 75%.

Regarding follow-up colonoscopies, Tinmouth noted that studies of real-world FIT participation and adherence in the United Kingdom, the Netherlands, Taiwan, and California show follow-up colonoscopy rates of 88%, 85%, 70%, and 78%, respectively.

Meanwhile, a recent large comparison of biennial FIT (n = 26,719) vs one-time colonoscopy (n = 26,332) screening, the first study to directly compare the two, showed noninferiority, with nearly identical rates of CRC mortality at 10 years (0.22% colonoscopy vs 0.24% FIT) as well as CRC incidence (1.13% vs 1.22%, respectively).

“This study shows that in the context of organized screening, the benefits of FIT are the same as colonoscopy in the most important outcome of CRC — mortality,” Tinmouth said.

Furthermore, as with blood-based screening, FIT shows much more even participation across racial and ethnic groups than that observed with colonoscopy.

“FIT has clear and compelling advantages over colonoscopy,” she said. As well as better compliance among all groups, “it is less costly and also better for the environment [by using fewer resources],” she added.

 

Colonoscopy: ‘Best for First-Line Screening’

Making the case that standard colonoscopy should in fact be the first-line test, Swati G. Patel, MD, director of the Gastrointestinal Cancer Risk and Prevention Center at the University of Colorado Anschutz Medical Center, Aurora, Colorado, emphasized the robust, large population studies showing its benefits. Among them is a landmark national policy study showing a significant reduction in CRC incidence and mortality associated with first-line colonoscopy and adenoma removal.


A multitude of other studies in different settings have also shown similar benefits across large populations, Patel added.

In terms of its key advantages over FIT, the once-a-decade screening interval for average-risk patients is seen as highly favorable by many; clinical trial data show that individuals highly value tests that are accurate and do not need to be completed frequently, she said. Trials of organized screening programs have also shown patients crossing over from FIT to colonoscopy, including one study of more than 3500 patients comparing the two approaches, which found approximately 40% adherence with FIT vs nearly 90% with colonoscopy.

Notably, as many as 25% of the patients in the FIT arm in that study crossed over to colonoscopy, presumably due to preference for the once-a-decade regimen, Patel said.

“Colonoscopy had a substantial and impressive long-term protective benefit both in terms of developing colon cancer and dying from colon cancer,” she said.

Regarding the head-to-head FIT and colonoscopy comparison that Tinmouth described, Patel noted that a supplemental table in the study’s appendix, restricted to patients who completed screening, does reveal increasing separation between the two approaches, favoring colonoscopy, in terms of longer-term CRC incidence and mortality.

The collective findings underscore that “colonoscopy as a standalone test is uniquely cost-effective” in the face of the costs of treating colon cancer, she said.

Instead of relying on biennial FIT testing, colonoscopy allows clinicians to immediately risk-stratify patients, reserving closer surveillance for those who can benefit from it and relaxing surveillance for those determined to be low risk, she said.

Grady had been on the scientific advisory boards for Guardant Health and Freenome and had consulted for Karius. Schoen reported relationships with Guardant Health and grant/research support from Exact Sciences, Freenome, and Immunovia. Tinmouth had no disclosures to report. Patel disclosed relationships with Olympus America and Exact Sciences.

A version of this article appeared on Medscape.com.

FROM DDW 2025

Post-Polypectomy Colorectal Cancers Common Before Follow-Up


SAN DIEGO — The majority of colorectal cancers (CRCs) that emerge following a negative colonoscopy and polypectomy occur prior to recommended surveillance exams, and those cases are more likely to be at an advanced stage, according to new research.

Of key factors linked to a higher risk for such cases, one stands out — the quality of the baseline colonoscopy procedure.

“A lot of the neoplasia that we see after polypectomy was probably either missed or incompletely resected at baseline,” said Samir Gupta, MD, AGAF, a professor of medicine in the Division of Gastroenterology, UC San Diego Health, La Jolla, California, in discussing the topic at Digestive Diseases Week® (DDW) 2025.

“Therefore, what is key to emphasize is that [colonoscopy] quality is probably the most important factor in post-polypectomy risk,” he said. “But, advantageously, it’s also the most modifiable factor.”

Research shows that the risk for CRC incidence following a colonoscopy ranges from just about 3.4 to 5 cases per 10,000 person-years when baseline findings show no adenoma or a low risk; however, higher rates ranging from 13.8 to 20.9 cases per 10,000 person-years are observed for high-risk adenomas or serrated polyps, Gupta reported.

“Compared with those who have normal colonoscopy, the risk [for CRC] with high-risk adenomas is increased by nearly threefold,” Gupta said.

In a recent study of US veterans who underwent a colonoscopy with polypectomy between 1999 and 2016 that was labeled negative for cancer, Gupta and his colleagues found that over a median follow-up of 3.9 years, as many as 55% of 396 CRCs that occurred post-polypectomy were detected prior to the recommended surveillance colonoscopy.

The study also showed that 40% of post-polypectomy CRC deaths occurred prior to the recommended surveillance exam over a median follow-up of 4.2 years.

Cancers detected prior to the recommended surveillance exam were more likely to be diagnosed as stage IV compared with those diagnosed later (16% prior to recommended surveillance vs 2.1% and 8.3% during and after, respectively; P = .003).

Importantly, the most prominent reason for the cancers emerging in the interval before follow-up surveillance was missed lesions during the baseline colonoscopy (60%), Gupta said.

 

Colonoscopist Skill and Benchmarks

A larger study of 173,288 colonoscopies further underscores colonoscopist skill as a key factor in post-polypectomy CRC, showing that colonoscopists with low vs high performance quality — defined as an adenoma detection rate (ADR) of < 20% vs ≥ 20% — had higher 10-year cumulative rates of CRC incidence among patients following a negative colonoscopy (P < .001).

Likewise, in another analysis of low-risk vs high-risk polyps, a higher colonoscopist performance status was significantly associated with lower rates of CRCs (P < .001).

“Higher colonoscopist performance was associated with a lower cumulative colorectal cancer risk within each [polyp risk] group, such that the cumulative risk after high-risk adenoma removal by a higher performing colonoscopist is similar to that in patients who had a low-risk adenoma removed by a lower performer,” Gupta explained.

“So, this has nothing to do with the type of polyp that was removed — it really has to do with the quality of the colonoscopist,” he said.

The American College of Gastroenterology and the American Society for Gastrointestinal Endoscopy Quality Task Force recently updated recommended benchmarks for colonoscopists for detecting polyps, said Aasma Shaukat, MD, AGAF, director of GI Outcomes Research at NYU Grossman School of Medicine, New York City, in further discussing the issue in the session.

They recommend an ADR of 35% overall, with the recommended benchmark being ≥ 40% for men aged 45 years or older and ≥ 30% for women aged 45 years or older, with a rate of 50% for patients aged 45 years or older with an abnormal stool test, Shaukat explained.

And “these are minimum benchmarks,” she said. “Multiple studies suggest that, in fact, the reported rates are much higher.”

Among key strategies for detecting elusive adenomas is the need to slow down withdrawal time during the colonoscopy in order to take as close a look as possible, Shaukat emphasized.

She noted research that her team has published showing that shorter physician withdrawal times were associated with an increased risk for cancers occurring prior to the recommended surveillance (P < .0001).

“Multiple studies have shown it isn’t just the time but the technique with withdrawal,” she added, underscoring the need to flatten as much of the mucosa and folds as possible during the withdrawal. “It’s important to perfect our technique.”

Sessile serrated lesions, with often subtle and indistinct borders, can be among the most difficult polyps to remove, Shaukat noted. Studies have shown that as many as 31% of sessile serrated lesions are incompletely resected, compared with about 7% of tubular adenomas.

 

Patient Compliance Can’t Be Counted On 

In addition to physician-related factors, patients themselves can also play a role in post-polypectomy cancer risk — specifically in not complying with surveillance recommendations, with reasons ranging from cost to the invasiveness and burden of undergoing a surveillance colonoscopy.

“Colonoscopies are expensive, and participation is suboptimal,” Gupta said.

One study of patients with high-risk adenomas showed that only 64% received surveillance, and many who did received it late, he noted.

This underscores the need for better prevention as well as follow-up strategies, he added.

Recommendations for surveillance exams from the World Endoscopy Organization range from every 3 to 10 years for patients with polyps, depending on the number, size, and type of polyps, to every 10 years for those with normal colonoscopies and no polyps.

A key potential solution for improving patient monitoring within those intervals is the fecal immunochemical test (FIT), a noninvasive stool test for blood that is substantially less burdensome than colonoscopy, Gupta said.

While the tests can’t replace the gold standard of colonoscopies, the tests nevertheless can play an important role in monitoring patients, he said.

Evidence supporting their benefits includes a recent important study of 2226 patients who underwent post-polypectomy colonoscopy, FIT (FOB Gold or OC-Sensor), or FIT-fecal DNA (Cologuard) testing, he noted.

The results showed that the OC-Sensor FIT had a 71% sensitivity, and FIT-fecal DNA had a sensitivity of 86% in the detection of CRC.

Importantly, the study found that a positive FIT result prior to the recommended surveillance colonoscopy reduced the time-to-diagnosis for CRC and advanced adenoma by a median of 30 and 20 months, respectively.

 

FIT Tests Potentially a ‘Major Advantage’

“The predictive models and these noninvasive tests are likely better than current guidelines for predicting who has metachronous advanced neoplasia or colon cancer,” Gupta said.

“For this reason, I really think that these alternatives have a potentially major advantage in reducing colonoscopy burdens. These alternatives are worthwhile of studying, and we really do need to consider them,” he said.

More broadly, the collective evidence points to factors that can and should be addressed with proactive diligence, Gupta noted.

“We need to be able to shift from using guidelines that are just based on the number, size, and histology of polyps to a scenario where we’re doing very high-quality colonoscopies with excellent ADR rates and complete polyp excision,” Gupta said.

Furthermore, “the use of tools for more precise risk stratification could result in a big, low-risk group that could just require 10-year colonoscopy surveillance or maybe even periodic noninvasive surveillance, and a much smaller high-risk group that we could really focus our attention on, doing surveillance colonoscopy every 3-5 years or maybe even intense noninvasive surveillance.”

Gupta’s disclosures included relationships with Guardant Health, Universal DX, CellMax, and Geneoscopy. Shaukat’s disclosures included relationships with Iterative Health and Freenome.

A version of this article appeared on Medscape.com.


FROM DDW 2025

Precision-Medicine Approach Improves IBD Infliximab Outcomes


SAN DIEGO — In tough-to-treat chronic inflammatory bowel disease (IBD), a precision-medicine strategy based on patients’ molecular profiles showed efficacy in guiding anti–tumor necrosis factor (TNF) therapy treatment decisions to improve outcomes.

“This is the first study implementing multi-biomarker signatures in informed decisions instead of single biomarker–based trial algorithms [in IBD],” said first author Florian Tran, MD, in presenting the late-breaking study at Digestive Disease Week (DDW) 2025.

“We have now been able to demonstrate for the first time that precision medicine can be successfully applied in the context of chronic inflammatory bowel diseases, leading to improved long-term outcomes,” said Tran, a professor of pathophysiology of chronic inflammation at the Institute of Clinical Molecular Biology and department of internal medicine, Kiel University and University Hospital Schleswig Holstein, Kiel, Germany.

Anti-TNF therapies can be highly effective in a range of immune-mediated inflammatory diseases, including IBD, but not all patients respond. Individual candidate biomarkers that could better predict a response have shown relatively low predictive power or have failed to replicate in independent studies.

In previous research, Tran and colleagues reported identification of early dynamic molecular changes in the blood that are more robustly predictive of responses to anti-TNF therapy.

“We have generated compelling evidence that therapy-induced changes in inflammatory pathways can reliably predict patient outcomes,” he said.

Among them are dynamic transcriptome changes that emerge early during treatment and can predict response to anti-TNF therapy, as well as clinical response trajectories that differ based on subgroups.

To further investigate the benefits of the multi-biomarker signatures collectively, as opposed to single biomarker–based algorithms, Tran and colleagues conducted the phase 3, open-label GUIDE-IBD trial, enrolling 102 adults with a confirmed diagnosis of Crohn’s disease (CD) or ulcerative colitis (UC) at three German university hospitals between February 2021 and January 2024.

All patients had been assigned the anti-TNF drug infliximab for the first time.

Study participants were randomized to receive either molecular-guided care or standard medical care, with stratification based on diagnosis, recruiting center, and baseline corticosteroid use.

Those in the molecular-guidance group received real-time molecular assessments at baseline and weeks 2, 6, 14, and 26, including peripheral blood samples and biopsies analyzed for known messenger RNA–based biomarkers.

The assessments also looked at infliximab and anti-drug antibody levels.

Molecular reports were then provided through molecular medicine boards for patients in the molecular-guidance group at weeks 2, 14, 26, and 52, whereas data from the standard-care group were not communicated.

Based on the biomarker data, therapy decisions were made, such as adjustments to dosing, dosing intervals, and comedication, as well as switches in therapy.

The primary endpoint was comprehensive disease control, a combined endpoint defined as clinical remission (CD Activity Index < 150, partial Mayo score < 2), endoscopic remission (simplified endoscopic score for CD ≤ 4 [≤ 2 for isolated ileal disease], endoscopic Mayo score ≤ 1), or biochemical remission (C-reactive protein < 5 mg/L, fecal calprotectin < 250 mg/g).

A total of 87 patients completed the study with primary endpoint data available to week 52: 38 in the molecular-guidance group and 49 in the standard-care group. In each group, approximately half of the patients had CD and half had UC.

For the primary endpoint, comprehensive disease control was significantly more frequent in the molecular care group at week 52 (55.3%) than in the standard-care group (26.5%), with an absolute difference of 29% (P = .0072).
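As a back-of-the-envelope check, those percentages are consistent with roughly 21 of 38 molecular-guidance patients and 13 of 49 standard-care patients reaching the endpoint; the per-arm event counts below are inferred from the reported rates, not stated in the study:

```python
# Event counts (21 and 13) are inferred from the reported percentages
# and arm sizes; they are illustrative, not reported figures.
molecular = 21 / 38  # molecular-guided care arm
standard = 13 / 49   # standard-care arm

print(f"{molecular:.1%}")  # 55.3%
print(f"{standard:.1%}")   # 26.5%
print(f"{(molecular - standard) * 100:.0f} points")  # 29 points absolute difference
```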

Furthermore, the secondary endpoint of the combined rate of endoscopic and clinical remission at week 52 was also higher in the molecular group (60.5%) than in the standard-care group (32.7%; P = .0163).

An exploratory analysis further showed therapy switches were more common in the molecular group (47%) than in the standard-care group (29%), with an increased rate of drug switching between weeks 14 and 26.

More patients in the molecular guidance group achieved comprehensive disease control (deep remission) at week 52 (P = .0135).

“The [molecular guidance] group was more likely to switch therapies after the induction period, reducing the number of patients who are suboptimally treated under infliximab,” Tran noted.

The results underscore that “even in the absence of a single ‘magic’ biomarker, precision medicine can materially improve patient outcomes by integrating complex molecular data into everyday clinical decisions, enabling more effective therapy choices and reducing unnecessary drug side effects,” he said.

 

Approach Pioneered in Oncology

Tran noted that the collaboration necessary for the approach was based on strategies pioneered in cancer centers, which have resulted in significant improvements in cancer therapy outcomes.

“For the first time in IBD, we have now integrated multidimensional molecular data and innovative drug-dosing models into these inflammation boards,” he said.

Key components of the intervention include a highly structured patient care process with fixed assessment timepoints, as well as “a multidisciplinary, quality-controlled decision-making process that is meticulously documented,” he explained.

The results underscore that “we must move away from sole physician–driven treatment decisions in IBD toward a structured expert-board model for therapy decision-making,” he said.

 

Benefits in Other IBD Therapies Unclear

Commenting on the study, Ashwin N. Ananthakrishnan, MBBS, MPH, AGAF, an associate professor of medicine with Massachusetts General Hospital, Boston, noted that “this approach may help optimize existing treatment early and ensure that ineffective treatments don’t get dragged on.”

Importantly, however, a key limitation is that “this does not tell you upfront whether a treatment is likely to work or what the best treatment is,” Ananthakrishnan said in an interview. “That is a critically important unmet need in IBD.”

While doses are escalated, if needed, with infliximab and other biologic drugs, that may not be the case with other therapies, he explained.

“For other agents, such as JAK [Janus kinase] inhibitors, we actually start at a higher dose and then reduce for maintenance,” said Ananthakrishnan.

How this approach would work for these drugs or “for drugs where blood levels are not reflective of efficacy is also not clear,” he said.

Tran’s disclosures included relationships with AbbVie, Bristol-Myers-Squibb, CED Service, Celltrion Healthcare, Eli Lilly, Ferring Pharmaceutical, Janssen, Johnson & Johnson, Takeda, LEK Consulting, and Sanofi/Regeneron. Ananthakrishnan had no disclosures to report.

A version of this article appeared on Medscape.com.


Papilla Sphincterotomy Shows No Risk Reduction in Pancreas Divisum


SAN DIEGO — In treating pancreas divisum, the common use of endoscopic retrograde cholangiopancreatography (ERCP) with minor papilla endoscopic sphincterotomy showed no significant benefit over a sham procedure, suggesting that patients can be spared the intervention, which can carry risks of its own.

“This is a topic that has been debated for decades,” said first author Gregory A. Coté, MD, AGAF, professor of medicine and head of the Division of Gastroenterology & Hepatology, Oregon Health & Science University, Portland, Oregon.

“Many doctors believe the procedure helps and offer it because we have limited options to help our patients, whereas others believe the procedure is harmful and doesn’t help,” he explained in a press briefing for the late-breaking study, presented at Digestive Disease Week (DDW) 2025.

The study’s findings supported the latter argument.

“Patients who underwent ERCP with sphincterotomy were just as likely as those who did not have this procedure to develop acute pancreatitis again,” Coté reported.

While clinical guidelines currently recommend ERCP as treatment for pancreas divisum, “these guidelines are likely to change based on this study,” he said.

Pancreas divisum, occurring in about 7%-10% of people, is an anatomic variation that can represent an obstructive risk factor for acute recurrent pancreatitis.

The common use of ERCP with minor papilla endoscopic sphincterotomy to treat the condition is based on prior retrospective studies showing that, among patients who had developed acute pancreatitis, up to 70% of those who received the treatment never developed acute pancreatitis again. However, there had been no studies comparing the treatment with a control group.

Coté and colleagues conducted the multicenter SHARP trial, in which 148 patients with pancreas divisum were enrolled between September 2018 and August 2024 and randomized to receive either ERCP with minor papilla endoscopic sphincterotomy (n = 75) or a sham treatment (n = 73).

The patients, who had a median age of 51 years, had a median of 3 acute pancreatitis episodes prior to randomization.

With a median follow-up of 33.5 months (range, 6-48 months), 34.7% of patients in the ERCP arm experienced an acute pancreatitis episode compared with 43.8% in the sham arm, a hazard ratio of 0.83 after adjustment for duct size and number of prior episodes, which was not statistically significant (P = .27).
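For orientation, those rates are consistent with roughly 26 of 75 ERCP-arm and 32 of 73 sham-arm patients having an episode (counts inferred from the percentages, not reported directly); note that the crude risk ratio differs slightly from the reported hazard ratio, which is a time-to-event estimate adjusted for duct size and prior episodes:

```python
# Event counts (26 and 32) are inferred from the reported percentages
# and arm sizes; they are illustrative, not reported figures.
ercp = 26 / 75  # ERCP with minor papilla sphincterotomy arm
sham = 32 / 73  # sham-procedure arm

print(f"{ercp:.1%}")  # 34.7%
print(f"{sham:.1%}")  # 43.8%
# Crude risk ratio; the study's adjusted hazard ratio was 0.83.
print(round(ercp / sham, 2))  # 0.79
```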

A subgroup analysis further showed no indication of a treatment effect based on age, diabetes status, sex, alcohol or tobacco use, or other factors.

“Compared with a sham ERCP group, we found that minor papillotomy did not reduce the risk of acute pancreatitis, incident chronic pancreatitis, endocrine pancreatic insufficiency or diabetes, or pancreas-related pain events,” Coté said.

The findings are particularly important because the treatment itself is associated with some risks, he added.

“Ironically, the problem with this procedure is that it can cause acute pancreatitis in 10%-20% of patients and may instigate other issues later,” such as the development of scarring of the pancreas related to incisions in the procedure.

“No one wants to offer an expensive procedure that has its own risks if it doesn’t help,” Coté said.

Based on the findings, “pancreas divisum anatomy should no longer be considered an indication for ERCP, even for idiopathic acute pancreatitis,” he concluded.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

SAN DIEGO — In treating pancreas divisum, the common use of endoscopic retrograde cholangiopancreatography (ERCP) with minor papilla endoscopic sphincterotomy showed no significant benefit over a sham procedure, suggesting that patients can be spared the intervention, which can carry risks of its own.

“This is a topic that has been debated for decades,” said first author Gregory A. Coté, MD, AGAF, Division Head, professor of medicine, Division of Gastroenterology & Hepatology, Oregon Health & Science University, in Portland, Oregon.

Dr. Gregory A. Cote



“Many doctors believe the procedure helps and offer it because we have limited options to help our patients, whereas others believe the procedure is harmful and doesn’t help,” he explained in a press briefing for the late-breaking study, presented at Digestive Disease Week (DDW) 2025.

The study’s findings supported the latter argument.

“Patients who underwent ERCP with sphincterotomy were just as likely as those who did not have this procedure to develop acute pancreatitis again,” Coté reported.

While clinical guidelines currently recommend ERCP as treatment for pancreas divisum, “these guidelines are likely to change based on this study,” he said.

Pancreas divisum, occurring in about 7%-10% of people, is an anatomic variation that can represent an obstructive risk factor for acute recurrent pancreatitis.

The common use of ERCP with minor papilla endoscopic sphincterotomy to treat the condition is based on prior retrospective studies showing that in patients who did develop acute pancreatitis, up to 70% with the treatment never developed acute pancreatitis again. However, there have been no studies comparing the use of the treatment with a control group.

Coté and colleagues conducted the multicenter SHARP trial, in which 148 patients with pancreas divisum were enrolled between September 2018 and August 2024 and randomized to receive either ERCP with minor papilla endoscopic sphincterotomy (n = 75) or a sham treatment (n = 73).

The patients, who had a median age of 51 years, had a median of 3 acute pancreatitis episodes prior to randomization.

With a median follow-up of 33.5 months (range, 6-48 months), 34.7% of patients in the ERCP arm experienced an acute pancreatitis incident compared with 43.8% in the sham arm, for a hazard ratio of 0.83 after adjusting for duct size and the number of episodes, which was not a statistically significant difference (P = .27).

A subgroup analysis further showed no indication of a treatment effect based on factors including age, diabetes status, sex, alcohol or tobacco use, or other factors.

“Compared with a sham ERCP group, we found that minor papillotomy did not reduce the risk of acute pancreatitis, incident chronic pancreatitis, endocrine pancreatic insufficiency or diabetes, or pancreas-related pain events,” Coté said.

The findings are particularly important because the treatment itself is associated with some risks, he added.

“Ironically, the problem with this procedure is that it can cause acute pancreatitis in 10%-20% of patients and may instigate other issues later,” such as the development of scarring of the pancreas related to incisions in the procedure.

“No one wants to offer an expensive procedure that has its own risks if it doesn’t help,” Coté said.

Based on the findings, “pancreas divisum anatomy should no longer be considered an indication for ERCP, even for idiopathic acute pancreatitis,” he concluded.

A version of this article appeared on Medscape.com.

SAN DIEGO — In treating pancreas divisum, the common use of endoscopic retrograde cholangiopancreatography (ERCP) with minor papilla endoscopic sphincterotomy showed no significant benefit over a sham procedure, suggesting that patients can be spared the intervention, which can carry risks of its own.

“This is a topic that has been debated for decades,” said first author Gregory A. Coté, MD, AGAF, Division Head, professor of medicine, Division of Gastroenterology & Hepatology, Oregon Health & Science University, in Portland, Oregon.

Dr. Gregory A. Cote



“Many doctors believe the procedure helps and offer it because we have limited options to help our patients, whereas others believe the procedure is harmful and doesn’t help,” he explained in a press briefing for the late-breaking study, presented at Digestive Disease Week (DDW) 2025.

The study’s findings supported the latter argument.

“Patients who underwent ERCP with sphincterotomy were just as likely as those who did not have this procedure to develop acute pancreatitis again,” Coté reported.

While clinical guidelines currently recommend ERCP as treatment for pancreas divisum, “these guidelines are likely to change based on this study,” he said.

Pancreas divisum, occurring in about 7%-10% of people, is an anatomic variation that can represent an obstructive risk factor for acute recurrent pancreatitis.

The common use of ERCP with minor papilla endoscopic sphincterotomy to treat the condition is based on prior retrospective studies showing that in patients who did develop acute pancreatitis, up to 70% with the treatment never developed acute pancreatitis again. However, there have been no studies comparing the use of the treatment with a control group.

Coté and colleagues conducted the multicenter SHARP trial, in which 148 patients with pancreas divisum were enrolled between September 2018 and August 2024 and randomized to receive either ERCP with minor papilla endoscopic sphincterotomy (n = 75) or a sham treatment (n = 73).

The patients, who had a median age of 51 years, had a median of 3 acute pancreatitis episodes prior to randomization.

With a median follow-up of 33.5 months (range, 6-48 months), 34.7% of patients in the ERCP arm experienced an acute pancreatitis incident compared with 43.8% in the sham arm, for a hazard ratio of 0.83 after adjusting for duct size and the number of episodes, which was not a statistically significant difference (P = .27).

A subgroup analysis further showed no indication of a treatment effect based on factors including age, diabetes status, sex, alcohol or tobacco use, or other factors.

“Compared with a sham ERCP group, we found that minor papillotomy did not reduce the risk of acute pancreatitis, incident chronic pancreatitis, endocrine pancreatic insufficiency or diabetes, or pancreas-related pain events,” Coté said.

The findings are particularly important because the treatment itself is associated with some risks, he added.

“Ironically, the problem with this procedure is that it can cause acute pancreatitis in 10%-20% of patients and may instigate other issues later,” such as the development of scarring of the pancreas related to incisions in the procedure.

“No one wants to offer an expensive procedure that has its own risks if it doesn’t help,” Coté said.

Based on the findings, “pancreas divisum anatomy should no longer be considered an indication for ERCP, even for idiopathic acute pancreatitis,” he concluded.

A version of this article appeared on Medscape.com.

FROM DDW 2025

Do GLP-1s Lower CRC Risk in Patients With Obesity and T2D?


Patients with obesity and type 2 diabetes treated with glucagon-like peptide 1 (GLP-1) receptor agonists had a significantly lower risk for colorectal cancer (CRC) and associated mortality than those undergoing bariatric surgery, new research showed.

CRC risk was also lower for patients taking GLP-1s than for the general population.

“Our findings show we might need to evaluate these therapies beyond their glycemic or weight loss [effects],” said first author Omar Al Ta’ani, MD, of the Allegheny Health Network, Pittsburgh.

This supports future prospective studies examining GLP-1s for CRC reduction, added Ta’ani, who presented the results at Digestive Disease Week (DDW) 2025.

Patients with type 2 diabetes and obesity are known to have a higher risk for CRC, stemming from metabolic risk factors. Whereas prior studies suggested that GLP-1s decrease the risk for CRC compared with other antidiabetic medications, studies looking at the risk for CRC associated with bariatric surgery have had more mixed results, Ta’ani said.

For the comparison, Ta’ani and colleagues conducted a retrospective analysis of the TriNetX database, identifying patients with type 2 diabetes and obesity (body mass index [BMI] > 30) enrolled in the database between 2005 and 2019.

Overall, the study included 94,098 GLP-1 users and 24,969 patients who underwent bariatric surgery. Those with a prior history of CRC were excluded.

Using propensity score matching, patients treated with GLP-1s were matched 1:1 with patients who had bariatric surgery based on wide-ranging factors including age, race, gender, demographics, diseases, medications, personal and family history, and hemoglobin A1c.
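The 1:1 propensity score matching described above can be sketched in miniature. This is a hypothetical illustration of the general technique (greedy nearest-neighbor matching on propensity scores with a caliper), not the TriNetX implementation; the function name, caliper value, and toy scores are all invented for the example. In a real analysis, each score would come from a model of treatment assignment fitted on the matching covariates (age, race, sex, hemoglobin A1c, and so on).

```python
def match_one_to_one(treated, controls, caliper=0.05):
    """Greedily pair each treated score with the closest unused control
    score within `caliper`; returns a list of (treated, control) pairs."""
    available = sorted(controls)
    pairs = []
    for t in sorted(treated):
        # Find the closest remaining control score.
        best = min(available, key=lambda c: abs(c - t), default=None)
        if best is not None and abs(best - t) <= caliper:
            pairs.append((t, best))
            available.remove(best)  # each control is used at most once
    return pairs

# Toy propensity scores (probability of receiving a GLP-1 vs surgery).
glp1_scores = [0.31, 0.44, 0.62, 0.90]
surgery_scores = [0.30, 0.47, 0.60, 0.61, 0.10]

pairs = match_one_to_one(glp1_scores, surgery_scores)
# 0.90 has no control within the caliper and is left unmatched,
# which is why matched cohorts are smaller than the full samples
# (here: 94,098 and 24,969 shrink to 21,022 per group).
```

Unmatched patients are dropped from both arms, which trades sample size for comparability between the groups.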

After the propensity matching, each group included 21,022 patients. About 64% in each group were women; their median age was 53 years and about 65% were White.

Overall, the results showed that patients on GLP-1s had a significantly lower CRC risk compared with those who had bariatric surgery (adjusted hazard ratio [aHR], 0.29; P < .0001). The lower risk was also observed among those with high obesity (defined as BMI > 35) compared with those who had surgery (aHR, 0.39; P < .0001).
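As a quick arithmetic aid for the figures above: an adjusted hazard ratio below 1 corresponds to a relative reduction in the hazard of the event. A minimal, purely illustrative conversion (the helper name is ours, not the study's):

```python
def pct_hazard_reduction(hr):
    """Convert a hazard ratio into the percent reduction in hazard."""
    return round((1 - hr) * 100, 1)

print(pct_hazard_reduction(0.29))  # aHR for GLP-1s vs surgery -> 71.0
print(pct_hazard_reduction(0.39))  # BMI > 35 subgroup -> 61.0
```

Note that this is a reduction in the instantaneous hazard, not in cumulative lifetime risk; the two coincide only approximately when events are rare.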

The results were consistent across genders; however, the differences between GLP-1s and bariatric surgery were not observed in the 18- to 45-year-old age group (BMI > 30, P = .0809; BMI > 35, P = .2318).

Compared with the general population, patients on GLP-1s also had a reduced risk for CRC (aHR, 0.28; P < .0001); however, the difference was not observed between the bariatric surgery group and the general population (aHR, 1.11; P = .3).

Among patients with type 2 diabetes with CRC and a BMI > 30, the 5-year mortality rate was lower in the GLP-1 group vs the bariatric surgery group (aHR, 0.42; P < .001).

Speculating on the mechanisms of GLP-1s that could result in a greater reduction in CRC risk, Ta’ani explained that the key pathways linking type 2 diabetes, obesity, and CRC include hyperinsulinemia, chronic inflammation, and impaired immune surveillance.

Studies have shown that GLP-1s may be more effective in addressing the collective pathways, he said. They “may improve insulin resistance and lower systemic inflammation.” 

Furthermore, GLP-1s “inhibit tumor pathways like Wnt/beta-catenin and PI3K/Akt/mTOR signaling, which promote apoptosis and reduce tumor cell proliferation,” he added.

 

Bariatric Surgery Findings Questioned

Meanwhile, “bariatric surgery’s impact on CRC remains mixed,” said Ta’ani.

Commenting on the study, Vance L. Albaugh, MD, an assistant professor of metabolic surgery at the Metamor Institute, Pennington Biomedical Research Center, Baton Rouge, Louisiana, noted that prior studies, including a recent meta-analysis, suggest a potential benefit of bariatric surgery in cancer prevention.

“I think the [current study] is interesting, but it’s been pretty [well-reported] that bariatric surgery does decrease cancer incidence, so I find it questionable that this study shows the opposite of what’s in the literature,” Albaugh, an obesity medicine specialist and bariatric surgeon, said in an interview.

Ta’ani acknowledged the study’s important limitations, including that with a retrospective design, causality cannot be firmly established.

And, as noted by an audience member in the session’s Q&A, the study ended in 2019, which was before GLP-1s had taken off as anti-obesity drugs and before US Food and Drug Administration approvals for weight loss.

However, participants were matched based on BMI, Ta’ani pointed out.

Albaugh agreed that the study ending in 2019 was a notable limitation. However, the relatively long study period — extending from 2005 to 2019 — was a strength.

“It’s nice to have a very long period to capture people who are diagnosed, because it takes a long time to develop CRC,” he said. “To evaluate effects [of more recent drug regimens], you would not be able to have the follow-up they had.”

Other study limitations included the need to adjust for ranges of obesity severity, said Albaugh. “The risk of colorectal cancer is probably much different for someone with a BMI of 60 vs a BMI of 30.” 

Ultimately, a key question the study results raise is whether GLP-1 drugs have protective effects above and beyond that of weight loss, he said.

“I think that’s a very exciting question and that’s what I think the researchers’ next work should really focus on.”

Ta’ani had no disclosures to report. Albaugh reported that he had consulted for Novo Nordisk.

A version of this article appeared on Medscape.com.

FROM DDW 2025

ctDNA Positivity in Colorectal Cancer Links to Chemotherapy Response


Molecular residual disease (MRD) positivity, as detected via circulating tumor DNA (ctDNA) following curative resection, identified patients with stage II or III colorectal cancer (CRC) who derived a significant disease-free survival benefit from adjuvant chemotherapy, results of the BESPOKE study showed.

“These findings highlight the value of utilizing ctDNA to select which patients should receive management chemotherapy and which patients can be potentially spared chemotherapy’s physical, emotional, and financial toxicities without compromising their long-term outcomes,” said first author Kim Magee of Natera, a clinical genetic testing company in Austin, Texas.

“ctDNA is emerging as the most powerful and prognostic biomarker in colorectal cancer,” said Magee, who presented the findings at Digestive Disease Week (DDW) 2025.

In stage II CRC, as many as 80% of patients are cured by surgery alone, while only about 5% benefit from chemotherapy. In stage III CRC, about half of patients are cured by surgery alone, while only 20% benefit from chemotherapy, and 30% recur despite chemotherapy, Magee explained.

The inability to pinpoint which patients will most benefit from chemotherapy means “we know we are needlessly treating [many] of these patients,” she said.

 

ctDNA Offers Insights Into Tumor’s Real-Time Status

Just as cells release fragments (cell-free DNA) into the blood as they regenerate, tumor cells also release fragments — ctDNA — which can represent a biomarker of a cancer’s current state, Magee explained.

Because the DNA fragments have a half-life of only about 2 hours, they represent a key snapshot in real time, “as opposed to imaging, which can take several weeks or months to show changes,” she said.
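The "real-time snapshot" point follows from simple exponential-decay arithmetic. A minimal sketch, assuming the roughly 2-hour half-life cited above (the function name and time points are ours, for illustration only):

```python
HALF_LIFE_HOURS = 2.0  # approximate ctDNA fragment half-life cited above

def fraction_remaining(hours):
    """Fraction of circulating fragments surviving after `hours`,
    assuming first-order (exponential) clearance."""
    return 0.5 ** (hours / HALF_LIFE_HOURS)

# After half a day, under 2% of the original fragments remain,
# so a blood draw mostly reflects the tumor's recent shedding.
print(round(fraction_remaining(12), 4))  # -> 0.0156
```

This rapid turnover is what distinguishes ctDNA from imaging, which as noted can lag tumor changes by weeks or months.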

To determine the effects of ctDNA testing on treatment decisions and asymptomatic recurrence rates, Magee and colleagues analyzed data from the multicenter, prospective study, which used the Signatera (Natera) residual disease test.

The study included 1794 patients with resected stage II-III CRC who were treated with the standard of care between May 2020 and March 2023 who had complete clinical and laboratory data available.

ctDNA was collected 2-6 weeks post surgery and at surveillance months 2, 4, 6, and every 3 months through month 24.

Among the 1166 patients included in the final analysis, 694 (59.5%) received adjuvant chemotherapy, and 472 (40.5%) received no chemotherapy.

Among those with stage II CRC, the postoperative MRD positivity rate was 7.54%; in those with stage III disease, it was 28.35%.

Overall, 16.1% of patients had a recurrence by the trial end at 24 months.

The results showed that among patients who tested negative for ctDNA, the disease-free survival estimates were highly favorable, at 91.8% for stage II and 87.4% for stage III CRC.

Comparatively, for those who were ctDNA-positive, disease-free survival rates were just 45.9% and 35.5%, respectively, regardless of whether those patients received adjuvant chemotherapy.

At the study’s first ctDNA surveillance timepoint, patients who were ctDNA-positive with stage II and III CRC combined had substantially worse disease-free survival than patients who were ctDNA-negative (HR, 26.4; P < .0001).

 

Impact of Chemotherapy

Patients who were found to be MRD-positive on ctDNA testing and treated with chemotherapy had a 40.3% 2-year disease-free survival rate compared with just 24.7% among MRD-positive patients who did not receive chemotherapy.

Meanwhile, those who were MRD-negative and treated with chemotherapy had a substantially higher 2-year disease-free survival rate of 89.7% — nearly identical to the 89.5% observed in the no-chemotherapy group.

The findings underscored that “the adjuvant chemotherapy benefits were only observed among those who were ctDNA-positive,” Magee said.

“ctDNA can guide postsurgical treatment decisions by identifying which patients are most likely to benefit from chemotherapy, and in the surveillance setting, ctDNA can predict recurrence — usually ahead of scans,” she added. “This opens the opportunity to intervene and give those patients a second chance at cure.”

On the heels of major recent advances including CT, MRI, and PET-CT, “we believe that ctDNA represents the next major pivotal advancement in monitoring and eventually better understanding cancer diagnostics,” Magee said.

 

Commenting on the study, William M. Grady, MD, AGAF, medical director of the Fred Hutchinson Cancer Center Gastrointestinal Cancer Prevention Clinic, Seattle, said the BESPOKE trial represents a “well-done” study, adding to research underscoring that “MRD testing is a more accurate prognostic assay than the current standards of CT scan and CEA [carcinoembryonic antigen, a tumor marker] testing.”

However, “a limitation is that this is 2 years of follow-up, [while] 5-year follow-up data would be ideal,” he said in an interview, noting, importantly, that “a small number of patients who have no evidence of disease (NED) at 2 years develop recurrence by 5 years.”

Furthermore, more research demonstrating the outcomes of MRD detection is needed, Grady added.

“A caveat is that studies are still needed showing that if you change your care of patients based on the MRD result, that you improve outcomes,” he said. “These studies are being planned and initiated at this time, from my understanding.”

Oncologists treating patients with CRC are commonly performing MRD assessment with ctDNA assays; however, Grady noted that the practice is still not the standard of care.

Regarding the suggestion of ctDNA representing the next major, pivotal step in cancer monitoring, Grady responded that “I think this is aspirational, and further studies are needed to make this claim.”

However, “it does look like it has the promise to turn out to be true.”

Magee is an employee of Natera. Grady has been on the scientific advisory boards for Guardant Health and Freenome and has consulted for Karius.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Molecular residual disease (MRD) positivity, as detected via circulating tumor (ct) DNA following curative resection, was significantly associated with improved disease-free survival after chemotherapy in patients with stage II or III colorectal cancer (CRC), the results of the BESPOKE study showed.

“These findings highlight the value of utilizing ctDNA to select which patients should receive management chemotherapy and which patients can be potentially spared chemotherapy’s physical, emotional, and financial toxicities without compromising their long-term outcomes,” said first author Kim Magee of Natera, a clinical genetic testing company in Austin, Texas.

“ctDNA is emerging as the most powerful and prognostic biomarker in colorectal cancer,” said Magee, who presented the findings at Digestive Disease Week (DDW) 2025.

In stage II CRC, as many as 80% of patients are cured by surgery alone, while only about 5% benefit from chemotherapy. In stage III CRC, about half of patients are cured by surgery alone, while only 20% benefit from chemotherapy, and 30% recur despite chemotherapy, Magee explained.

The inability to pinpoint which patients will most benefit from chemotherapy means “we know we are needlessly treating [many] of these patients,” she said.

 

ctDNA Offers Insights Into Tumor’s Real-Time Status

Just as cells release fragments (cell-free DNA) into the blood as they regenerate, tumor cells also release fragments — ctDNA — which can represent a biomarker of a cancer’s current state, Magee explained.

Because the DNA fragments have a half-life of only about 2 hours, they represent a key snapshot in real time, “as opposed to imaging, which can take several weeks or months to show changes,” she said.

To determine the effects of ctDNA testing on treatment decisions and asymptomatic recurrence rates, Magee and colleagues analyzed data from the multicenter, prospective study, which used the Signatera (Natera) residual disease test.

The study included 1794 patients with resected stage II-III CRC who were treated with the standard of care between May 2020 and March 2023 who had complete clinical and laboratory data available.

ctDNA was collected 2-6 weeks post surgery and at surveillance months 2, 4, 6, and every 3 months through month 24.

Among the 1166 patients included in a final analysis, 694 (59.5%) patients received adjunctive chemotherapy, and 472 (40.5%) received no chemotherapy.

Among those with stage II CRC, a postoperative MRD positivity rate was 7.54%, while the rate in those with stage III disease was 28.35%.

Overall, 16.1% of patients had a recurrence by the trial end at 24 months.

The results showed that among patients who tested negative for ctDNA, the disease-free survival estimates were highly favorable, at 91.8% for stage II and 87.4% for stage III CRC.

Comparatively, for those who were ctDNA-positive, disease-free survival rates were just 45.9% and 35.5%, respectively, regardless of whether those patients received adjunctive chemotherapy.

At the study’s first ctDNA surveillance timepoint, patients who were ctDNA-positive with stage II and III CRC combined had substantially worse disease-free survival than patients who were ctDNA-negative (HR, 26.4; P < .0001).

 

Impact of Chemotherapy

Patients who were found to be MRD-positive on ctDNA testing and treated with chemotherapy had a 40.3% 2-year disease-free survival rate compared with just 24.7% among MRD-positive patients who did not receive chemotherapy.

Meanwhile, those who were MRD-negative and treated with chemotherapy had a substantially higher 2-year disease-free survival rate of 89.7% — nearly identical to the 89.5% observed in the no-chemotherapy group.


Molecular residual disease (MRD) positivity, detected via circulating tumor DNA (ctDNA) after curative resection, identified patients with stage II or III colorectal cancer (CRC) whose disease-free survival was significantly improved by adjuvant chemotherapy, the results of the BESPOKE study showed.

“These findings highlight the value of utilizing ctDNA to select which patients should receive adjuvant chemotherapy and which patients can potentially be spared chemotherapy’s physical, emotional, and financial toxicities without compromising their long-term outcomes,” said first author Kim Magee of Natera, a clinical genetic testing company in Austin, Texas.

“ctDNA is emerging as the most powerful and prognostic biomarker in colorectal cancer,” said Magee, who presented the findings at Digestive Disease Week (DDW) 2025.

In stage II CRC, as many as 80% of patients are cured by surgery alone, while only about 5% benefit from chemotherapy. In stage III CRC, about half of patients are cured by surgery alone, while only 20% benefit from chemotherapy, and 30% recur despite chemotherapy, Magee explained.

The inability to pinpoint which patients will most benefit from chemotherapy means “we know we are needlessly treating [many] of these patients,” she said.

 

ctDNA Offers Insights Into Tumor’s Real-Time Status

Just as cells release fragments (cell-free DNA) into the blood as they regenerate, tumor cells also release fragments — ctDNA — which can represent a biomarker of a cancer’s current state, Magee explained.

Because the DNA fragments have a half-life of only about 2 hours, they represent a key snapshot in real time, “as opposed to imaging, which can take several weeks or months to show changes,” she said.
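The turnover quoted above follows simple exponential decay. A minimal sketch, assuming the roughly 2-hour half-life cited here (the exact kinetics vary between patients), shows why ctDNA reflects tumor status on a timescale of hours rather than weeks:

```python
# Exponential decay with a ~2-hour half-life (assumed value from the article):
# fraction of ctDNA fragments still circulating after t hours.
def fraction_remaining(t_hours, half_life_hours=2.0):
    return 0.5 ** (t_hours / half_life_hours)

print(fraction_remaining(2))   # 0.5
print(fraction_remaining(24))  # 0.000244140625
```

Within a single day, essentially all circulating fragments have turned over, which is why a blood draw approximates a real-time snapshot.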

To determine the effects of ctDNA testing on treatment decisions and asymptomatic recurrence rates, Magee and colleagues analyzed data from the multicenter, prospective BESPOKE study, which used the Signatera (Natera) residual disease test.

The study included 1794 patients with resected stage II-III CRC who were treated with the standard of care between May 2020 and March 2023 and had complete clinical and laboratory data available.

ctDNA was collected 2-6 weeks after surgery and during surveillance at months 2, 4, and 6, and then every 3 months through month 24.

Among the 1166 patients included in the final analysis, 694 (59.5%) received adjuvant chemotherapy, and 472 (40.5%) received no chemotherapy.

Among those with stage II CRC, the postoperative MRD positivity rate was 7.54%; in those with stage III disease, the rate was 28.35%.

Overall, 16.1% of patients had a recurrence by the trial end at 24 months.

The results showed that among patients who tested negative for ctDNA, the disease-free survival estimates were highly favorable, at 91.8% for stage II and 87.4% for stage III CRC.

Comparatively, for those who were ctDNA-positive, disease-free survival rates were just 45.9% and 35.5%, respectively, regardless of whether those patients received adjuvant chemotherapy.

At the study’s first ctDNA surveillance timepoint, patients who were ctDNA-positive with stage II and III CRC combined had substantially worse disease-free survival than patients who were ctDNA-negative (hazard ratio [HR], 26.4; P < .0001).

 

Impact of Chemotherapy

Patients who were found to be MRD-positive on ctDNA testing and treated with chemotherapy had a 40.3% 2-year disease-free survival rate compared with just 24.7% among MRD-positive patients who did not receive chemotherapy.

Meanwhile, those who were MRD-negative and treated with chemotherapy had a substantially higher 2-year disease-free survival rate of 89.7% — nearly identical to the 89.5% observed in the no-chemotherapy group.

The findings underscored that “the adjuvant chemotherapy benefits were only observed among those who were ctDNA-positive,” Magee said.

“ctDNA can guide postsurgical treatment decisions by identifying which patients are most likely to benefit from chemotherapy, and in the surveillance setting, ctDNA can predict recurrence — usually ahead of scans,” she added. “This opens the opportunity to intervene and give those patients a second chance at cure.”

On the heels of major recent advances including CT, MRI, and PET-CT, “we believe that ctDNA represents the next major pivotal advancement in monitoring and eventually better understanding cancer diagnostics,” Magee said.

 

Dr. William M. Grady

Commenting on the study, William M. Grady, MD, AGAF, medical director of the Fred Hutchinson Cancer Center Gastrointestinal Cancer Prevention Clinic, Seattle, said the BESPOKE trial represents a “well-done” study, adding to research underscoring that “MRD testing is a more accurate prognostic assay than the current standards of CT scan and CEA [carcinoembryonic antigen, a tumor marker] testing.”

However, “a limitation is that this is 2 years of follow-up, [while] 5-year follow-up data would be ideal,” he said in an interview, noting, importantly, that “a small number of patients who have no evidence of disease (NED) at 2 years develop recurrence by 5 years.”

Furthermore, more research demonstrating the outcomes of MRD detection is needed, Grady added.

“A caveat is that studies are still needed showing that if you change your care of patients based on the MRD result, that you improve outcomes,” he said. “These studies are being planned and initiated at this time, from my understanding.”

Oncologists treating patients with CRC commonly perform MRD assessment with ctDNA assays; however, Grady noted that the practice is still not the standard of care.

Regarding the suggestion of ctDNA representing the next major, pivotal step in cancer monitoring, Grady responded that “I think this is aspirational, and further studies are needed to make this claim.”

However, “it does look like it has the promise to turn out to be true.”

Magee is an employee of Natera. Grady has been on the scientific advisory boards for Guardant Health and Freenome and has consulted for Karius.

A version of this article appeared on Medscape.com.

FROM DDW 2025

SGLT2 Inhibitors Reduce Portal Hypertension From Cirrhosis


SAN DIEGO — Patients with cirrhosis treated with sodium-glucose cotransporter 2 (SGLT2) inhibitors show significant reductions in a range of portal hypertension complications and all-cause mortality compared with those not receiving the drugs, new research shows.

“Our study found that SGLT2 inhibitors were associated with fewer portal hypertension complications and lower mortality, suggesting they may be a valuable addition to cirrhosis management,” first author Abhinav K. Rao, MD, of the Medical University of South Carolina, Charleston, South Carolina, told GI & Hepatology News.

The findings were presented at Digestive Disease Week (DDW) 2025.

Portal hypertension, a potentially life-threatening complication of cirrhosis, can be a key driver of additional complications, including ascites and gastroesophageal varices.

Current treatments such as beta-blockers can prevent some complications; however, more effective therapies are needed.

SGLT2 inhibitors are often used in the treatment of cardiovascular disease as well as metabolic dysfunction–associated steatohepatitis (MASH)–mediated liver disease; research is lacking regarding their effects in portal hypertension in the broader population of people with cirrhosis.

“The therapeutic efficacy of SGLT2 inhibitors might be related to their ability to improve vascular function, making them attractive in portal hypertension,” Rao explained.

To investigate, Rao and colleagues evaluated data on 637,079 patients with cirrhosis in the TriNetX database, which includes patients in the United States from 66 healthcare organizations.

Patients were divided into three subgroups: those with MASH-related, alcohol-associated, and other etiologies of cirrhosis.

Using 1:1 propensity score matching, patients in each subgroup were stratified by whether they had been treated with SGLT2 inhibitors, limited to those who initiated the drugs within 1 year of their cirrhosis diagnosis to prevent immortal time bias, and the groups were matched on other baseline characteristics.
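The 1:1 propensity matching described above can be illustrated with a toy sketch; the scores and the greedy nearest-neighbor pairing below are illustrative assumptions, not the study's actual data or implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy propensity scores (probability of being prescribed an SGLT2 inhibitor),
# assumed already estimated, e.g., by logistic regression on baseline covariates.
ps_treated = rng.uniform(0.2, 0.9, size=30)   # hypothetical treated patients
ps_control = rng.uniform(0.1, 0.8, size=80)   # hypothetical untreated patients

# Greedy 1:1 nearest-neighbor matching without replacement:
# each treated patient is paired with the closest still-unused control.
available = list(range(len(ps_control)))
pairs = []
for i, p in enumerate(ps_treated):
    j = min(available, key=lambda k: abs(p - ps_control[k]))
    pairs.append((i, j))
    available.remove(j)
```

Real analyses typically also impose a caliper (maximum allowed score distance) and check covariate balance after matching.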

For the primary outcome of all-cause mortality, with an overall median follow-up of 2 years, patients prescribed SGLT2 inhibitors in the MASH cirrhosis (n = 47,385), alcohol-associated cirrhosis (n = 107,844), and other etiologies of cirrhosis (n = 59,499) groups all had a significantly lower risk for all-cause mortality than those not prescribed SGLT2 inhibitors (P < .05 for all).

 

SGLT2 Inhibitors in MASH Cirrhosis

Specifically looking at the MASH cirrhosis group, Rao described outcomes of the two groups of 3026 patients each who were and were not treated with SGLT2 inhibitors.

The two groups had similar baseline rates of esophageal varices (25% in the SGLT2 inhibitor group vs 22% in the untreated group), ascites (19% in each group), and hepatic encephalopathy (HE; 19% in each group).

About 57% of patients in each treatment group used beta-blockers and 33% used glucagon-like peptide 1 (GLP-1) receptor agonists. Those with a history of liver transplantation, hemodialysis, or transjugular intrahepatic portosystemic shunt placement were excluded.

The secondary outcome results in those patients showed that treatment with SGLT2 inhibitors was associated with significantly reduced risks of developing portal hypertension complications including ascites, HE, spontaneous bacterial peritonitis (SBP), and hepatorenal syndrome (P < .05 for all).

Esophageal variceal bleeding was also reduced with SGLT2 inhibitors; however, the difference was not statistically significant.

 

Effects Diminished With Beta-Blocker Treatment

In a secondary analysis of patients in the MASH cirrhosis group treated with one of two nonselective beta-blockers (n = 509 and n = 2561), the beneficial effects of SGLT2 inhibitors on portal hypertension complications, with the exception of HE and SBP, were somewhat diminished, likely because patients were already benefiting from the beta-blockers, Rao noted.

Other Groups

In the non–MASH-related cirrhosis groups, patients prescribed SGLT2 inhibitors also had a reduced risk for specific portal hypertension complications, as well as for any portal hypertension complication (P < .05), Rao noted.

Overall, the findings add to previous studies on SGLT2 inhibitors in MASH and expand on the possible benefits, he said.

“Our findings validate these [previous] results, suggest potential benefits for patients with other types of liver disease, and raise the possibility of a beneficial effect in portal hypertension,” he said.

“Given the marked reduction in portal hypertension complications after SGLT2 inhibitor initiation, the associated survival benefit may not be surprising,” he noted.

“However, we were intrigued by the consistent reduction in portal hypertension complications across all cirrhosis types, especially since SGLT2 inhibitors are most commonly used in patients with diabetes who have MASH-mediated liver disease.”

 

‘Real World Glimpse’ at SGLT2 Inhibitors; Limitations Need Noting 

Commenting on the study, Rotonya M. Carr, MD, Division Head of Gastroenterology at the University of Washington, Seattle, said the study sheds important light on an issue previously addressed only in smaller cohorts.

Dr. Rotonya M. Carr

“To date, there have only been a few small prospective, retrospective, and case series studies investigating SGLT2 inhibitors in patients with cirrhosis,” she told GI & Hepatology News.

“This retrospective study is a real-world glimpse at how patients with cirrhosis may fare on these drugs — very exciting data.”

Carr cautioned, however, that, in addition to the retrospective study design, limitations included that the study doesn’t provide details on the duration of therapy, preventing an understanding of whether the results represent chronic, sustained use of SGLT2 inhibitors.

“[Therefore], we cannot interpret these results to mean that chronic, sustained use of SGLT2 inhibitors is beneficial, or does not cause harm, in patients with cirrhosis.”

“While these data are provocative, more work needs to be done before we understand the full safety and efficacy of SGLT2 inhibitors for patients with cirrhosis,” Carr added.

“However, these data are very encouraging, and I am optimistic that we will indeed see both SGLT2 inhibitors and GLP-1 receptor agonists among the group of medications we use in the future for the primary management of patients with liver disease.”

The authors had no disclosures to report. Carr’s disclosures included relationships with Intercept and Novo Nordisk and research funding from Merck.

A version of this article appeared on Medscape.com.


FROM DDW 2025

Key Blood Proteins Predict MASLD Up to 16 Years in Advance


SAN DIEGO – The presence of five key proteins in the blood was strongly associated with the development of metabolic dysfunction-associated steatotic liver disease (MASLD) as much as 16 years before symptoms appeared, new research showed.

“This represents the first high-performance, ultra-early (16 years) predictive model for MASLD,” said first author Shiyi Yu, MD, resident physician in the department of gastroenterology, Guangdong Provincial People’s Hospital in China.

“The findings could be a game-changer for how we screen for and intervene in liver disease,” Yu said at a press briefing for Digestive Disease Week® (DDW) 2025.

“Instead of waiting for symptoms or irreversible damage, we can [identify] high-risk individuals early and take steps to prevent MASLD from developing, which is particularly important because MASLD often progresses silently until advanced stages,” she added.

MASLD is the most common liver disorder in the world and carries high morbidity and mortality, with a mortality rate roughly double that of people without the disease.

To identify any long-term predictive markers that could be used in simple predictive models, Yu and colleagues evaluated data on 52,952 participants enrolled in the UK Biobank between 2006 and 2010 who did not have MASLD at baseline and were followed up for up to 16.6 years.

Overall, 782 participants were diagnosed with MASLD over the course of the study.

A total of 2,737 blood proteins were analyzed; among them, the five that emerged as robust predictive biomarkers for development of MASLD within 5 years were CDHR2 (area under the curve [AUC] = 0.825), FUOM (AUC = 0.815), KRT18 (AUC = 0.810), ACY1 (AUC = 0.803), and GGT1 (AUC = 0.797).
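An AUC can be read as the probability that a randomly chosen future case scores higher on the marker than a randomly chosen non-case. A small self-contained sketch, using made-up biomarker values rather than study data, makes the definition concrete:

```python
def auc(case_scores, control_scores):
    """Probability a random case outscores a random control (ties count half)."""
    wins = sum(
        (c > n) + 0.5 * (c == n)
        for c in case_scores
        for n in control_scores
    )
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical baseline protein levels, for illustration only.
future_cases = [3.1, 2.8, 2.2, 1.9]   # people who later develop MASLD
stay_healthy = [1.5, 2.0, 1.2, 1.8]   # people who remain disease free
print(auc(future_cases, stay_healthy))  # 0.9375
```

By this reading, an AUC of 0.825 means a future MASLD case would outscore a disease-free individual on that protein about 82.5% of the time.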

Deviations in plasma concentrations of the proteins were observed up to 16 years before MASLD onset, and higher baseline levels were associated with up to a nearly 10-fold higher risk for MASLD (hazard ratios, 7.05-9.81). 

The combination of the five proteins was predictive of incident MASLD at 5 years (AUC = 0.857), at 10 years (AUC = 0.775), and across the full follow-up period (AUC = 0.758).

The combined proteins showed even stronger predictive performance when added to key clinical variables such as BMI and daily exercise, with an accuracy of 90.4% at 5 years and 82.2% at 16 years, “surpassing all existing short-term prediction models,” Yu reported.

Similar results were observed with the predictive model in a separate, smaller cohort of 100 participants in China, “further supporting the robustness of the model and showing it can be effective across diverse populations,” she noted in the press briefing.

 

Potential for Interventions ‘Years Before’ Damage Begins

Yu underscored the potential benefits of informing patients of their risk of MASLD.

“Too often, people do not find out they are at risk for liver disease before they are diagnosed and coping with symptoms,” she said.

A protein-based risk score could “profoundly transform early intervention strategies, triggering personalized lifestyle interventions for high-risk individuals,” she said. 

With obesity, type 2 diabetes, and high cholesterol levels among the key risk factors for MASLD, such personalized interventions could include “counseling on diet, physical activity, and other factors years before liver damage begins, potentially averting disease progression altogether,” Yu noted.

Instead of waiting for abnormal liver function tests or imaging findings, patients could receive more frequent monitoring with annual elastography or ultrasound, for example, she explained.

In addition, “knowing one’s individualized protein-based risk may be more effective than abstract measures such as BMI or liver enzymes in motivating patients, facilitating better patient engagement and adherence,” Yu said.

While noting that more work is needed to understand the biology behind the biomarkers, Yu underscored that “this is a big step toward personalized prevention.”

“By finding at-risk patients early, we hope to help stop MASLD before it starts,” she concluded.

 

Predictive Performance Impressive

Commenting on the study at the press briefing, Loren A. Laine, MD, AGAF, professor of medicine and chief of the Section of Digestive Diseases at the Yale School of Medicine, New Haven, Connecticut, and council chair of DDW 2025, noted that, as far as AUCs go, even a value in the 0.80 range is considered good. “So, for this to have an accuracy up to the 90s indicates a really excellent [predictive] performance,” he explained.

Laine agreed that the study findings have “the potential value to identify individuals at increased risk,” allowing for early monitoring and interventions. 

The interventions “could be either general, such as things like diet and lifestyle, or more specific,” based on the function of these proteins, he added.

Rotonya Carr, MD, the division head of gastroenterology at the University of Washington, Seattle, further highlighted the pressing need for better predictive tools in MASLD.

“The predictions are that if we don’t do anything, as many as 122 million people will be impacted by MASLD” in the US by 2050, she told GI & Hepatology News

“So, I am very excited about this work because we really don’t have anything right now that predicts who is going to get MASLD,” she said. “We are going to need tools like this, where people have information about their future health in order to make decisions.”

MASLD is known to be a significant risk factor for cardiovascular disease (CVD), and Carr speculated that the findings could lead to the types of predictive tools already available for CVD.

“I see this as being akin to what cardiology has had for quite some time, where they have cardiovascular risk disease calculators in which patients or their physicians can enter data and then estimate their risk of developing cardiovascular disease over, for instance, 10 years,” she said.

Laine’s disclosures include consulting and/or relationships with Medtronic, Phathom Pharmaceuticals, Biohaven, Celgene, Intercept, Merck, and Pfizer. Carr’s disclosures include relationships with Intercept and Novo Nordisk and research funding from Merck.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

SAN DIEGO – The presence of five key proteins in the blood was strongly associated with the development of metabolic dysfunction-associated steatotic liver disease (MASLD) as much as 16 years before symptoms appeared, new research showed.

“This represents the first high-performance, ultra-early (16 years) predictive model for MASLD,” said first author Shiyi Yu, MD, resident physician in the department of gastroenterology, Guangdong Provincial People’s Hospital in China.

“The findings could be a game-changer for how we screen for and intervene in liver disease,” Yu said at a press briefing for Digestive Disease Week® (DDW) 2025.

“Instead of waiting for symptoms or irreversible damage, we can [identify] high-risk individuals early and take steps to prevent MASLD from developing, which is particularly important because MASLD often progresses silently until advanced stages,” she added.

MASLD is the most common liver disorder in the world and carries high morbidity and mortality, with roughly double the mortality rate of people without the condition.

To identify long-term predictive markers that could be used in simple risk models, Yu and colleagues evaluated data on 52,952 participants enrolled in the UK Biobank between 2006 and 2010 who did not have MASLD at baseline and were followed up for up to 16.6 years.

Overall, 782 participants were diagnosed with MASLD over the course of the study.

A total of 2,737 blood proteins were analyzed; the five that emerged as robust predictive biomarkers for development of MASLD within 5 years were CDHR2 (area under the curve [AUC] = 0.825), FUOM (AUC = 0.815), KRT18 (AUC = 0.810), ACY1 (AUC = 0.803), and GGT1 (AUC = 0.797).

Deviations in plasma concentrations of the proteins were observed up to 16 years before MASLD onset, with higher baseline levels associated with up to a nearly 10-fold higher risk of MASLD (hazard ratios, 7.05-9.81).

A combination of the five proteins was predictive of incident MASLD at all time frames, including at 5 years (AUC = 0.857), at 10 years (AUC = 0.775), and across the full follow-up (AUC = 0.758).

The combined proteins showed even stronger predictive performance when added to key clinical variables such as BMI and daily exercise, with an accuracy of 90.4% at 5 years and 82.2% at 16 years, “surpassing all existing short-term prediction models,” Yu reported.
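As a rough illustration of how a multi-protein panel can be folded into a single risk score and judged by AUC, the sketch below uses entirely synthetic data and hypothetical weights; the study's actual model, coefficients, and cohort data are not reproduced here.

```python
import math
import random

random.seed(0)

# Hypothetical weights for the five proteins (CDHR2, FUOM, KRT18, ACY1, GGT1);
# illustrative only -- not the study's fitted coefficients.
WEIGHTS = [0.8, 0.7, 0.6, 0.5, 0.4]

def risk_score(proteins):
    """Linear combination of (z-scored) protein levels into one score."""
    return sum(w * p for w, p in zip(WEIGHTS, proteins))

# Simulate a cohort in which a higher score raises the chance of incident MASLD.
cases, controls = [], []
for _ in range(2000):
    x = [random.gauss(0.0, 1.0) for _ in range(5)]
    s = risk_score(x)
    p_disease = 1.0 / (1.0 + math.exp(-(s - 2.0)))  # low baseline incidence
    (cases if random.random() < p_disease else controls).append(s)

def auc(pos, neg):
    """AUC as the Mann-Whitney probability that a case outscores a control."""
    wins = ties = 0
    for a in pos:
        for b in neg:
            if a > b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(f"synthetic combined-panel AUC = {auc(cases, controls):.3f}")
```

Because the simulated outcome is driven by the same score being evaluated, the panel discriminates well above chance (AUC 0.5); an AUC in the 0.8-0.9 range, as reported for the five-protein model, indicates strong separation of future cases from non-cases.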

Similar results were observed with the predictive model in a separate, smaller cohort of 100 participants in China, “further supporting the robustness of the model and showing it can be effective across diverse populations,” she noted in the press briefing.

 

Potential for Interventions ‘Years Before’ Damage Begins

Yu underscored the potential benefits of informing patients of their risk of MASLD.

“Too often, people do not find out they are at risk for liver disease before they are diagnosed and coping with symptoms,” she said.

A protein-based risk score could “profoundly transform early intervention strategies, triggering personalized lifestyle interventions for high-risk individuals,” she said.

With obesity, type 2 diabetes, and high cholesterol levels among key risk factors for MASLD, such personalized interventions could include “counseling on diet, physical activity, and other factors years before liver damage begins, potentially averting disease progression altogether,” Yu noted.

Instead of waiting for abnormal liver function tests or imaging findings, patients could receive more frequent monitoring with annual elastography or ultrasound, for example, she explained.

In addition, “knowing one’s individualized protein-based risk may be more effective than abstract measures such as BMI or liver enzymes in motivating patients, facilitating better patient engagement and adherence,” Yu said.

While noting that more work is needed to understand the biology behind the biomarkers, Yu underscored that “this is a big step toward personalized prevention.”

“By finding at-risk patients early, we hope to help stop MASLD before it starts,” she concluded.

 

Predictive Performance Impressive

Commenting on the study at the press briefing, Loren A. Laine, MD, AGAF, professor of medicine and chief of the Section of Digestive Diseases at the Yale School of Medicine, New Haven, Conn., and council chair of DDW 2025, noted that — as far as AUCs go — even a ranking in the 80% range is considered good. “So, for this to have an accuracy up to the 90s indicates a really excellent [predictive] performance,” he explained.

Laine agreed that the study findings have “the potential value to identify individuals at increased risk,” allowing for early monitoring and interventions. 

The interventions “could be either general, such as things like diet and lifestyle, or more specific,” based on the function of these proteins, he added.

Rotonya Carr, MD, the division head of gastroenterology at the University of Washington, Seattle, further highlighted the pressing need for better predictive tools in MASLD.

“The predictions are that if we don’t do anything, as many as 122 million people will be impacted by MASLD” in the US by 2050, she told GI & Hepatology News.

“So, I am very excited about this work because we really don’t have anything right now that predicts who is going to get MASLD,” she said. “We are going to need tools like this, where people have information about their future health in order to make decisions.”

MASLD is known to be a significant risk factor for cardiovascular disease (CVD), and Carr speculated that the findings could lead to the types of predictive tools already available for CVD.

“I see this as being akin to what cardiology has had for quite some time, where they have cardiovascular risk disease calculators in which patients or their physicians can enter data and then estimate their risk of developing cardiovascular disease over, for instance, 10 years,” she said.

Laine’s disclosures include consulting and/or relationships with Medtronic, Phathom Pharmaceuticals, Biohaven, Celgene, Intercept, Merck, and Pfizer. Carr’s disclosures include relationships with Intercept and Novo Nordisk and research funding from Merck.

A version of this article appeared on Medscape.com.



FROM DDW 2025
