Add-On Niraparib May Slow Hormone-Sensitive Metastatic Prostate Cancer
Adding the poly (ADP-ribose) polymerase (PARP) inhibitor niraparib to abiraterone acetate plus prednisone delayed disease progression and postponed the onset of symptoms in patients with metastatic castration-sensitive prostate cancer with homologous recombination repair (HRR) genetic alterations, according to findings from the AMPLITUDE trial.
An interim analysis also demonstrated an early trend toward improved overall survival in patients who received niraparib.
These findings support adding niraparib to abiraterone acetate plus prednisone “as a new treatment option” in patients with HRR alterations, said study lead Gerhardt Attard, MD, PhD, chair of medical oncology at University College London Cancer Institute, London, England, speaking at the American Society of Clinical Oncology (ASCO) 2025 annual meeting.
The findings also highlight that “it’s going to be incredibly important that patients who get diagnosed with hormone-sensitive prostate cancer are tested to see if they have these mutations, so they can be offered the right therapy at the right time,” outside expert Bradley McGregor, MD, of Dana-Farber Cancer Institute in Boston, said during a press briefing.
Ultimately, “you don’t know if you don’t test,” McGregor added.
About one quarter of patients with metastatic castration-sensitive prostate cancer have alterations in HRR genes, about half of which are BRCA mutations. These patients typically experience faster disease progression and worse outcomes. An androgen receptor pathway inhibitor, such as abiraterone, alongside androgen deprivation therapy with or without docetaxel, is standard therapy for these patients, but “there is still a need for treatments that are tailored to patients whose tumors harbor HRR alterations,” Attard said in a press release.
Adding niraparib to this standard regimen could help improve survival in these patients.
In 2023, the FDA approved niraparib and abiraterone acetate to treat BRCA-mutated metastatic castration-resistant prostate cancer, after findings from the MAGNITUDE study demonstrated improved progression-free survival (PFS).
The phase 3 AMPLITUDE trial set out to evaluate whether this combination would yield similar survival benefits in metastatic castration-sensitive prostate cancer with HRR mutations.
In the study, 696 patients (median age, 68 years) with metastatic castration-sensitive prostate cancer and one or more HRR gene alterations were randomly allocated (1:1) to niraparib with abiraterone acetate plus prednisone or placebo with abiraterone acetate plus prednisone.
Exclusion criteria included any prior PARP inhibitor therapy or any androgen receptor pathway inhibitor other than abiraterone. Eligible patients could have received at most 6 months of androgen deprivation therapy, ≤ 6 cycles of docetaxel, ≤ 45 days of abiraterone acetate plus prednisone, and palliative radiation.
Baseline characteristics were well balanced between the groups. Just over half the patients in each group had BRCA1 or BRCA2 alterations. The majority had an Eastern Cooperative Oncology Group (ECOG) performance status of 0 but high-risk features, with a predominance of synchronous metastatic disease and high-volume metastatic disease. About 16% had received prior docetaxel, in keeping with real-world data, Attard noted.
At a median follow-up of 30.8 months, niraparib plus standard therapy led to a significant 37% reduction in the risk for radiographic progression or death. The median radiographic PFS (rPFS) was not reached in the niraparib group vs 29.5 months in the placebo group (hazard ratio [HR], 0.63; P = .0001).
Patients with BRCA alterations, in particular, showed the greatest benefit, with niraparib reducing the risk for radiographic progression or death by 48% compared to placebo (median rPFS not reached vs 26 months; HR, 0.52; P < .0001).
On the key secondary endpoint of time to symptomatic progression, adding niraparib led to a “statistically and clinically” significant benefit: a 50% reduction in the risk for symptomatic progression in the full population (HR, 0.50) and a 56% lower risk in the BRCA-mutant group (HR, 0.44).
The first interim analysis also showed an early trend toward improved overall survival favoring the niraparib combination, with a reduction in the risk for death of 21% in the HRR-mutant population (HR, 0.79; P = .10) and 25% (HR, 0.75; P = .15) in the BRCA-mutant population.
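The percentage risk reductions quoted throughout these results are simply 1 minus the hazard ratio. A minimal Python sketch makes that arithmetic explicit, using the HR values reported above:

```python
# Convert hazard ratios reported for AMPLITUDE into percent risk reductions.
hazard_ratios = {
    "rPFS, full HRR population": 0.63,
    "rPFS, BRCA subgroup": 0.52,
    "symptomatic progression, full population": 0.50,
    "overall survival, HRR population (interim)": 0.79,
}

for endpoint, hr in hazard_ratios.items():
    reduction = (1 - hr) * 100  # e.g., HR 0.63 -> 37% lower risk
    print(f"{endpoint}: HR {hr:.2f} = {reduction:.0f}% risk reduction")
```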
Grade 3/4 adverse events were more common in the niraparib group than in the placebo group (75% vs 59%), with anemia and hypertension being the most common. However, treatment discontinuations due to adverse events remained low (15% with niraparib vs 10% with placebo).
Attard noted, however, that the interim analysis took place when only about half the number of deaths required for the final analysis had occurred. Still, “in my view, there’s a clear trend for favoring survival in the patients randomized to niraparib,” he told attendees.
‘Exciting News’ for Patients
The AMPLITUDE results are “really exciting news for our patients,” McGregor said.
Considering the poor prognosis of patients with metastatic castration-sensitive prostate cancer, “it is reasonable to prioritize early access to PARP inhibitors for these men, at least for the ones with BRCA mutations,” added ASCO discussant Joaquin Mateo, MD, PhD, with Vall d’Hebron Institute of Oncology, Barcelona, Spain.
However, Mateo explained, “I think that for patients with mutations in the other genes, I will be more prudent, and I’ll be on the lookout for the overall survival data to mature.”
The other key conclusion, Mateo said, is that genomic profiling “should be moved earlier into the patient course, and I am confident that embedding genomic profiling into the diagnostic evaluations of metastatic prostate cancer is also going to result in better quality of testing, more efficacious testing, and also a more equitable framework of access to testing for patients.”
This study was funded by Janssen Research & Development, LLC. Attard and Mateo disclosed relationships with Janssen and other pharmaceutical companies. McGregor disclosed relationships with Arcus Biosciences, Astellas, AVEO, Bristol Myers Squibb, Daiichi Sankyo, AstraZeneca, and other companies.
A version of this article first appeared on Medscape.com.
FROM ASCO 2025
Walnuts Cut Gut Permeability in Obesity
Eating walnuts reduced gut permeability and favorably shifted gut microbes and metabolites in adults with obesity, a small study showed.
“Less than 10% of adults are meeting their fiber needs each day, and walnuts are a source of dietary fiber, which helps nourish the gut microbiota,” study coauthor Hannah Holscher, PhD, RD, associate professor of nutrition at the University of Illinois at Urbana-Champaign, told GI & Hepatology News.
Holscher and her colleagues previously conducted a study on the effects of walnut consumption on the human intestinal microbiota “and found interesting results,” she said. Among 18 healthy men and women with a mean age of 53 years, “walnuts enriched intestinal microorganisms, including Roseburia that provide important gut-health promoting attributes, like short-chain fatty acid production. We also saw lower proinflammatory secondary bile acid concentrations in individuals that ate walnuts.”
The current study, presented at NUTRITION 2025 in Orlando, Florida, found similar benefits among 30 adults with obesity but without diabetes or gastrointestinal disease.
Walnut Halves, Walnut Oil, Corn Oil Compared
The researchers aimed to determine the impact of walnut consumption on the gut microbiome, serum and fecal bile acid profiles, systemic inflammation, and oral glucose tolerance to a mixed-meal challenge.
Participants were enrolled in a randomized, controlled, crossover, complete feeding trial with three 3-week conditions, each identical except for walnut halves (WH), walnut oil (WO), or corn oil (CO) in the diet. A 3-week washout separated each condition.
“This was a fully controlled dietary feeding intervention,” Holscher said. “We provided their breakfast, lunch, snacks and dinners — all of their foods and beverages during the three dietary intervention periods that lasted for 3 weeks each. Their base diet consisted of typical American foods that you would find in a grocery store in central Illinois.”
Fecal samples were collected on days 18-20. On day 20, participants underwent a 6-hour mixed-meal tolerance test (75 g glucose + treatment) with a fasting blood draw followed by blood sampling every 30 minutes.
The fecal microbiome and microbiota were assessed using metagenomic and amplicon sequencing, respectively. Fecal microbial metabolites were quantified using gas chromatography-mass spectrometry.
Blood glucose, insulin, and inflammatory biomarkers (interleukin-6, tumor necrosis factor-alpha, C-reactive protein, and lipopolysaccharide-binding protein) were quantified. Fecal and circulating bile acids were measured via liquid chromatography tandem mass spectrometry.
Gut permeability was assessed by quantifying 24-hour urinary excretion of orally ingested sucralose and erythritol on day 21.
Linear mixed-effects models and repeated measures ANOVA were used for the statistical analysis.
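As a rough illustration of what such an analysis can look like for a crossover feeding trial, here is a minimal sketch using statsmodels in Python. The variable names (subject, condition, isobutyrate) and values are hypothetical, not the study’s dataset; the point is the structure: diet condition as a fixed effect and a per-subject random intercept for the repeated measures.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per diet condition.
data = pd.DataFrame({
    "subject":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition":   ["WH", "WO", "CO"] * 4,
    "isobutyrate": [5.2, 7.0, 7.9, 5.6, 7.3, 7.6,
                    5.4, 7.2, 7.8, 5.5, 7.1, 7.7],  # µmol/g, made up
})

# Linear mixed-effects model: condition as fixed effect,
# random intercept per subject to account for repeated measures.
model = smf.mixedlm("isobutyrate ~ condition", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```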
The team found that the relative abundance of Roseburia spp was greatest following WH (3.9%) vs WO (1.6%) and CO (1.9%); Lachnospiraceae UCG-001 and UCG-004 were also greatest with WH vs WO and CO.
Fecal isobutyrate concentrations with WH (5.41 µmol/g) were lower than with WO (7.17 µmol/g) and CO (7.77 µmol/g). Similarly, fecal isovalerate concentrations were lowest with WH (7.84 µmol/g) vs WO (10.3 µmol/g) and CO (11.6 µmol/g).
In contrast, indoles were highest with WH (36.8 µmol/g) vs WO (6.78 µmol/g) and CO (8.67 µmol/g).
No differences in glucose concentrations were seen among groups. The 2-hour area under the curve (AUC) for insulin was lower with WH (469 µIU/mL/min) and WO (494 µIU/mL/min) than with CO (604 µIU/mL/min).
The 4-hour AUC for glycolithocholic acid was lower with WH vs WO and CO. Furthermore, sucralose recovery was lowest following WH (10.5) vs WO (14.3) and CO (14.6).
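For context, an AUC of this kind is typically computed from the serial blood draws (every 30 minutes, per the protocol above) with the trapezoidal rule. A minimal sketch, assuming that method and using made-up insulin values rather than study data:

```python
# Trapezoidal-rule AUC for a 2-hour insulin curve sampled every 30 minutes.
# The insulin values below are illustrative, not data from the study.
times = [0, 30, 60, 90, 120]             # minutes after the mixed meal
insulin = [8.0, 55.0, 70.0, 40.0, 20.0]  # µIU/mL at each time point

auc = 0.0
for i in range(1, len(times)):
    dt = times[i] - times[i - 1]
    auc += 0.5 * (insulin[i] + insulin[i - 1]) * dt  # trapezoid area

print(f"2-hour insulin AUC: {auc:.0f} µIU/mL x min")
```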
“Our current efforts are focused on understanding connections between plasma bile acids and glycemic control (ie, blood glucose and insulin concentrations),” Holscher said. “We are also interested in studying individualized or personalized responses, since people had different magnitudes of responses.”
In addition, she said, “as the gut microbiome is one of the factors that can underpin the physiological response to the diet, we are interested in determining if there are microbial signatures that are predictive of glycemic control.”
Because the research is still in its early stages, Holscher for now simply encourages people to eat a variety of fruits, vegetables, whole grains, legumes, and nuts to meet their daily fiber recommendations and support their gut microbiome.
This study was funded by a USDA NIFA grant. No competing interests were reported.
A version of this article appeared on Medscape.com.
AI Algorithm Predicts Transfusion Need, Mortality Risk in Acute GI Bleeds
SAN DIEGO — A generative artificial intelligence technique called trajectory flow matching (TFM) accurately predicted the need for red blood cell transfusion and the risk for in-hospital death among ICU patients with acute gastrointestinal (GI) bleeding, researchers reported at Digestive Disease Week® (DDW) 2025.
Acute GI bleeding is the most common cause of digestive disease–related hospitalization, with an estimated 500,000 hospital admissions annually. Predicting the need for red blood cell transfusion in the first 24 hours may improve resuscitation and decrease both morbidity and mortality.
However, an existing clinical score known as the Rockall score does not perform well for predicting mortality, Xi (Nicole) Zhang, an MD-PhD student at McGill University, Montreal, Quebec, Canada, told attendees at DDW. With an area under the curve of only 0.65-0.75, better prediction tools are needed, said Zhang, whose coresearchers included Dennis Shung, MD, MHS, PhD, director of Applied Artificial Intelligence at Yale University School of Medicine, New Haven, Connecticut.
“We’d like to predict multiple outcomes in addition to mortality,” said Zhang, who is also a student at the Mila-Quebec Artificial Intelligence Institute.
As a result, the researchers turned to the TFM approach, applying it to ICU patients with acute GI bleeding to predict both the need for transfusion and in-hospital mortality risk. The stakes are high: all-cause mortality in this setting runs up to 11%, according to a 2020 study by James Y. W. Lau, MD, and colleagues; the rebleeding rate for nonvariceal upper GI bleeds is up to 10.4%; and, Zhang said, the rebleeding rate for variceal upper GI bleeding is up to 65%.
The AI method the researchers used outperformed a standard deep learning model at predicting the need for transfusion and estimating mortality risk.
Defining the AI Framework
“Probabilistic flow matching is a class of generative artificial intelligence that learns how a simple distribution becomes a more complex distribution with ordinary differential equations,” Zhang told GI & Hepatology News. “For example, if you had a few lines and shapes you could learn how it could become a detailed portrait of a face. In our case, we start with a few blood pressure and heart rate measurements and learn the pattern of blood pressures and heart rates over time, particularly if they reflect clinical deterioration with hemodynamic instability.”
Another way to think about the underlying algorithm, Zhang said, is to think about a river with boats where the river flow determines where the boats end up. “We are trying to direct the boat to the correct dock by adjusting the flow of water in the canal. In this case we are mapping the distribution with the first few data points to the distribution with the entire patient trajectory.”
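As a loose illustration of the idea Zhang describes (learning a velocity field that carries a simple distribution toward a complex one), here is a minimal flow-matching training loop in PyTorch. It is a generic toy sketch on 2D points, not the researchers’ model or data, and the target distribution is invented for the example:

```python
import torch
import torch.nn as nn

# Toy flow matching: learn a velocity field v(x, t) that transports
# standard Gaussian noise (x0) toward samples from a target distribution (x1).
torch.manual_seed(0)

velocity = nn.Sequential(          # small MLP taking (x, t) -> velocity
    nn.Linear(3, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 2),
)
opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

def sample_target(n):
    # Stand-in "complex" distribution: a shifted, squashed Gaussian.
    return torch.randn(n, 2) * torch.tensor([0.3, 1.0]) + torch.tensor([2.0, 0.0])

for step in range(2000):
    x0 = torch.randn(256, 2)        # simple source distribution
    x1 = sample_target(256)         # target samples
    t = torch.rand(256, 1)          # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1      # point on the straight-line path
    target_v = x1 - x0              # velocity along that path
    pred_v = velocity(torch.cat([xt, t], dim=1))
    loss = ((pred_v - target_v) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, integrate dx/dt = v(x, t) from t=0 to t=1 (Euler steps here).
x = torch.randn(5, 2)
with torch.no_grad():
    for i in range(100):
        t = torch.full((5, 1), i / 100)
        x = x + velocity(torch.cat([x, t], dim=1)) / 100
print(x)  # points should now resemble the target distribution
```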
The information gained, she said, could be helpful in timing endoscopic evaluation or allocating red blood cell products for emergent transfusion.
Study Details
The researchers evaluated a cohort of 2602 patients admitted to the ICU, identified from the publicly available MIMIC-III database. They divided the patients into a training set of 2342 patients and an internal validation set of 260 patients. Input variables were severe liver disease comorbidity, administration of vasopressor medications, mean arterial blood pressure, and heart rate over the first 24 hours.
Excluded was hemoglobin, since the point was to test the trajectory of hemodynamic parameters independent of hemoglobin thresholds used to guide red blood cell transfusion.
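To make that setup concrete, here is a sketch of how such inputs might be assembled from an ICU database. The column names are hypothetical, not the actual MIMIC-III schema, and the hemoglobin exclusion mirrors the design choice described above:

```python
import pandas as pd

# Hypothetical ICU vitals table: one row per patient per measurement time.
vitals = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "hour":       [0, 12, 24, 0, 12, 24],
    "map_mmhg":   [72, 65, 58, 80, 78, 81],    # mean arterial pressure
    "hr_bpm":     [95, 110, 121, 82, 85, 80],  # heart rate
})
static = pd.DataFrame({
    "patient_id":      [1, 2],
    "severe_liver_dz": [1, 0],  # comorbidity flag
    "on_vasopressor":  [1, 0],
})

# Keep only the first 24 hours of hemodynamics; hemoglobin is deliberately
# left out so the model cannot lean on transfusion-triggering thresholds.
first_day = vitals[vitals["hour"] <= 24]
features = first_day.merge(static, on="patient_id")
print(features)
```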
The outcome measures were administration of packed red blood cell transfusion within 24 hours and all-cause hospital mortality.
The TFM was more accurate than a standard deep learning model in predicting red blood cell transfusion (93.6% vs 43.2%; P ≤ .001). It was also more accurate at predicting all-cause in-hospital mortality (89.5% vs 42.5%; P = .01).
The researchers concluded that the TFM approach was able to predict the hemodynamic trajectories of patients with acute GI bleeding (defined as deviation from the measured mean arterial pressure and heart rate) and that it outperformed the baseline model.
Expert Perspective
“This is an exciting proof-of-concept study that shows generative AI methods may be applied to complex datasets in order to improve on our current predictive models and improve patient care,” said Jeremy Glissen Brown, MD, MSc, an assistant professor of medicine and a practicing gastroenterologist at Duke University who has published research on the use of AI in clinical practice. He reviewed the study for GI & Hepatology News but was not involved in the research.
“Future work will likely look into the implementation of a version of this model on real-time data,” he said. “We are at an exciting inflection point in predictive models within GI and clinical medicine. Predictive models based on deep learning and generative AI hold the promise of improving how we predict and treat disease states, but the excitement being generated with studies such as this needs to be balanced with the trade-offs inherent to the current paradigm of deep learning and generative models compared to more traditional regression-based models. These include many of the same ‘black box’ explainability questions that have risen in the age of convolutional neural networks as well as some method-specific questions due to the continuous and implicit nature of TFM.”
Elaborating on that, Glissen Brown said: “TFM, like many deep learning techniques, raises concerns about explainability that we’ve long seen with convolutional neural networks — the ‘black box’ problem, where it’s difficult to interpret exactly how and why the model arrives at a particular decision. But TFM also introduces unique challenges due to its continuous and implicit formulation. Since it often learns flows without explicitly defining intermediate representations or steps, it can be harder to trace the logic or pathways it uses to connect inputs to outputs. This makes standard interpretability tools less effective and calls for new techniques tailored to these continuous architectures.”
“This approach could have a real clinical impact,” said Robert Hirten, MD, associate professor of medicine and artificial intelligence, Icahn School of Medicine at Mount Sinai, New York City, who also reviewed the study. “Accurately predicting transfusion needs and mortality risk in real time could support earlier, more targeted interventions for high-risk patients. While these findings still need to be validated in prospective studies, it could enhance ICU decision-making and resource allocation.”
“For the practicing gastroenterologist, we envision this system could help them figure out when to perform endoscopy in a patient admitted with acute gastrointestinal bleeding in the ICU at very high risk of exsanguination,” Zhang told GI & Hepatology News.
The approach, the researchers said, will be useful in identifying unique patient characteristics, making it possible to identify high-risk patients and leading to more personalized medicine.
Hirten, Zhang, and Shung had no disclosures. Glissen Brown reported consulting relationships with Medtronic, OdinVision, Doximity, and Olympus. The National Institutes of Health funded this study.
A version of this article appeared on Medscape.com.
FROM DDW 2025
Chatbot Helps Users Adopt a Low FODMAP Diet
SAN DIEGO — Dietary advice to limit fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs) has been shown to be effective in easing bloating and abdominal pain, especially in patients with irritable bowel syndrome (IBS), but limited availability of dietitians makes delivering this advice challenging. Researchers from Thailand have successfully enlisted a chatbot to help.
In a randomized controlled trial, they found that chatbot-assisted dietary advice with brief guidance effectively reduced high-FODMAP intake and bloating severity and improved dietary knowledge, particularly in patients with bothersome bloating.
“Chatbot-assisted dietary advice for FODMAPs restriction was feasible and applicable in patients with bloating symptoms that had baseline symptoms of moderate severity,” study lead Pochara Somvanapanich, with the Division of Gastroenterology, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand, told GI & Hepatology News.
Somvanapanich, who developed the chatbot algorithm, presented the study results at Digestive Disease Week (DDW) 2025.
More Knowledge, Less Bloating
The trial enrolled 86 adults with disorders of gut-brain interaction who had experienced bloating symptoms for more than 6 months and consumed more than seven high-FODMAP items per week. Half of them had IBS.
At baseline, gastrointestinal (GI) symptoms and the ability to identify FODMAPs were assessed. All participants received a 5-minute consultation on FODMAPs avoidance from a GI fellow and were randomly allocated (stratified by IBS diagnosis and education) into two groups.
The chatbot-assisted group received real-time dietary advice via a chatbot, which helped them identify high-, low-, and non-FODMAP foods from a list of more than 300 ingredients and dishes from Thai and Western cuisines.
The control group received only brief advice on high-FODMAP restriction. Both groups used a diary app to log food intake and postprandial symptoms. Baseline bloating, abdominal pain, and global symptom severity were similar between the two groups. Data on 64 participants (32 in each group) were analyzed.
After 4 weeks, significantly more people in the chatbot group than the control group responded — achieving a 30% or greater reduction in daily worst bloating, abdominal pain or global symptoms (19 [59%] vs 10 [31%], P < .05). Responder rates were similar in the IBS and non-IBS subgroups.
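The primary comparison above is a difference in responder proportions (19 of 32 vs 10 of 32). As a minimal sketch of how such a 2 × 2 comparison can be tested, assuming a standard chi-square test (the authors’ exact statistical method is not stated here):

```python
from scipy.stats import chi2_contingency

# Responders vs nonresponders in each arm, from the counts reported above.
table = [
    [19, 32 - 19],  # chatbot group: responders, nonresponders
    [10, 32 - 10],  # control group
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chatbot responder rate: {19/32:.0%}, control: {10/32:.0%}")
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")
```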
Subgroup analysis revealed significant differences between groups only for participants with bothersome bloating, not those with mild bloating.
In those with bothersome bloating, the chatbot group had a higher response rate (69.5% vs 36.3%) and fewer bloating symptoms (P < .05). They also had a greater reduction in high-FODMAP intake (10 vs 23 items/week) and demonstrated improved knowledge in identifying FODMAPs (P < .05).
“Responders in a chatbot group consistently engaged more with the app, performing significantly more weekly item searches than nonresponders (P < .05),” the authors noted in their conference abstract.
“Our next step is to develop the chatbot-assisted approach for the reintroduction and personalization phase based on messenger applications (including Facebook Messenger and other messaging platforms),” Somvanapanich told GI & Hepatology News.
“Once we’ve gathered enough data to confirm these are working effectively, we definitely plan to create a one-stop service application for FODMAPs dietary advice,” Somvanapanich added.
Lack of Robust Data on Digital GI Health Apps
Commenting on this research for GI & Hepatology News, Sidhartha R. Sinha, MD, Director of Digital Health and Innovation, Division of Gastroenterology and Hepatology, Stanford University in Stanford, California, noted that there is a “notable lack of robust data supporting digital health tools in gastroenterology. Despite hundreds of apps available, very few are supported by well-designed trials.”
“The study demonstrated that chatbot-assisted dietary advice significantly improved bloating symptoms, reduced intake of high-FODMAP foods, and enhanced patients’ dietary knowledge compared to brief dietary counseling alone, especially in those with bothersome symptoms,” said Sinha, who wasn’t involved in the study.
“Patients actively used the chatbot to manage their symptoms, achieving a higher response rate than those in the control arm who received brief counseling on avoiding high-FODMAP food,” he noted.
Sinha said in his practice at Stanford, “in the heart of Silicon Valley,” patients do use digital resources to manage their GI symptoms, including diseases like IBS and inflammatory bowel disease (IBD) — and he believes this is “increasingly common nationally.”
“However, the need for evidence-based tools is critical and the lack here often prevents many practitioners from regularly recommending them to patients. This study aligns well with clinical practice, and supports the use of this particular app to improve IBS symptoms, particularly when access to dietitians is limited. These results support chatbot-assisted dietary management as a feasible, effective, and scalable approach to patient care,” Sinha told GI & Hepatology News.
The study received no commercial funding. Somvanapanich and Sinha had no relevant disclosures.
A version of this article appeared on Medscape.com.
FROM DDW 2025
Blood-Based Test May Predict Crohn’s Disease 2 Years Before Onset
SAN DIEGO — Crohn’s disease (CD) has become more common in the United States, and an estimated 1 million Americans have the condition. Still, much is unknown about how to evaluate the individual risk for the disease.
“It’s pretty much accepted that Crohn’s disease does not begin at diagnosis,” said Ryan Ungaro, MD, associate professor of medicine at the Icahn School of Medicine at Mount Sinai, New York City, speaking at Digestive Disease Week (DDW)® 2025.
Although individual blood markers have been associated with the future risk for CD, what’s needed, he said, is to understand which combination of biomarkers is most predictive.
Now, Ungaro and colleagues have developed an integrative, blood-based risk score that combines multiple serum biomarkers to predict CD onset within 2 years.
It’s an early version that will likely be further improved and needs additional validation, Ungaro told GI & Hepatology News.
“Once we can accurately identify individuals at risk for developing Crohn’s disease, we can then imagine a number of potential interventions,” Ungaro said.
Approaches would vary depending on how far away the onset is estimated to be. For people who likely wouldn’t develop disease for many years, one intervention might be close monitoring to enable diagnosis in the earliest stages, when treatment works best, he said. Someone at a high risk of developing CD in the next 2 or 3 years, on the other hand, might be offered a pharmaceutical intervention.
Developing and Testing the Risk Score
To develop the risk score, Ungaro and colleagues analyzed data from 200 patients with CD and 100 healthy control participants in PREDICTS, a nested case-control study of active US military service members. The study sits within the larger Department of Defense Serum Repository, which began in 1985 and holds more than 62.5 million samples, all stored at −30 °C.
The researchers collected serum samples at four timepoints, beginning 6 or more years before the diagnosis. They assayed antimicrobial antibodies using the Prometheus Laboratories platform, proteomic markers using the Olink inflammation panel, and anti–granulocyte macrophage colony-stimulating factor autoantibodies using enzyme-linked immunosorbent assay.
Participants (median age, 33 years for both groups) were randomly divided into equally sized training and testing sets. In both groups, 83% of patients were White and about 90% were men.
Time-varying trajectories of marker abundance were estimated for each biomarker. Then, logistic regression modeled disease status as a function of each marker at different timepoints, and multivariate modeling was performed via logistic LASSO regression.
A risk score to predict CD onset within 2 years was developed. Prediction models were fit on the training set, and predictive performance was evaluated on the testing set using receiver operating characteristic (ROC) curves and area under the curve (AUC).
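No analysis code was presented; the following is a minimal sketch of this kind of pipeline (L1-penalized logistic regression scored by ROC and AUC) using scikit-learn, with synthetic data standing in for the PREDICTS biomarkers:

```python
# Minimal sketch of an L1-penalized (LASSO) logistic regression risk score;
# the data, dimensions, and names here are synthetic assumptions, not the
# PREDICTS analysis code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))       # 300 subjects x 40 serum biomarkers
y = rng.integers(0, 2, size=300)     # 1 = developed CD within 2 years

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)

# The L1 penalty shrinks uninformative markers to exactly zero, which is how
# a sparse panel such as a 10-biomarker score can emerge from a larger assay.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

risk_score = model.predict_proba(X_test)[:, 1]
print("markers retained:", int(np.sum(model.coef_ != 0)))
print("test-set AUC:", roc_auc_score(y_test, risk_score))
```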
Blood proteins and antibodies have differing associations with CD depending on the time before diagnosis, the researchers found.
The integrative model to predict CD onset within 2 years incorporated 10 biomarkers associated significantly with CD onset.
The AUC for the model was 0.87 (considered good, with 1 indicating perfect discrimination). It produced a specificity of 99% and a positive predictive value of 84%.
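To see how a 99% specificity can sit alongside an 84% positive predictive value, a quick Bayes-rule check helps; the sensitivity and prevalence below are assumed values, since the presentation reported only AUC, specificity, and PPV:

```python
# Bayes-rule check relating PPV to sensitivity, specificity, and prevalence.
# The sensitivity (0.50) and prevalence (0.10) are assumptions for illustration.
def ppv(sens: float, spec: float, prev: float) -> float:
    tp = sens * prev            # true positives per person screened
    fp = (1 - spec) * (1 - prev)  # false positives per person screened
    return tp / (tp + fp)

# With 99% specificity, even modest sensitivity yields a high PPV once
# prevalence is appreciable, as in an enriched at-risk cohort:
print(round(ppv(sens=0.50, spec=0.99, prev=0.10), 2))  # ~0.85, near the reported 84%
```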
The researchers stratified the model scores into quartiles and found that the CD incidence within 2 years increased from 2% in the first quartile to 57.7% in the fourth. The relative risk of developing CD for individuals in the top quartile vs those in the lower quartiles was 10.4.
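A quick arithmetic check shows these figures hang together (this calculation is illustrative, not from the abstract):

```python
# If the top quartile's 2-year incidence is 57.7% and its relative risk vs the
# lower quartiles is 10.4, the implied incidence outside the top quartile is:
print(round(0.577 / 10.4, 3))  # ~0.055, i.e., roughly 5.5%
```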
The serologic and proteomic markers show dynamic changes years before the diagnosis, Ungaro said.
A Strong Start
The research represents “an ambitious and exciting frontier for the future of IBD [inflammatory bowel disease] care,” said Victor G. Chedid, MD, MS, consultant and assistant professor of medicine at Mayo Clinic, Rochester, Minnesota, who reviewed the findings but was not involved in the study.
Currently, physicians treat IBD once it manifests, and it’s difficult to predict who will get CD, he said.
The integrative model’s AUC of 0.87 is impressive, and its specificity and positive predictive value levels show it is highly accurate in predicting the onset of CD within 2 years, Chedid added.
Further validation in larger and more diverse populations is needed, Chedid said, but he sees the potential for the model to be practical in clinical practice.
“Additionally, the use of blood-based biomarkers makes the model relatively noninvasive and easy to implement in a clinical setting,” he said.
Now, the research goal is to understand the best biomarkers for characterizing the different preclinical phases of CD and to test different interventions in prevention trials, Ungaro told GI & Hepatology News.
A few trials are planned or ongoing, he noted. The PIONIR trial will look at the impact of a specific diet on the risk of developing CD, and the INTERCEPT trial aims to develop a blood-based risk score that can identify individuals with a high risk of developing CD within 5 years after initial evaluation.
Ungaro reported being on the advisory board of and/or receiving speaker or consulting fees from AbbVie, Bristol Myers Squibb, Celltrion, ECM Therapeutics, Genentech, Janssen, Eli Lilly, Pfizer, Roivant, Sanofi, and Takeda. Chedid reported having no relevant disclosures.
The PROMISE Consortium is funded by the Helmsley Charitable Trust.
A version of this article appeared on Medscape.com.
FROM DDW 2025
Winning Strategies to Retain Private Practice Gastroenterologists
SAN DIEGO — With the recently updated recommendations by the US Preventive Services Task Force lowering the age for colorectal cancer screening to 45 instead of 50, an additional 19 million patients now require screening, Asma Khapra, MD, AGAF, a gastroenterologist at Gastro Health in Fairfax, Virginia, told attendees at Digestive Disease Week® (DDW) 2025.
That change, coupled with the expected shortage of gastroenterologists, means one thing: The current workforce can’t meet patient demand, she said.
The private practice model is already declining, she said. The fraction of US gastroenterologists in “fully independent” private practice was about 30% in 2019, Khapra noted. Then, “COVID really changed the landscape even more.” By 2022, “that number has shrunk to 13%.” Meanwhile, 67% are employed gastroenterologists (not in private practice), 7% work in large group practices, and 13% are private equity (PE) backed.
That makes effective retention strategies crucial for private practices, Khapra said. She first addressed the common attractions of private practice, then the challenges, and finally the winning strategies to retain physicians and maintain a viable private practice gastroenterology workforce.
The Attractions of Private Practice
The reasons for choosing private practice are many, Khapra said, including:
- Autonomy,
- Flexibility,
- Competitive compensation,
- Ownership mindset,
- Partnership paths, and
- Work-life balance including involvement in community and culture.
On the other hand, private practices have unique challenges, including:
- Administrative burdens such as EHR documentation, paperwork, prior authorizations, and staffing issues,
- Financial pressures, including competition with the employment packages offered by hospitals, as reimbursements continue to drop and staffing costs increase,
- Burnout,
- Variety of buy-ins and partnership tracks,
- Limited career development, and
- The strains of aging and endoscopy. “We used to joke in our practice that at any given time, three staff members are in physical therapy due to injuries and disabilities.”
Employing the Iceberg Model
One strategy, Khapra said, is to follow Edward T. Hall’s Iceberg Model of Culture, which focuses on the importance of both visible and invisible elements.
“The key to retention in private practice is to develop a value system where everyone is treated well and respected and compensated fairly,” she said. “That doesn’t mean you split the pie [equally].”
“Visible” elements of the model include the physical environment, policies and practices, and symbols and behaviors, she said. Under the surface, the “invisible” elements are shared values, perceptions and attitudes, leadership style, conflict resolution, decision making, and unwritten rules.
The key, she said, is to give physicians an actual voice in decision making and to avoid favoritism, thus avoiding comments such as “Why do the same two people always get the prime scoping blocks?”
Financial transparency is also important, Khapra said, and people want flexibility without it being labeled special treatment. She offered several practical suggestions for addressing these invisible elements.
For instance, she suggested paying for activities outside the practice that physicians do, such as serving on committees. If the practice can’t afford that, she suggested asking the affiliated hospitals to do so, noting that such an initiative can often build community support.
Paying more attention to early associates than is typical can also benefit the practice, Khapra said. “So much effort is made to recruit them, and then once there, we’re on to the next [recruits].” Instead, she suggested, “pay attention to their needs.”
Providing support to physicians who are injured is also crucial and can foster a community culture, she said. For example, one Gastro Health physician was out for 4 weeks due to complications from surgery. “Everyone jumped in” to help fill the injured physician’s shifts, she said, reassuring the physician that the money would be figured out later. “That’s the culture you want to instill.”
To prevent burnout, another key to retaining physicians, “you have to provide support staff.” And offering good benefits, including parental and maternal leave and disability benefits, is also crucial, Khapra said. Consider practices such as having social dinners, another way to build a sense of community.
Finally, bring in national and local gastroenterologist organizations for discussions, including advocating for fair reimbursement for private practice. Consider working with the Digestive Health Physicians Alliance, which describes itself as the voice of independent gastroenterology, she suggested.
More Perspectives
Jami Kinnucan, MD, AGAF, a gastroenterologist and associate professor of medicine at Mayo Clinic, Jacksonville, Florida, spoke about optimizing recruitment of young gastroenterologists and provided perspective on Khapra’s talk.
“I think there’s a lot of overlap” with her topic and retaining private practice gastroenterologists, she said in an interview with GI & Hepatology News. Most important, she said, is having an efficient system in which the administrative flow is left to digital tools or other staff, not physicians. “That will also help to reduce burnout,” she said, and allow physicians to do what they most want to do, which is to focus on providing care to patients.
“People want to feel valued for their work,” she agreed. “People want opportunity for career development, opportunities for growth.”
As gastroenterologists age, flexibility is important, as it is in general for all physicians, Kinnucan said. She suggested schedule flexibility as one approach. For instance, “if I tell 10 providers, ‘I need you to see 100 patients this week, but you can do it however you want,’ that promotes flexibility. They might want to see all of them on Monday and Tuesday, for instance. If you give people choice and autonomy, they are more likely to feel like they are part of the decision.”
How do you build a high-functioning team? “You do it by letting them operate autonomously,” and “you let people do the things they are really excited about.” And always, as Khapra said, focus on the invisible elements that are so crucial.
Khapra and Kinnucan had no relevant disclosures. Khapra received no funding for her presentation.
A version of this article appeared on Medscape.com.
FROM DDW 2025
Blood Detection Capsule Helpful in Suspected Upper GI Bleeding
SAN DIEGO — A swallowable capsule that detects blood in the upper gastrointestinal (GI) tract influenced clinical management in most patients with suspected upper GI bleeding and helped some avoid endoscopy altogether, a study found.
Notably, patients with negative capsule results had shorter hospital stays and lower acuity markers, and in more than one third of cases, an esophagogastroduodenoscopy (EGD) was avoided altogether without any observed adverse events or readmissions, the study team found.
“Our study shows that this novel capsule that detects blood in the upper GI tract (PillSense) was highly sensitive and specific (> 90%) for detecting recent or active upper GI blood, influenced clinical management in 80% of cases and allowed about one third of patients to be safely discharged from the emergency department, with close outpatient follow-up,” Linda Lee, MD, AGAF, medical director of endoscopy, Brigham and Women’s Hospital and associate professor of medicine, Harvard Medical School, Boston, told GI & Hepatology News.
The study was presented at Digestive Disease Week® (DDW) 2025.
Real-World Insights
EGD is the gold standard for diagnosing suspected upper GI bleeding, but limited access to timely EGD complicates diagnosis and resource allocation.
Approved by the US Food and Drug Administration, PillSense (EnteraSense) is an ingestible capsule with a reusable receiver that provides a rapid, noninvasive method for detecting upper GI bleeding. The capsule analyzes light absorption to identify blood and transmits the result within 10 minutes.
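EnteraSense has not published its detection algorithm; purely to illustrate the stated principle, namely that blood absorbs certain wavelengths far more strongly than other gastric contents, a toy threshold classifier might look like this (the wavelengths, threshold, and names are all assumptions):

```python
# Illustrative toy classifier over optical absorbance readings; not
# EnteraSense's algorithm. The threshold and wavelengths are assumptions.

def blood_detected(absorbance_ratio: float, threshold: float = 2.0) -> bool:
    """Flag blood if absorbance at a hemoglobin-absorbed wavelength is much
    higher than at a reference wavelength."""
    return absorbance_ratio >= threshold

# Hypothetical readings: ratio of absorbance near 415 nm (hemoglobin's Soret
# band) to a reference wavelength, sampled as the capsule transits the stomach.
readings = [1.1, 1.3, 2.6, 3.0]
print(any(blood_detected(r) for r in readings))  # True -> transmit "positive"
```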
Lee and colleagues evaluated the real-world impact of this point-of-care device on clinical triage and resource allocation, while assessing its safety profile.
They analyzed data on 43 patients (mean age 60 years; 72% men) with clinical suspicion of upper GI bleeding in whom the device was used. The most common symptoms were symptomatic anemia (70%), melena (67%), and hematemesis (33%).
Sixteen PillSense studies (37%) were positive for blood detection, and 27 (63%) were negative.
Compared with patients with positive capsule results, those without blood detected had shorter hospital stays (mean, 3.8 vs 13.4 days; P = .02), lower Glasgow-Blatchford scores (mean, 7.93 vs 12.81; P = .005), and fewer units of blood transfused (mean, 1.19 vs 10.94; P = .01), and they were less apt to be hemodynamically unstable (5 vs 8 patients; P = .03).
Capsule results influenced clinical management in 80% of cases, leading to avoidance of EGD in 37% and prioritization of urgent EGD in 18% (all had active bleeding on EGD).
Capsule use improved resource allocation in 51% of cases. This included 12 patients who were discharged from the emergency department (ED), six who were assigned an inpatient bed early, and four who underwent expedited colonoscopy as upper GI bleeding was ruled out, they noted.
Among the eight patients who did not undergo EGD, there were no readmissions within 30 days and no adverse events. There were no capsule-related adverse events.
“Clinicians should consider using this novel capsule PillSense as another data point in the management of suspected upper GI bleed,” Lee told GI & Hepatology News.
“This could include in helping to triage patients for safe discharge from the ED or to more urgent endoscopy, to differentiate between upper vs lower GI bleed and to manage ICU patients with possible rebleeding,” Lee said.
Important Real-World Evidence
Reached for comment, Shahin Ayazi, MD, esophageal surgeon, Director, Allegheny Health Network Chevalier Jackson Esophageal Research Center, Pittsburgh, Pennsylvania, said this study is important for several reasons.
“Prior investigations have established that PillSense possesses a high negative predictive value for detecting upper GI bleeding and have speculated on its utility in triage, decision-making, and potentially avoiding unnecessary endoscopy. This study is important because it substantiates that speculation with clinical data,” Ayazi, who wasn’t involved in the study, told GI & Hepatology News.
“These findings support the capsule’s practical application in patient stratification and clinical workflow, particularly when diagnostic uncertainty is high and endoscopic resources are limited,” Ayazi noted.
In his experience, PillSense is “highly useful as a triage adjunct in the evaluation of suspected upper GI bleeding. It provides direct and objective evidence as to whether blood is currently present in the stomach,” he said.
“In patients whose presentation is ambiguous or whose clinical scores fall into an intermediate risk zone, this binary result can provide clarity that subjective assessment alone may not achieve. This is particularly relevant in settings where the goal is to perform endoscopy within 24 hours, but the volume of consults exceeds procedural capacity,” Ayazi explained.
“In such scenarios, PillSense enables physicians to stratify patients based on objective evidence of active bleeding, helping to prioritize those who require urgent endoscopy and defer or even avoid endoscopic evaluation in those who do not. The result is a more efficient allocation of endoscopic resources without compromising patient safety,” he added.
Ayazi cautioned that the PillSense capsule should not be used as a replacement for clinical evaluation or established risk stratification protocols.
“It is intended for hemodynamically stable patients and has not been validated in cases of active or massive bleeding. Its diagnostic yield depends on the presence of blood in the stomach at the time of capsule transit; intermittent or proximal bleeding that has ceased may not be detected, introducing the potential for false-negative results,” Ayazi told GI & Hepatology News.
“However, in prior studies, the negative predictive value was high, and in the present study, no adverse outcomes were observed in patients who did not undergo endoscopy following a negative PillSense result,” Ayazi noted.
“It must also be understood that PillSense does not localize the source of bleeding or replace endoscopy in patients with a high likelihood of active hemorrhage. It is not designed to detect bleeding from the lower GI tract or distal small bowel. Rather, it serves as an adjunct that can provide immediate clarity when the need for endoscopy is uncertain, and should be interpreted within the broader context of clinical findings, laboratory data, and established risk stratification tools,” he added.
The study had no specific funding. Lee and Ayazi had no relevant disclosures.
A version of this article appeared on Medscape.com.
FROM DDW 2025
Colorectal Cancer Screening Choices: Is Compliance Key?
SAN DIEGO — When it comes to potentially life-saving screening measures, picking the optimal screening tool is critical.
Regarding tests, “perfect is not possible,” said William M. Grady, MD, AGAF, of the Fred Hutchinson Cancer Center, University of Washington School of Medicine, in Seattle, who took part in a debate on the pros and cons of key screening options at Digestive Disease Week® (DDW) 2025.
“We have to remember that that’s the reality of colorectal cancer screening, and we need to meet our patients where they live,” said Grady, who argued on behalf of blood-based tests, including cell-free (cf) DNA (Shield, Guardant Health) and cfDNA plus protein biomarkers (Freenome).
A big point in their favor is their convenience and higher patient compliance — even the best test does not work if it does not get done, he stressed.
He cited data showing suboptimal compliance rates with standard colonoscopy: Rates range from about 70% among non-Hispanic White individuals to 67% among Black individuals, 51% among Hispanic individuals, and just 26% among patients aged between 45 and 50 years.
With troubling increases in CRC incidence among younger patients, “that’s a group we’re particularly concerned about,” Grady said.
Meanwhile, studies show compliance rates with blood-based tests are ≥ 80%, with similar rates across the same racial and ethnic groups that had lower rates for conventional colonoscopy, he noted.
Importantly, in terms of performance in detecting CRC, blood-based tests stand up to other modalities, as demonstrated in a real-world study conducted by Grady and his colleagues showing a sensitivity of 83% for the cfDNA test, 74% for the fecal immunochemical test (FIT) stool test, and 92% for a multitarget stool DNA test compared with 95% for colonoscopy.
“What we can see is that the sensitivity of blood-based tests looks favorable and comparable to other tests,” he said.
Among the four options, cfDNA had the highest patient adherence rate (85%-86%) compared with colonoscopy (28%-42%), FIT (43%-65%), and multitarget stool DNA (48%-60%).
“The bottom line is that these tests decrease CRC mortality and incidence, and we know there’s a potential to improve compliance with colorectal cancer screening if we offer blood-based tests for average-risk people who refuse colonoscopy,” Grady said.
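Grady's point that even the best test does not work if it does not get done can be made concrete with a rough calculation: multiply each test's sensitivity by its adherence to estimate detection per offered screen. The sketch below uses midpoints of the figures quoted in the talk; it is a back-of-envelope illustration that ignores screening intervals and adenoma detection, not an analysis from the presentation.

```python
# Back-of-envelope: CRC detection per *offered* screen, approximated as
# sensitivity x adherence, using midpoints of the figures quoted in the talk.
tests = {
    "cfDNA blood test":      (0.83, 0.855),  # adherence 85%-86%
    "FIT":                   (0.74, 0.54),   # adherence 43%-65%
    "Multitarget stool DNA": (0.92, 0.54),   # adherence 48%-60%
    "Colonoscopy":           (0.95, 0.35),   # adherence 28%-42%
}

for name, (sensitivity, adherence) in tests.items():
    print(f"{name:<22} effective detection = {sensitivity * adherence:.0%}")
# cfDNA comes out highest (about 71%) despite lower sensitivity, which is the
# crux of Grady's argument; note this ignores test intervals and adenomas.
```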
Blood-Based Tests: Caveats, Harms?
Arguing against blood-based tests in the debate, Robert E. Schoen, MD, MPH, professor of medicine and epidemiology, Division of Gastroenterology, Hepatology and Nutrition, at the University of Pittsburgh, in Pittsburgh, Pennsylvania, checked off some of the key caveats.
“While the overall sensitivity of blood-based tests may look favorable, these tests don’t detect early CRC well,” said Schoen. The sensitivity rates for stage 1 CRC are 64.7% with Guardant Health and 57.1% with Freenome.
Furthermore, their rates of detecting advanced adenomas are very low; the rate with Guardant Health is only about 13%, and with Freenome is even lower at 12.5%, he reported.
These rates are “similar to the false positive rate, with poor discrimination and accuracy for advanced adenomas,” Schoen said. “Without substantial detection of advanced adenomas, blood-based testing is inferior [to other options].”
Importantly, the low advanced adenoma rate translates to a lack of CRC prevention, which is key to reducing CRC mortality, he noted.
Essential to success with blood-based tests, as with stool tests, is a follow-up colonoscopy when results are positive, but Schoen pointed out that this may or may not happen.
He cited FIT program data showing that among 33,000 patients with abnormal stool tests, the rate of follow-up colonoscopy within a year, despite the concerning results, was a dismal 56%.
“We have a long way to go to make sure that people who get positive noninvasive tests get followed up,” he said.
In terms of the argument that blood-based screening is better than no screening at all, Schoen cited recent research that projected reductions in the risk for CRC incidence and mortality among 100,000 patients with each of the screening modalities.
Starting with standard colonoscopy performed every 10 years, the reductions in incidence and mortality would be 79% and 81%, respectively, followed by annual FIT, at 72% and 76%; multitarget DNA every 3 years, at 68% and 73%; and cfDNA (Shield), at 45% and 55%.
Based on those rates, if patients originally opting for FIT were to shift to blood-based tests, “the rate of CRC deaths would increase,” Schoen noted.
The findings underscore that “blood testing is unfavorable as a ‘substitution test,’” he added. “In fact, widespread adoption of blood testing could increase CRC morbidity.”
“Is it better than nothing?” he asked. “Yes, but only if performance of a colonoscopy after a positive test is accomplished.”
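Schoen's substitution warning is, at bottom, arithmetic on those modeled reductions. The sketch below makes it explicit; the baseline figure (CRC deaths per 100,000 with no screening) is a hypothetical placeholder, so only the relative comparison matters.

```python
# Sketch of the substitution effect: projected CRC deaths averted per 100,000
# screened, using the modeled mortality reductions quoted above. The baseline
# (deaths per 100,000 with no screening) is a hypothetical placeholder.
BASELINE_DEATHS = 2500  # assumed, for illustration only

mortality_reduction = {
    "Colonoscopy every 10 y":    0.81,
    "Annual FIT":                0.76,
    "Multitarget DNA every 3 y": 0.73,
    "cfDNA (Shield)":            0.55,
}

for test, reduction in mortality_reduction.items():
    print(f"{test:<26} deaths averted = {BASELINE_DEATHS * reduction:,.0f}")

# A FIT user who switches to the blood test forfeits the difference:
lost = BASELINE_DEATHS * (mortality_reduction["Annual FIT"]
                          - mortality_reduction["cfDNA (Shield)"])
print(f"Projected extra deaths per 100,000 switching FIT to cfDNA: {lost:,.0f}")
```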
What About FIT?
Arguing that stool-based testing, or FIT, is the ideal choice as a first-line CRC test, Jill Tinmouth, MD, PhD, a professor at the University of Toronto, Ontario, Canada, pointed to its prominent role in organized screening programs, including in regions where resources may limit the widespread use of routine first-line colonoscopy screening. In addition, FIT narrows colonoscopies to patients who are already prescreened as being at risk.
Data from one such program, reported by Kaiser Permanente of Northern California, showed that participation in CRC screening doubled from 40% to 80% over 10 years after initiating FIT screening. CRC mortality over the same period decreased by 50% from baseline, and incidence fell by as much as 75%.
Regarding follow-up colonoscopy, Tinmouth noted that studies reflecting real-world participation and adherence to FIT in the United Kingdom, the Netherlands, Taiwan, and California show follow-up colonoscopy rates of 88%, 85%, 70%, and 78%, respectively.
Meanwhile, a recent large comparison of biennial FIT (n = 26,719) vs one-time colonoscopy (n = 26,332) screening, the first study to directly compare the two, showed noninferiority, with nearly identical rates of CRC mortality at 10 years (0.22% colonoscopy vs 0.24% FIT) as well as CRC incidence (1.13% vs 1.22%, respectively).
“This study shows that in the context of organized screening, the benefits of FIT are the same as colonoscopy in the most important outcome of CRC — mortality,” Tinmouth said.
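For context, the absolute gap behind those nearly identical rates is small. The sketch below converts the reported percentages into an absolute risk difference and a number needed to screen; it is a simplification that ignores the trial's confidence intervals, which carry the formal noninferiority claim.

```python
# Absolute risk difference and number needed to screen (NNS), from the
# reported 10-year CRC mortality rates in the FIT vs colonoscopy trial.
fit_mortality = 0.0024    # 0.24% CRC mortality at 10 years with biennial FIT
colo_mortality = 0.0022   # 0.22% with one-time colonoscopy

risk_difference = fit_mortality - colo_mortality
print(f"Absolute risk difference: {risk_difference:.2%}")
print(f"NNS with colonoscopy to avert one CRC death vs FIT: "
      f"{1 / risk_difference:,.0f}")
# About 5,000 people, which is why the two arms read as nearly identical.
```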
Furthermore, as with blood-based screening, participation with FIT is much more even across racial and ethnic groups than that observed with colonoscopy.
“FIT has clear and compelling advantages over colonoscopy,” she said. As well as better compliance among all groups, “it is less costly and also better for the environment [by using fewer resources],” she added.
Colonoscopy: ‘Best for First-Line Screening’
Making the case that standard colonoscopy should in fact be the first-line test, Swati G. Patel, MD, director of the Gastrointestinal Cancer Risk and Prevention Center at the University of Colorado Anschutz Medical Center, Aurora, Colorado, emphasized the robust, large population studies showing its benefits. Among them is a landmark national policy study showing a significant reduction in CRC incidence and mortality associated with first-line colonoscopy and adenoma removal.
A multitude of other studies in different settings have also shown similar benefits across large populations, Patel added.
In terms of its key advantages over FIT, the once-a-decade screening requirement for average-risk patients is seen as highly favorable by many, as evidenced in clinical trial data showing that individuals highly value tests that are accurate and do not need to be completed frequently, she said. Research from various other trials of organized screening programs further showed patients crossing over from FIT to colonoscopy, including one study of more than 3500 patients comparing colonoscopy and FIT, which had approximately 40% adherence with FIT vs nearly 90% with colonoscopy.
Notably, as many as 25% of the patients in the FIT arm in that study crossed over to colonoscopy, presumably due to preference for the once-a-decade regimen, Patel said.
“Colonoscopy had a substantial and impressive long-term protective benefit both in terms of developing colon cancer and dying from colon cancer,” she said.
Regarding the head-to-head FIT and colonoscopy comparison that Tinmouth described, Patel noted that a supplemental table in the study’s appendix of patients who completed screening does reveal increasing separation between the two approaches, favoring colonoscopy, in terms of longer-term CRC incidence and mortality.
The collective findings underscore that “colonoscopy as a standalone test is uniquely cost-effective” in the face of costs related to colon cancer treatment, she said.
Instead of relying on biennial tests with FIT, colonoscopy allows clinicians to immediately risk-stratify those individuals who can benefit from closer surveillance and really relax surveillance for those who are determined to be low risk, she said.
Grady had been on the scientific advisory boards for Guardant Health and Freenome and had consulted for Karius. Schoen reported relationships with Guardant Health and grant/research support from Exact Sciences, Freenome, and Immunovia. Tinmouth had no disclosures to report. Patel disclosed relationships with Olympus America and Exact Sciences.
A version of this article appeared on Medscape.com.
FROM DDW 2025
Barrett’s Esophagus: No Survival Difference Between Regular and At-Need Surveillance
SAN DIEGO — Gastroenterologists have debated the best course of action for patients with Barrett’s esophagus for decades. Which is better for detecting early malignancy and preventing progression to esophageal adenocarcinoma (EAC) — surveillance endoscopy at regular intervals or only when symptoms occur? Does one offer a better chance of survival than the other?
Now, researchers who conducted what they believe is the first randomized clinical trial comparing the two approaches say they have the answer.
Regular surveillance offered no survival advantage over endoscopy performed only when symptoms arise, said Oliver Old, MD, a consultant upper-GI surgeon at Gloucestershire Royal Hospital, England, who presented the findings at Digestive Disease Week® (DDW) 2025.
At-need endoscopy may be a safe alternative for low-risk patients, the research team concluded.
The BOSS Trial
The Barrett’s Oesophagus Surveillance Versus Endoscopy At Need Study (BOSS) ran from 2009 to 2024 at 109 centers in the UK. A total of 3452 patients with Barrett’s esophagus (1 cm circumferential, or a 2-cm noncircumferential tongue or island) were followed for a minimum of 10 years.
Researchers randomly assigned patients to undergo upper gastrointestinal endoscopy with biopsy every 2 years (the standard of care when the trial was set up) or endoscopy “at-need” when symptoms developed. Patients in the latter group were counseled about risk and were offered endoscopy for a range of alarm symptoms.
The study found no statistically significant difference in all-cause mortality risk between the two groups. Over the study period, 333 of 1733 patients (19.2%) in the surveillance group died, as did 356 of 1719 patients (20.7%) in the at-need group.
Similarly, no statistically significant between-group difference was found in the risk for cancer-specific mortality. About 6.2% of patients died from cancer in both groups — 108 in the regular surveillance group and 106 in the at-need group.
Nor was there a statistically significant difference in diagnosis of EAC, with 40 regular surveillance patients (2.3%) and 31 at-need patients (1.8%) receiving the diagnosis over median follow-up of 12.8 years. Cancer stage at diagnosis did not differ significantly between groups.
“The really low rate of progression to esophageal adenocarcinoma” was a key finding, Old said. The rate of progression to EAC was 0.23% per patient per year, he said.
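A rate quoted as a percentage per patient per year is an incidence rate: events divided by accumulated person-years of follow-up. The sketch below shows the arithmetic with illustrative inputs, not the trial's raw counts, which were not broken out this way in the presentation.

```python
# Incidence-rate arithmetic behind a "percent per patient per year" figure.
# Inputs are illustrative, not the BOSS trial's raw counts.
eac_cases = 80           # hypothetical EAC diagnoses during follow-up
person_years = 35_000    # hypothetical total follow-up time across the cohort

rate = eac_cases / person_years
print(f"Progression to EAC: {rate:.2%} per patient per year")
# Prints 0.23%; roughly 3,450 patients followed for about a decade accrue
# person-years on this order, which is how such small annual rates arise.
```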
Low- or high-grade dysplasia was detected in 10% of patients in the regular surveillance group, compared with 4% in the at-need group.
The mean interval between endoscopies was 22.9 months for the regular surveillance group and 31.5 months for the at-need group, and the median interval was 24.8 months and 25.7 months, respectively. The mean number of endoscopies was 3.5 in the regular surveillance group and 1.4 in the at-need group.
Eight patients in the regular surveillance group (0.46%) and seven in the at-need group (0.41%) reported serious adverse events.
Will BOSS Change Minds?
Current surveillance practices “are based on pure observational data, and the question of whether surveillance EGD [esophagogastroduodenoscopy] impacts EAC diagnosis and mortality has been ongoing,” said Margaret Zhou, MD, MS, clinical assistant professor at Stanford University School of Medicine, Stanford, California. A randomized clinical trial on the subject has been needed for years, she added.
However, Zhou said, “In my opinion, this study does not end the debate and will not change my practice of doing surveillance endoscopy on NDBE [nondysplastic Barrett’s esophagus], which I typically perform every 3-5 years, based on current guidelines.”
The American Gastroenterological Association clinical practice guideline, issued in June 2024, addresses surveillance and focuses on a patient-centered approach when deciding on treatment or surveillance.
Patients in the at-need endoscopy arm underwent endoscopy almost as frequently as the patients randomly assigned to regular surveillance, at a median interval of about 2 years, Zhou noted. Therefore, she said, “It’s difficult to conclude from this study that surveillance endoscopy has no impact.”
Additionally, the study was underpowered to detect a difference in all-cause mortality and assumed a progression rate for nondysplastic Barrett’s esophagus that is higher than the current understanding, Zhou said. “It also did not address the important question of EAC-related mortality, which would be an important outcome to be able to assess whether surveillance EGD has an impact,” she said.
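Zhou's point about power can be checked with standard two-proportion power arithmetic. The sketch below uses the published arm sizes and observed all-cause mortality rates; it assumes the statsmodels package is available and is a rough check, not the trial's formal power analysis.

```python
# Rough power check for BOSS: chance of detecting the observed all-cause
# mortality difference (19.2% vs 20.7%) with ~1,700 patients per arm.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.192, 0.207)  # Cohen's h for two proportions
power = NormalIndPower().power(effect_size=effect, nobs1=1733,
                               alpha=0.05, ratio=1719 / 1733)
print(f"Power to detect the observed difference: {power:.0%}")
# Roughly 20%, far below the conventional 80% target, consistent with the
# criticism that the trial was underpowered for all-cause mortality.
```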
Joel H. Rubenstein, MD, MSc, AGAF, director of the Barrett’s Esophagus Program and professor in the Division of Gastroenterology at the University of Michigan Medical School, Ann Arbor, agreed that the study doesn’t answer the pressing question of whether surveillance works.
While Rubenstein said he would not tell colleagues or patients to stop routine surveillance in patients with Barrett’s esophagus on the basis of these results, “it is a reminder that we should be circumspect in who we label as having Barrett’s esophagus, and we should be more proactive in discussing discontinuation of surveillance in patients based on advancing age and comorbidities.”
The study was funded by the UK’s National Institute for Health and Care Research. Zhou is a consultant for CapsoVision and Neptune Medical. Rubenstein has received research funding from Lucid Diagnostics. Old reported no disclosures.
A version of this article appeared on Medscape.com.
FROM DDW 2025
Post-Polypectomy Colorectal Cancers Common Before Follow-Up
SAN DIEGO — A substantial share of colorectal cancers (CRCs) that occur after polypectomy are detected before the recommended follow-up surveillance exam, according to new research.
Of key factors linked to a higher risk for such cases, one stands out — the quality of the baseline colonoscopy procedure.
“A lot of the neoplasia that we see after polypectomy was probably either missed or incompletely resected at baseline,” said Samir Gupta, MD, AGAF, a professor of medicine in the Division of Gastroenterology, UC San Diego Health, La Jolla, California, in discussing the topic at Digestive Disease Week® (DDW) 2025.
“Therefore, what is key to emphasize is that [colonoscopy] quality is probably the most important factor in post-polypectomy risk,” he said. “But, advantageously, it’s also the most modifiable factor.”
Research shows that the risk for CRC following a colonoscopy ranges from about 3.4 to 5 cases per 10,000 person-years when baseline findings show no adenoma or low-risk findings; however, higher rates, ranging from 13.8 to 20.9 cases per 10,000 person-years, are observed for high-risk adenomas or serrated polyps, Gupta reported.
“Compared with those who have normal colonoscopy, the risk [for CRC] with high-risk adenomas is increased by nearly threefold,” Gupta said.
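To put those figures side by side, here is a minimal back-of-the-envelope comparison using only the rate ranges cited above; this is an illustrative calculation, not one presented at the session:

```python
# Post-colonoscopy CRC rates cited above, in cases per 10,000 person-years
low_risk_rates = (3.4, 5.0)     # no adenoma or only low-risk findings at baseline
high_risk_rates = (13.8, 20.9)  # high-risk adenomas or serrated polyps at baseline

# Most conservative contrast: lowest high-risk rate vs highest low-risk rate
ratio = high_risk_rates[0] / low_risk_rates[1]
print(f"{ratio:.1f}x")  # 2.8x, consistent with the "nearly threefold" increase described
```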
In a recent study of US veterans who underwent a colonoscopy with polypectomy, negative for cancer, between 1999 and 2016, Gupta and his colleagues found that over a median follow-up of 3.9 years, as many as 55% of the 396 CRCs that occurred post-polypectomy were detected prior to the recommended surveillance colonoscopy.
The study also showed that 40% of post-polypectomy CRC deaths occurred prior to the recommended surveillance exam over a median follow-up of 4.2 years.
Cancers detected prior to the recommended surveillance exam were more likely to be diagnosed as stage IV compared with those diagnosed later (16% prior to recommended surveillance vs 2.1% and 8.3% during and after, respectively; P = .003).
Importantly, the most prominent reason for the cancers emerging in the interval before follow-up surveillance was missed lesions during the baseline colonoscopy (60%), Gupta said.
Colonoscopist Skill and Benchmarks
A larger study of 173,288 colonoscopies further underscores colonoscopist skill as a key factor in post-polypectomy CRC, showing that colonoscopists with low vs high performance quality — defined as an adenoma detection rate (ADR) < 20% vs ≥ 20% — had higher 10-year cumulative rates of CRC incidence among patients following a negative colonoscopy (P < .001).
Likewise, in another analysis of low-risk vs high-risk polyps, higher colonoscopist performance was significantly associated with lower rates of CRC (P < .001).
“Higher colonoscopist performance was associated with a lower cumulative colorectal cancer risk within each [polyp risk] group, such that the cumulative risk after high-risk adenoma removal by a higher performing colonoscopist is similar to that in patients who had a low-risk adenoma removed by a lower performer,” Gupta explained.
“So, this has nothing to do with the type of polyp that was removed — it really has to do with the quality of the colonoscopist,” he said.
The American College of Gastroenterology and the American Society for Gastrointestinal Endoscopy Quality Task Force recently updated recommended benchmarks for colonoscopists for detecting polyps, said Aasma Shaukat, MD, AGAF, director of GI Outcomes Research at NYU Grossman School of Medicine, New York City, in further discussing the issue in the session.
They recommend a minimum overall ADR of 35%, with benchmarks of ≥ 40% for men aged 45 years or older, ≥ 30% for women aged 45 years or older, and 50% for patients aged 45 years or older with an abnormal stool test, Shaukat explained.
And “these are minimum benchmarks,” she said. “Multiple studies suggest that, in fact, the reported rates are much higher.”
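ADR itself is a simple proportion: the share of screening colonoscopies in which at least one adenoma is found, which practices can audit directly from procedure logs. The sketch below is a minimal, hypothetical illustration; the record format is invented, and only the thresholds come from the figures Shaukat cited:

```python
def adenoma_detection_rate(exams: list[dict]) -> float | None:
    """ADR = screening colonoscopies with >= 1 adenoma / all screening colonoscopies.

    `exams` is a hypothetical procedure log, with entries like
    {"screening": True, "adenomas_found": 2}.
    """
    screening = [e for e in exams if e["screening"]]
    if not screening:
        return None
    return sum(e["adenomas_found"] >= 1 for e in screening) / len(screening)

# Minimum benchmarks cited in the session (patients aged >= 45 years)
BENCHMARKS = {
    "overall": 0.35,
    "men": 0.40,
    "women": 0.30,
    "abnormal_stool_test": 0.50,
}
```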
Among key strategies for detecting elusive adenomas is slowing the scope’s withdrawal during colonoscopy in order to take as close a look as possible, Shaukat emphasized.
She noted research her team has published showing that withdrawal time was inversely associated with the risk for cancers occurring prior to the recommended surveillance; that is, shorter withdrawal times were associated with higher risk (P < .0001).
“Multiple studies have shown it isn’t just the time but the technique with withdrawal,” she added, underscoring the need to flatten as much of the mucosa and folds as possible during the withdrawal. “It’s important to perfect our technique.”
Sessile serrated lesions, with often subtle and indistinct borders, can be among the most difficult polyps to remove, Shaukat noted. Studies have shown that as many as 31% of sessile serrated lesions are incompletely resected, compared with about 7% of tubular adenomas.
Patient Compliance Can’t Be Counted On
In addition to physician-related factors, patients themselves can also play a role in post-polypectomy cancer risk — specifically in not complying with surveillance recommendations, with reasons ranging from cost to the invasiveness and burden of undergoing a surveillance colonoscopy.
“Colonoscopies are expensive, and participation is suboptimal,” Gupta said.
One study of patients with high-risk adenomas showed that only 64% received surveillance, and many of those who did received it late, he noted.
This underscores the need for better prevention as well as follow-up strategies, he added.
Surveillance recommendations from the World Endoscopy Organization call for exams every 3-10 years for patients with polyps, depending on the number, size, and type of polyps, and every 10 years for those with a normal colonoscopy and no polyps.
A key potential solution for improving patient monitoring within those intervals is the fecal immunochemical test (FIT), a noninvasive stool test that checks for hidden blood and is substantially less burdensome than colonoscopy, Gupta said.
While FITs can’t replace the gold standard of colonoscopy, they nevertheless can play an important role in monitoring patients, he said.
Evidence supporting their benefits includes an important recent study of 2226 patients who underwent post-polypectomy colonoscopy, FIT (FOB Gold or OC-Sensor), or a FIT-fecal DNA (Cologuard) test, he noted.
The results showed that, for detecting CRC, the OC-Sensor FIT had a sensitivity of 71% and FIT-fecal DNA had a sensitivity of 86%.
Importantly, the study found that a positive FIT result prior to the recommended surveillance colonoscopy reduced the time-to-diagnosis for CRC and advanced adenoma by a median of 30 and 20 months, respectively.
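Sensitivity here carries its usual meaning: the proportion of confirmed cancers the test correctly flags. A minimal sketch makes the figures concrete; the case counts below are hypothetical, chosen only to reproduce the reported percentages:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of confirmed CRC cases the stool test correctly flagged."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts that reproduce the sensitivities reported above
print(sensitivity(71, 29))  # 0.71 -- OC-Sensor FIT
print(sensitivity(86, 14))  # 0.86 -- FIT-fecal DNA
```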
FIT Tests Potentially a ‘Major Advantage’
“The predictive models and these noninvasive tests are likely better than current guidelines for predicting who has metachronous advanced neoplasia or colon cancer,” Gupta said.
“For this reason, I really think that these alternatives have a potentially major advantage in reducing colonoscopy burdens. These alternatives are worth studying, and we really do need to consider them,” he said.
More broadly, the collective evidence points to factors that can and should be addressed with proactive diligence, Gupta noted.
“We need to be able to shift from using guidelines that are just based on the number, size, and histology of polyps to a scenario where we’re doing very high-quality colonoscopies with excellent ADR rates and complete polyp excision,” Gupta said.
Furthermore, “the use of tools for more precise risk stratification could result in a big, low-risk group that could just require 10-year colonoscopy surveillance or maybe even periodic noninvasive surveillance, and a much smaller high-risk group that we could really focus our attention on, doing surveillance colonoscopy every 3-5 years or maybe even intense noninvasive surveillance.”
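As a rough sketch of the two-tier triage Gupta describes, the snippet below maps a risk score to the intervals he mentions; the score and cutoff are placeholders for illustration, not a validated model or anything presented at the session:

```python
def surveillance_plan(risk_score: float, cutoff: float = 0.1) -> str:
    """Toy two-tier triage in the spirit of Gupta's proposal.

    `risk_score` would come from a validated risk-stratification tool
    (none is specified here); `cutoff` is an arbitrary placeholder.
    """
    if risk_score < cutoff:
        # Large low-risk group: long intervals, possibly noninvasive tests
        return "colonoscopy every 10 years or periodic noninvasive surveillance"
    # Small high-risk group: focus colonoscopy resources here
    return "colonoscopy every 3-5 years or intensive noninvasive surveillance"
```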
Gupta’s disclosures included relationships with Guardant Health, Universal DX, CellMax, and Geneoscopy. Shaukat’s disclosures included relationships with Iterative Health and Freenome.
A version of this article appeared on Medscape.com.
FROM DDW 2025