Chemotherapy Linked to Brain Atrophy in Patients With Breast Cancer
Patients with breast cancer who undergo chemotherapy may face an increased risk for brain atrophy and cognitive decline, new findings from a pilot study suggest.
Memory problems in patients with cancer may not stem solely from stress or anxiety related to their diagnosis but could reflect underlying changes in brain structure, study investigator Paul Edison, PhD, MPhil, professor of neuroscience and clinical professor of neurology at Imperial College London, England, told this news organization.
While the findings suggest that chemotherapy may contribute to neuronal damage, the researchers noted that many aspects of the relationship between treatment and brain changes remain unclear.
Edison highlighted three key areas that require further investigation — uncovering the mechanisms driving brain atrophy, determining the proportion of patients affected, and identifying effective prevention strategies.
Another investigator on the study, Laura Kenny, MD, PhD, associate professor and consultant medical oncologist at Imperial College London, noted that the issue has received limited attention to date but expressed hope that the findings will raise awareness and encourage further research, given its clinical importance.
The findings were presented on July 29 at the Alzheimer’s Association International Conference (AAIC) 2025.
Investigating Cognitive Impact
Advances in chemotherapeutic agents have improved survival rates in patients with cancer. However, challenges persist regarding the long-term impact of these drugs.
Chemotherapy-associated cognitive impairment, often referred to as “brain fog” or “chemobrain,” affects approximately one third of patients with breast cancer following treatment.
While cognitive decline resolves within 12 months for some patients, others experience persistent effects that may elevate the risk for neurodegenerative conditions, Edison explained.
To evaluate the impact of chemotherapy on the brain, investigators studied 328 women with nonmetastatic breast cancer who had undergone chemotherapy within the past 12 months.
Patients received an anthracycline — a drug class derived from the bacterium Streptomyces peucetius — a taxane such as docetaxel or paclitaxel, both commonly used in breast cancer treatment, or a combination of these agents. Some patients may also have received hormone therapy at some point during treatment, said Kenny.
Participants completed neurocognitive prescreening tests every 3 months using a specialized artificial intelligence–driven platform, allowing them to take detailed memory assessments online from home.
Among those prescreened, 18 individuals with lower neurocognitive scores (mean age, 55 years) and 19 cognitively normal control individuals without breast cancer (mean age, 67 years) underwent comprehensive in-person neurocognitive evaluations and MRI scans.
Researchers analyzed the scans using region of interest (ROI) analysis and voxel-based morphometry (VBM) to assess gray matter volumes and surface areas.
The ROI analysis revealed significant reductions in gray matter volume (measured in mm³) and surface area (measured in mm²) among patients experiencing chemobrain, particularly affecting the isthmus cingulate and pars opercularis, with changes extending into the orbitofrontal and temporal regions.
Significant Atrophy
The VBM analysis confirmed significant atrophy in the frontal, parietal, and cingulate regions of patients with chemobrain compared with control individuals (P < .05). Edison noted that this pattern overlaps with brain changes typically observed in Alzheimer’s disease and vascular cognitive impairment.
For both analyses, “we demonstrated there is some amount of shrinkage in the brain among patients with chemobrain,” he said. “The fact that controls are older means the results are even more significant as there’s more brain atrophy as people age.”
Some of the affected brain regions may be linked to impaired memory, a hallmark of Alzheimer’s disease, but Edison noted that, given the small sample size, this finding should be interpreted with caution.
While the analysis demonstrated overall lower brain volumes in patients with “chemobrain” compared with controls, Edison emphasized that this finding reflects a single time point and does not indicate brain shrinkage over time.
Other events, including stroke, can also cause brain changes.
Edison highlighted the importance of determining the significance of these brain changes, how they affect patients, and whether they can be prevented.
In-person neurocognitive testing revealed significantly reduced semantic and verbal fluency, as well as lower Mini-Mental State Examination scores in patients with chemobrain. Edison noted that these results support the MRI findings.
The team plans to follow patients to track brain changes and memory recovery, Kenny said. While breast cancer is a common focus, the researchers intend to expand the study to other cancers in both men and women.
Based on discussions with her oncology colleagues, Kenny noted that many patients anecdotally report experiencing memory problems during chemotherapy.
More Research Needed
Commenting for this news organization, Rebecca M. Edelmayer, PhD, vice president, scientific engagement, at the Alzheimer’s Association, said the research may help shed light on why women are more likely to develop dementia than men.
For years now, experts have been trying to figure out what puts women at higher risk for Alzheimer’s disease and other dementias, said Edelmayer.
“We still don’t understand whether this involves biologically driven risk factors or socially driven risk factors.”
Research linking treatments for other health conditions to increased memory problems may offer some clues, she noted, suggesting a potential avenue for further investigation into the intersection of chemotherapy and neurodegenerative diseases such as Alzheimer’s.
However, Edelmayer emphasized that this line of research is still in its infancy. Much more work is needed to determine whether there is a direct cause-and-effect relationship with specific chemotherapy drugs, and whether some patients may already be predisposed or at higher risk for cognitive decline, she said.
Also commenting for this news organization, Eric Brown, MD, associate scientist and associate chief of geriatric psychiatry at the Centre for Addiction and Mental Health in Toronto, raised concerns about the study’s design.
One issue, he noted, is that the researchers did not image all patients who received chemotherapy but instead selected those with the most significant cognitive impairment. As a result, the findings may not reflect outcomes in the average post-chemotherapy patient but rather represent the most severely affected subgroup.
Brown pointed out that the study did not clarify whether this subgroup had comorbid conditions. It’s possible, he said, that some individuals may have had Alzheimer’s disease or other forms of dementia unrelated to chemotherapy.
He agreed that tracking longitudinal changes in both cognitive scores and neuroimaging — comparing patients who receive chemotherapy with those who do not — would be a valuable next step.
The investigators, Edelmayer, and Brown reported no relevant conflicts of interest. A version of this article first appeared on Medscape.com.
These Two Simple Interventions May Cut Colorectal Cancer Recurrence Risk
This transcript has been edited for clarity.
New guidelines have lowered the age to begin screening for colon cancer to 45 years old. Although this change is positive, we’re still seeing advanced cancer in younger patients who haven’t been screened in time.
Once diagnosed, these patients undergo surgery and chemotherapy and often return to us asking, “What can I do now to help myself?”
Two recent studies highlight interventions that are simple, affordable, and actionable today: exercise and aspirin. Let’s take a closer look at the results.
Exercise’s Risk Reduction Potential
The idea that exercise reduces cancer recurrence and mortality is supported by observational data. The mechanistic effects behind this have been ascribed to metabolic growth factors, inflammatory changes, immune function changes, and perhaps even positive impact on sleep.
A study just published in The New England Journal of Medicine examined structured exercise after adjuvant chemotherapy for colon cancer. The phase 3 randomized CHALLENGE trial, mostly conducted at Canadian and Australian centers, recruited patients with resected stage II or III colon cancer (9.8% and 90.2%, respectively) who had completed adjuvant chemotherapy. Patients with recurrences within a year of diagnosis were excluded, as they were more likely to have highly aggressive, biologically active disease.
Patients were randomized to receive healthcare education materials alone or in conjunction with a structured exercise program over a 3-year follow-up period.
The exercise intervention, delivered in person or virtually, focused on increasing recreational aerobic activity over baseline by at least 10 metabolic equivalent of task (MET) hours per week. An increment of 10 MET hours per week is not especially vigorous; it is roughly the equivalent of adding 45-60 minutes of brisk walking or 25-30 minutes of jogging 3-4 times a week.
Patients were asked to increase their MET hours over the first 6 months and then maintain or further increase that amount over the next 2.5 years. They were permitted to structure their own exercise program by choosing the type, frequency, intensity, and duration of aerobic exercise.
The primary endpoint was disease-free survival, with secondary endpoints assessing overall survival, patient-reported outcomes, and other outcomes. Although designed to detect differences at 3 years, follow-up was also performed out to 5 and 8 years.
At a median follow-up of 7.9 years, exercise reduced the relative risk of disease recurrence, new primary cancer, or death by 28% (P = .02). This benefit persisted — and even strengthened — over time, with disease-free survival increasing by 6.4 and 7.1 percentage points at 5 and 8 years, respectively.
Musculoskeletal adverse events were slightly higher in the exercise group compared with the health education group (18.5% vs 11.5%, respectively), but only 10% were directly attributed to the exercise.
There are considerations when interpreting these results. First, compliance with the training program declined over time; it would be interesting to see whether that attrition affected the results. Second, it’s unclear whether family history or a genomic pathway might predispose patients in the exercise group to benefit.
But overall, this phase 3 trial provides class 1 evidence supporting exercise as a low-cost, high-impact intervention to reduce cancer recurrence.
Adjuvant Aspirin in Colon Cancer Subset
That’s a perfect segue into another recent study looking at the effects of adjuvant aspirin on the prevention of recurrence.
The ALASCCA trial — conducted across centers in Sweden, Denmark, Finland, and Norway — assessed patients with stage I-III rectal cancer or stage II-III colon cancer. It focused on a subset of patients with mutations in the oncogene PIK3CA (phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha).
PIK3CA mutations occur in approximately one third of colon cancers and are associated with significant chemotherapy resistance and a higher rate of disease progression.
Of the included patients, 1103 (37%) had alterations in the PIK3CA pathway. Researchers randomized patients to receive either 160 mg of aspirin or placebo daily for 3 years, starting within 3 months of surgery.
Among patients with PIK3CA mutations, aspirin reduced the risk for recurrence by nearly 50% at 3 years (P = .044). Adverse events associated with aspirin were minimal, including one case each of gastrointestinal bleeding, hematoma, and allergic reaction.
There is no evidence that higher aspirin doses provide greater prevention of colorectal cancer recurrence. The 160-mg dose used in the current study is fairly standard, roughly equivalent to two low-dose (81-mg) aspirin tablets.
Now, it’s worth noting that the use of aspirin for the primary prevention of cardiovascular disease was initially recommended by the US Preventive Services Task Force in 2016. That recommendation was withdrawn in 2022, when the same group reported limited net benefit with this approach.
Two Proactive Actions
These studies highlight 2 interventions — exercise and aspirin — that are low cost, accessible, and appealing to patients eager to help prevent their cancer from recurring.
Exercise is broadly beneficial and can be recommended immediately.
For aspirin, patients should work with their oncologist to determine their PIK3CA mutation status, as this subgroup appears to benefit the most.
These findings offer patients meaningful, proactive interventions they can apply to support their recovery and reduce the risk of recurrence. Hopefully these new findings will help guide your clinical conversations.
Johnson is a regular contributor to Medscape. He is professor of medicine and chief of gastroenterology at Eastern Virginia Medical School in Norfolk, and a past president of the American College of Gastroenterology. His primary focus is the clinical practice of gastroenterology. He has published extensively in the internal medicine/gastroenterology literature, with principal research interests in esophageal and colon disease, and more recently in sleep and microbiome effects on gastrointestinal health and disease. He disclosed that he is an adviser for ISOThrive.
A version of this article appeared on Medscape.com.
These findings offer patients meaningful, proactive interventions they can apply to support their recovery and reduce the risk of recurrence. Hopefully these new findings will help guide your clinical conversations.
Johnson is a regular contributor to Medscape. He is professor of medicine and chief of gastroenterology at Eastern Virginia Medical School in Norfolk, and a past president of the American College of Gastroenterology. His primary focus is the clinical practice of gastroenterology. He has published extensively in the internal medicine/gastroenterology literature, with principal research interests in esophageal and colon disease, and more recently in sleep and microbiome effects on gastrointestinal health and disease. He disclosed that he is an adviser for ISOThrive.
A version of this article appeared on Medscape.com.
This transcript has been edited for clarity.
New guidelines have lowered the recommended age to begin colon cancer screening to 45 years. Although this change is positive, we’re still seeing advanced cancer in younger patients who haven’t been screened in time.
Once diagnosed, these patients undergo surgery and chemotherapy and often return to us asking, “What can I do now to help myself?”
Two recent studies highlight interventions that are simple, affordable, and actionable today: exercise and aspirin. Let’s take a closer look at the results.
Exercise’s Risk Reduction Potential
The idea that exercise reduces cancer recurrence and mortality is supported by observational data. The mechanistic effects behind this have been ascribed to metabolic growth factors, inflammatory changes, immune function changes, and perhaps even positive impact on sleep.
A study just published in The New England Journal of Medicine examined structured exercise after adjuvant chemotherapy for colon cancer. The phase 3 randomized CHALLENGE trial, mostly conducted at Canadian and Australian centers, recruited patients with resected stage II or III colon cancer (9.8% and 90.2%, respectively) who had completed adjuvant chemotherapy. Patients with recurrences within a year of diagnosis were excluded, as they were more likely to have highly aggressive, biologically active disease.
Patients were randomized to receive healthcare education materials alone or in conjunction with a structured exercise program over a 3-year follow-up period.
The exercise intervention, delivered in person or virtually, focused on increasing recreational aerobic activity over baseline by at least 10 metabolic equivalent task (MET) hours per week. An increment of 10 MET hours per week is not especially vigorous; it is essentially the equivalent of adding about 45-60 minutes of brisk walking or 25-30 minutes of jogging 3-4 times a week.
Patients were asked to increase MET over the first 6 months and then maintain or further increase the amount over the next 2.5 years. They were permitted to structure their own exercise program by choosing the type, frequency, intensity, and duration of aerobic exercise.
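As a rough illustration of the MET arithmetic above, the weekly increment can be sketched as MET value × hours per session × sessions per week. The MET values below are approximate figures of the kind published in the Compendium of Physical Activities, not numbers taken from the trial itself:

```python
# Hypothetical sketch of the MET-hours arithmetic; the MET values
# (brisk walking ~4 METs, jogging ~8 METs) are approximations, not
# figures from the CHALLENGE trial.

def weekly_met_hours(met_value: float, minutes_per_session: float,
                     sessions_per_week: int) -> float:
    """MET-hours/week = MET value x hours per session x sessions per week."""
    return met_value * (minutes_per_session / 60) * sessions_per_week

# ~50 minutes of brisk walking, 3 times a week:
walking = weekly_met_hours(4.0, 50, 3)   # ~10 MET-hours/week
# ~25 minutes of jogging, 3 times a week:
jogging = weekly_met_hours(8.0, 25, 3)   # ~10 MET-hours/week
print(f"walking: {walking:.1f}, jogging: {jogging:.1f}")
```

Either schedule reaches the roughly 10 MET-hour weekly increment described above, which is why the prescription could be left to patient preference.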
The primary endpoint was disease-free survival, with secondary endpoints including overall survival and patient-reported outcomes. Although the trial was designed to detect differences at 3 years, follow-up was also performed out to 5 and 8 years.
At a median follow-up of 7.9 years, exercise reduced the relative risk of disease recurrence, new primary cancer, or death by 28% (P = .02). This benefit persisted — and even strengthened — over time, with disease-free survival increasing by 6.4 and 7.1 percentage points at 5 and 8 years, respectively.
Musculoskeletal adverse events were slightly higher in the exercise group compared with the health education group (18.5% vs 11.5%, respectively), but only 10% were directly attributed to the exercise.
There are considerations when interpreting these results. First, compliance with the training program waned over time, and it would be interesting to see whether that attrition affected the results. Second, it’s unclear whether family history or a genomic pathway may predispose some patients in the exercise group to benefit.
But overall, this phase 3 trial provides class 1 evidence supporting exercise as a low-cost, high-impact intervention to reduce cancer recurrence.
Adjuvant Aspirin in Colon Cancer Subset
That’s a perfect segue into another recent study looking at the effects of adjuvant aspirin on the prevention of recurrence.
The ALASCCA trial — conducted across centers in Sweden, Denmark, Finland, and Norway — assessed patients with stage I-III rectal cancer or stage II-III colon cancer. It focused on a subset of patients with an oncogenic abnormality called PIK3CA (phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha).
Mutations in PIK3CA occur in approximately a third of colon cancers and are associated with significant chemotherapy resistance and a higher rate of disease progression.
Of the included patients, 1103 (37%) had alterations in the PIK3CA pathway. Researchers randomized patients to receive either 160 mg of aspirin or placebo daily for 3 years, starting within 3 months of surgery.
Among patients with PIK3CA mutations, aspirin dramatically reduced the risk for recurrence by nearly 50% at 3 years (P = .044). Adverse events associated with aspirin were minimal, including one case each of gastrointestinal bleeding, hematoma, and allergic reaction.
There is no evidence that higher aspirin doses provide greater prevention of colorectal cancer recurrence. The 160-mg dose used in the current study is fairly standard, roughly equivalent to two low-dose (81-mg) aspirin tablets.
Now, it’s worth noting that the use of aspirin for the primary prevention of cardiovascular disease was initially recommended by the US Preventive Services Task Force in 2016. This recommendation was then reversed in 2022, when the same group reported limited net benefit from this approach.
Two Proactive Actions
These studies highlight two interventions — exercise and aspirin — that are low cost, accessible, and appealing to patients eager to help prevent their cancer from recurring.
Exercise is broadly beneficial and can be recommended immediately.
For aspirin, patients should work with their oncologist to determine their PIK3CA mutation status, as this subgroup appears to benefit the most.
These findings offer patients meaningful, proactive interventions they can apply to support their recovery and reduce the risk of recurrence. Hopefully these new findings will help guide your clinical conversations.
Johnson is a regular contributor to Medscape. He is professor of medicine and chief of gastroenterology at Eastern Virginia Medical School in Norfolk, and a past president of the American College of Gastroenterology. His primary focus is the clinical practice of gastroenterology. He has published extensively in the internal medicine/gastroenterology literature, with principal research interests in esophageal and colon disease, and more recently in sleep and microbiome effects on gastrointestinal health and disease. He disclosed that he is an adviser for ISOThrive.
A version of this article appeared on Medscape.com.
Endometrial Cancer: 5 Things to Know
Endometrial cancer is a common type of gynecologic cancer, and its incidence is rising steadily in the United States and globally. Most cases are endometrioid adenocarcinomas, arising from the inner lining of the uterus — the endometrium. While many patients are diagnosed early because of noticeable symptoms like abnormal bleeding, trends in both incidence and mortality are concerning, especially given the persistent racial and socioeconomic disparities in outcomes.
In addition to being the most common uterine malignancy, endometrial cancer is at the forefront of precision oncology in gynecology. The traditional classification systems based on histology and hormone dependence are now being augmented by molecular subtyping that better informs prognosis and treatment. As diagnostic tools, genetic testing, and therapeutic strategies advance, the management of endometrial cancer is becoming increasingly personalized.
Here are five things to know about endometrial cancer:
1. Endometrial cancer is one of the few cancers with increasing mortality.
Endometrial cancer accounts for the majority of uterine cancers in the United States, with an overall lifetime risk for women of about 1 in 40. Since the mid-2000s, incidence rates have risen steadily, by > 1% per year, reflecting both lifestyle and environmental factors. Importantly, the disease tends to be diagnosed at an early stage due to the presence of warning signs like postmenopausal bleeding, which contributes to relatively favorable survival outcomes when caught early.
However, mortality trends continue to evolve. From 1999 to 2013, death rates from endometrial cancer in the US declined slightly, but since 2013, they have increased sharply — by > 8% annually — according to recent data. This upward trend in mortality disproportionately affects non-Hispanic Black women, who experience the highest mortality rate (4.7 per 100,000) among all racial and ethnic groups. This disparity is likely caused by a complex interplay of factors, including delays in diagnosis, more aggressive tumor biology, and inequities in access to care. Addressing these disparities remains a key priority in improving outcomes.
2. Risk factors go beyond hormones and age.
Risk factors for endometrial cancer include prolonged exposure to unopposed estrogen, which can result from estrogen-only hormone replacement therapy, higher BMI, and early menarche or late menopause. Nulliparity (having never been pregnant) and older age also increase risk, as does tamoxifen use — a medication commonly prescribed for breast cancer prevention. These factors cumulatively increase endometrial proliferation and the potential for atypical cellular changes. Endometrial hyperplasia, a known precursor to cancer, is often linked to these hormonal imbalances and may require surveillance or treatment.
Beyond estrogen’s influence, a growing body of research is uncovering additional risk contributors. Women with polycystic ovary syndrome (PCOS), metabolic syndrome, or diabetes face elevated risk of developing endometrial cancer. Genetic syndromes, particularly Lynch and Cowden syndromes, are associated with significantly increased lifetime risks of endometrial cancer. Environmental exposures, such as the use of hair relaxers, are being investigated as emerging risk factors. Additionally, race remains a risk marker, with Black women not only experiencing higher mortality but also more aggressive subtypes of the disease. These complex, overlapping risks highlight the importance of individualized risk assessment and early intervention strategies.
3. Postmenopausal bleeding is the hallmark symptom — but not the only one.
In endometrial cancer, the majority of cases are diagnosed at an early stage, largely because of the hallmark symptom of postmenopausal bleeding. In addition to bleeding, patients may present with vaginal discharge, pyometra, and even pain and abdominal distension in advanced disease. Any bleeding in a postmenopausal woman should prompt evaluation, as it may signal endometrial hyperplasia or carcinoma. In premenopausal women, irregular or heavy menstrual bleeding may raise suspicion, particularly when accompanied by risk factors such as PCOS.
The diagnostic workup for suspected endometrial cancer in women, particularly those presenting with postmenopausal bleeding, begins with a focused clinical assessment and frequently includes transvaginal ultrasound (TVUS) to evaluate the endometrium. While TVUS can aid in identifying structural abnormalities or suggest malignancy, endometrial sampling is warranted in all postmenopausal women with abnormal bleeding, regardless of endometrial thickness. Office-based biopsy is the preferred initial approach due to its convenience and diagnostic yield; however, if the sample is nondiagnostic or technically difficult to obtain, hysteroscopy with directed biopsy or dilation and curettage should be pursued.
4. Classification systems are evolving to include molecular subtypes.
Historically, endometrial cancers were classified using the World Health Organization system based on histology and by hormone dependence: Type 1 (estrogen-dependent, typically endometrioid and low grade) and Type 2 (non-estrogen dependent, often serous and high grade). Type 1 cancers tend to have a better prognosis and slower progression, while Type 2 cancers are more aggressive and require intensive treatment. While helpful, this binary classification does not fully capture the biological diversity or treatment responsiveness of the disease.
The field is now moving toward molecular classification, which offers a more nuanced understanding. The four main molecular subtypes are polymerase epsilon (POLE)-mutant, mismatch repair (MMR)-deficient, p53-abnormal, and no specific molecular profile (NSMP). These groups differ in prognosis and therapeutic implications. POLE-mutant tumors with extremely high mutational burdens generally have excellent outcomes and may not require aggressive adjuvant therapy. In contrast, p53-abnormal tumors are associated with chromosomal instability, TP53 mutations, and poor outcomes, necessitating more aggressive multimodal treatment. MMR-deficient tumors are particularly responsive to immunotherapy. These molecular distinctions are changing how clinicians approach risk stratification and management in patients with endometrial cancer.
5. Treatment is increasingly personalized — and immunotherapy is expanding.
The cornerstone of treatment for early-stage endometrial cancer is surgical: total hysterectomy with bilateral salpingo-oophorectomy, often with sentinel node mapping or lymphadenectomy. Adjuvant therapy depends on factors such as stage, grade, histology, and molecular subtype. Fertility-sparing management with progestin therapy is an option for highly selected patients with early-stage, low-grade tumors. Clinical guidelines recommend that fertility desires be addressed prior to initiating treatment, as standard surgical management typically results in loss of reproductive capacity.
For advanced or recurrent disease, treatment becomes more complex and increasingly individualized. Chemotherapy, often with carboplatin and paclitaxel, is standard for stage III/IV and recurrent disease. Molecular findings now guide additional therapy: For instance, MMR-deficient tumors may respond to checkpoint inhibitors. As targeted agents and combination regimens continue to emerge, treatment of endometrial cancer is increasingly focused on precision medicine.
Markman is professor of medical oncology and therapeutics research and President of Medicine & Science at City of Hope in Atlanta and Chicago. He has disclosed relevant financial relationships with AstraZeneca, GSK and Myriad.
A version of this article first appeared on Medscape.com.
Earlier Vaccinations Helped Limit Marine Adenovirus Outbreak
During an adenovirus (AdV) outbreak among recruits and staff at the Marine Corps Recruit Depot (MCRD) in San Diego, an investigation revealed that the earlier individuals at the site were vaccinated, the better the outcome. The clinical team found that accelerating the vaccination schedule could help prevent further outbreaks, medical separations, and training disruption.
From July 1, 2024, through September 23, 2024, a total of 212 trainees and staff developed AdV and 28 were hospitalized. Nine patients were hospitalized with AdV pneumonia within a 2-week period; 3 were admitted to the intensive care unit. Outpatient acute respiratory disease (ARD) cases also increased, with recruits accounting for nearly 97% of the AdV outbreak cases.
AdV is a frequent cause of illness among military recruits. Research has found that up to 80% of cases of febrile ARD in recruits are due to AdV, and 20% result in hospitalization.
The military developed and implemented a live, oral vaccine against AdV serotypes 4 and 7 (most common in recruits) starting in the 1970s, reducing febrile respiratory illness in recruit training sites by 50% and AdV infection by > 90%. However, the manufacturer halted production of the vaccine in 1995. By 1999, vaccine supply was depleted, and ARD cases rose. A replacement vaccine introduced in 2011 proved 99% effective, leading to a dramatic 100-fold decline in AdV disease among military trainees.
While the vaccine is effective, outbreaks are still possible among closely congregating groups like military trainees. AdV pneumonia cases spiked as the virus spread through the training companies and into new companies when they arrived at the MCRD in early July 2024. Most new infections were in recruits who had missed the AdV vaccination day.
Early symptoms of AdV may be very mild, and some recruits were likely already symptomatic when vaccinated. Aggressive environmental cleaning, separation of sick and well recruits, masking, and other nonpharmaceutical interventions did not slow the spread.
The preventive medicine and public health teams noted that AdV vaccination was being administered 11 days postarrival to allow time for pregnancy testing and assessment of vaccine titers. US Department of Defense regulations do not dictate precise vaccination schedules, and implementation varies among military training sites.
After reviewing other training sites’ vaccine timing schedules (most required vaccination by day 6 postarrival) and determining the time required for immunity, the medical teams at MCRD recommended shifting AdV vaccine administration, along with other standard vaccines, from day 11 to day 1 postarrival. Two weeks after the schedule change, overall incidence began declining rapidly.
Nearly 75% of patients had coinfections with other respiratory pathogens, most notably seasonal coronaviruses, COVID-19, and rhinovirus/enterovirus, suggesting that infection with AdV may increase susceptibility to other viruses, a finding that has not been identified in previous AdV outbreaks. Newly increased testing sensitivity associated with multiplex respiratory pathogen PCR availability may have been a factor in coinfection identification during this outbreak.
AdV is a significant medical threat to military recruits. Early vaccination, the investigators advise, should remain “a central tenet for prevention and control of communicable diseases in these high-risk, congregate settings.”
Novel Peptides Expressed in HIV Could Drive Treatment
Genetic sequencing of peptides in rebound virus in individuals with HIV who had analytic treatment interruptions (ATIs) confirmed the peptides’ expression in HIV-1 infection, according to data presented at the International AIDS Society Conference on HIV Science.
Previous research has shown that HIV-specific CD8 T-cell responses directed against five genetically conserved HIV-1 protein regions (Gag, Pol, Vif, Vpr, and Env) are associated with viral control, Josefina Marín-Rojas, PhD, Faculty of Medicine and Health, University of Sydney, and colleagues wrote in their abstract.
However, data on whether these peptides are expressed in rebound virus among individuals with HIV who experienced ATI are limited, they wrote.
The researchers applied an immunoinformatics analysis pipeline (IMAP) to select 182 peptides (IMAP-peptides) from structurally important and mutation-intolerant regions of HIV-1 proteins, senior author Sarah Palmer, PhD, co-director of the Centre for Virus Research at the Westmead Institute for Medical Research and professor in the Faculty of Medicine and Health at the University of Sydney, said in an interview.
“Our studies indicate if the immune system targets these structurally important and mutation-intolerant regions of HIV-1 proteins, this can contribute to virological control in the absence of HIV-1 therapy,” she explained.
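The core idea behind a pipeline like IMAP, selecting peptides from mutation-intolerant regions, can be illustrated with a toy conservation filter over aligned sequences. The alignment, window length, and variability threshold below are illustrative assumptions, not the study's actual method or parameters.

```python
# Toy sketch of conservation-based peptide selection: scan an alignment
# for k-mer windows in which every position is mutation-intolerant
# (low variability across sequences). Parameters are hypothetical.
from collections import Counter

def column_variability(column: str) -> float:
    """Fraction of sequences differing from the most common residue."""
    counts = Counter(column)
    most_common = counts.most_common(1)[0][1]
    return 1.0 - most_common / len(column)

def conserved_windows(aligned, k=9, max_var=0.1):
    """Start indices of k-mer windows where every position has
    variability <= max_var across the alignment."""
    length = len(aligned[0])
    columns = ["".join(seq[i] for seq in aligned) for i in range(length)]
    var = [column_variability(c) for c in columns]
    return [i for i in range(length - k + 1)
            if all(v <= max_var for v in var[i:i + k])]

# Tiny made-up alignment: positions 0-8 identical, position 9 varies.
aln = ["MGARASVLSG", "MGARASVLSA", "MGARASVLSG"]
print(conserved_windows(aln, k=9, max_var=0.0))  # → [0]
```

A real pipeline would layer structural-importance scores and HLA-binding predictions on top of a conservation filter like this; the sketch shows only the mutation-intolerance step.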
The researchers reviewed data from the PULSE clinical trial, which included 68 men who have sex with men living with HIV in Australia. The men underwent three consecutive ATIs. A total of seven participants transiently controlled HIV rebound during the third ATI. The researchers examined whether the IMAP peptides were present in the HIV-1 RNA sequences of the rebound virus in four noncontrollers (patients who had viral rebound in all three ATIs) and five of the seven transient controllers who showed viral control during the third ATI.
The technique of near full-length HIV-1 RNA sequencing of rebound virus from three noncontrollers and two transient controllers identified the Gag, Pol, Vif, Vpr, and Env IMAP-peptides in 52%-100% of the viral sequences obtained from these participants across three ATI timepoints.
“We assumed that cells from people living with HIV that experience virological control after treatment interruption would have the immune response to our IMAP-peptides that we observed; however, we are amazed and encouraged by the level and extent of this immune response,” Palmer told this news organization.
The researchers also compared CD8 T-cell response between the IMAP peptides and a control peptide pool without the IMAP peptides.
The CD8 T-cells from three transient controllers had a 15- to 53-fold higher effector response to the IMAP-peptides than the CD8 T-cells from two noncontrollers, the researchers wrote in their abstract. The relative response to the IMAP-peptides in noncontrollers was 20 times lower than that to the control peptides, but the IMAP-peptide response in the transient controllers group was similar to that in the control group, the authors noted.
The results highlight the potential of IMAP in developing treatment strategies. Although the results are too preliminary to impact clinical practice at this time, the findings from the current study could lead to the development of an mRNA vaccine to clear HIV-infected cells from people living with HIV, Palmer told this news organization.
“Our next steps include developing and testing mRNA vaccine constructs that contain our IMAP-peptides to assess the immune response of cells from people living with HIV to these vaccines,” Palmer said. “From there we will conduct studies of the most promising mRNA vaccine constructs in a humanized mouse model,” she said.
Data Enhance Understanding of Immunity
The current study may provide information that can significantly impact understanding of the immune responses to HIV, David J. Cennimo, MD, associate professor of medicine and pediatrics in the Division of Infectious Disease at Rutgers New Jersey Medical School, Newark, New Jersey, said in an interview.
“The investigators looked at highly conserved regions of multiple HIV proteins,” said Cennimo, who was not involved in the study. “Conserved regions and antibody responses to them may play a role in controlling HIV viral replication and rebound,” Cennimo told this news organization. “The investigators showed these regions were present in rebounding viremia, and individuals that exhibited greater immune recognition of these regions suppressed rebound viremia longer, and perhaps targeting these regions could impact HIV prevention or cure strategies,” he said.
Secondarily, the study showed the success of the novel technique (IMAP) to identify conserved peptides, said Cennimo. The technique could potentially be applied to other viruses that mutate to escape host response, he said. The study was funded by the US National Institutes of Health, the Foundation for AIDS Research, the Australian National Health and Medical Research Council, and Sandra and David Ansley. The researchers and Cennimo disclosed no financial conflicts of interest.
A version of this article first appeared on Medscape.com.
Alarming Rise in Early-Onset GI Cancers Calls for Early Screening, Lifestyle Change
Early-onset gastrointestinal (GI) cancers are rising at an alarming rate, underscoring the need for earlier screening and lifestyle change, said the authors of a JAMA review.
In the US, early-onset GI cancers are increasing faster than any other type of early-onset cancer, including breast cancer. The trend is not limited to colorectal cancer (CRC). Gastric, pancreatic, and esophageal cancers, as well as many biliary tract and appendix cancers, are also on the rise in young adults, Kimmie Ng, MD, MPH, and Thejus Jayakrishnan, MD, both with Dana-Farber Cancer Institute, Boston, noted in their article.
The increase in early-onset GI cancers follows a “birth cohort effect,” with generational variation in risk, suggesting a potential association with changes in environmental exposures, Ng explained in an accompanying JAMA podcast.
All these GI cancers link strongly to multiple modifiable risk factors, and it is a “top area of investigation to determine exactly what environmental exposures are at play,” Ng added.
For many of these GI cancers, obesity has been the “leading hypothesis” given that rising rates seem to parallel the increase in incidence of these early-onset GI cancers, Ng explained.
“But we also have evidence, particularly strong for colorectal cancer, that dietary patterns, such as consuming a Western diet, as well as sedentary behavior and lifestyles seem to be associated with a significantly higher risk of developing these cancers at an age under 50,” Ng said.
Rising Incidence
Globally, among early-onset GI cancers reported in 2022, CRC was the most common (54%), followed by gastric cancer (24%), esophageal cancer (13%), and pancreatic cancer (9%).
In the US in 2022, 20,805 individuals were diagnosed with early-onset CRC, 2689 with early-onset gastric cancer, 2657 with early-onset pancreatic cancer, and 875 with early-onset esophageal cancer.
Since the mid-1990s, CRC among adults of all ages in the US has declined by 1.3%-4.2% annually, but early-onset CRC has increased by roughly 2% per year in both men and women and currently makes up about 14% of all CRC cases.
Early-onset pancreatic cancer and esophageal cancer each currently make up about 5% of all cases of these cancers in the US.
Between 2010 and 2019, the number of newly diagnosed cases of early-onset GI cancers rose by about 15%, with individuals of Black, Hispanic, or Indigenous ancestry, and women, disproportionately affected, Ng and coauthors noted in a related review published in the British Journal of Surgery.
Modifiable and Nonmodifiable Risk Factors
Along with obesity and poor diet, other modifiable risk factors for early-onset GI cancers include sedentary lifestyle, cigarette smoking, and alcohol consumption.
Nonmodifiable risk factors include family history, hereditary cancer syndromes such as Lynch syndrome, and inflammatory bowel disease.
Roughly 15%-30% of early-onset GI cancers have pathogenic germline variants in genes such as DNA mismatch repair genes and BRCA1/2.
All individuals with early-onset GI cancers should undergo germline and somatic genetic testing to guide treatment, screen for other cancers (eg, endometrial cancer in Lynch syndrome), and assess familial risk, Ng and Jayakrishnan advised.
Treatment Challenges
Treatment for early-onset GI cancers is generally similar to that for later-onset GI cancers, and prognosis for patients with early-onset disease is “similar to or worse” than that for patients with later-onset disease, highlighting the need for improved methods of prevention and early detection, the authors said.
Ng noted that younger cancer patients often face more challenges after diagnosis than older patients and benefit from multidisciplinary care, including referral for fertility counseling and preservation if appropriate, and psychosocial support.
“It is very difficult and challenging to receive a cancer diagnosis no matter what age you are, but when a person is diagnosed in their 20s, 30s, or 40s, there are unique challenges,” Ng said.
Studies have documented “much higher levels of psychosocial distress, depression and anxiety” in early-onset cancer patients, “and they also often experience more financial toxicity, disruptions in their education as well as their career and there may be fertility concerns,” Ng added.
Diagnostic Delays and Screening
Currently, screening is not recommended for most early-onset GI cancers — with the exception of CRC, with screening recommended for average-risk adults in the US starting at age 45.
Yet, despite this recommendation, fewer than 1 in 5 (19.7%) US adults aged 45-49 years were screened in 2021, indicating a significant gap in early detection efforts.
High-risk individuals, such as those with Lynch syndrome, a first-degree relative with CRC, or advanced colorectal adenoma, should begin CRC screening earlier, at an age determined by the specific risk factor.
“Studies have shown significant delays in diagnosis among younger patients. It’s important that prompt diagnosis happens so that these patients do not end up being diagnosed with advanced or metastatic stages of cancer, as they often are,” Ng said.
“Screening adherence is absolutely critical,” co-author Jayakrishnan added in a news release.
“We have strong evidence that colorectal cancer screening saves lives by reducing both the number of people who develop colorectal cancer and the number of people who die from it. Each missed screening is a lost opportunity to detect cancer early when it is more treatable, or to prevent cancer altogether by identifying and removing precancerous polyps,” Jayakrishnan said. This research had no funding. Ng reported receipt of nonfinancial support from Pharmavite, institutional grants from Janssen, and personal fees from Bayer, Seagen, GlaxoSmithKline, Pfizer, CytomX, Jazz Pharmaceuticals, Revolution Medicines, Redesign Health, AbbVie, Etiome, and CRICO. Ng is an associate editor of JAMA but was not involved in any of the decisions regarding review of the manuscript or its acceptance. Jayakrishnan had no disclosures.
A version of this article appeared on Medscape.com.
Early-onset gastrointestinal (GI) cancers, those diagnosed in adults younger than 50, are on the rise, said the authors of a JAMA review.
In the US, early-onset GI cancers are increasing faster than any other type of early-onset cancer, including breast cancer. The trend is not limited to colorectal cancer (CRC): gastric, pancreatic, and esophageal cancers, as well as many biliary tract and appendix cancers, are also on the rise in young adults, Kimmie Ng, MD, MPH, and Thejus Jayakrishnan, MD, both with Dana-Farber Cancer Institute, Boston, noted in their article.
The increase in early-onset GI cancers follows a “birth cohort effect,” with generational variation in risk, suggesting a potential association with changes in environmental exposures, Ng explained in an accompanying JAMA podcast.
All these GI cancers link strongly to multiple modifiable risk factors, and it is a “top area of investigation to determine exactly what environmental exposures are at play,” Ng added.
For many of these GI cancers, obesity has been the “leading hypothesis,” given that rising obesity rates seem to parallel the increase in incidence of early-onset GI cancers, Ng explained.
“But we also have evidence, particularly strong for colorectal cancer, that dietary patterns, such as consuming a Western diet, as well as sedentary behavior and lifestyles seem to be associated with a significantly higher risk of developing these cancers at an age under 50,” Ng said.
Rising Incidence
Globally, among early-onset GI cancers reported in 2022, CRC was the most common (54%), followed by gastric cancer (24%), esophageal cancer (13%), and pancreatic cancer (9%).
In the US in 2022, 20,805 individuals were diagnosed with early-onset CRC, 2689 with early-onset gastric cancer, 2657 with early-onset pancreatic cancer, and 875 with early-onset esophageal cancer.
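Taken together, the 2022 US counts above imply a case mix dominated by CRC. A quick back-of-the-envelope calculation (using only the four counts reported in the article, no new data) makes the shares explicit:

```python
# Share of each early-onset GI cancer among US diagnoses in 2022,
# using the counts reported in the JAMA review.
cases = {
    "colorectal": 20805,
    "gastric": 2689,
    "pancreatic": 2657,
    "esophageal": 875,
}

total = sum(cases.values())  # 27,026 diagnoses across the four sites

shares = {site: round(100 * n / total, 1) for site, n in cases.items()}
print(shares)
# → {'colorectal': 77.0, 'gastric': 9.9, 'pancreatic': 9.8, 'esophageal': 3.2}
```

Note that these percentages cover only the four sites listed; with biliary tract and appendix cancers included, each share would be slightly smaller. The US distribution also skews more heavily toward CRC than the global figures (54%) cited above.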
Since the mid-1990s, CRC incidence among US adults of all ages has declined by 1.3%-4.2% annually, but early-onset CRC has increased by roughly 2% per year in both men and women and currently makes up about 14% of all CRC cases.
Early-onset pancreatic cancer and esophageal cancer each currently make up about 5% of all cases of these cancers in the US.
Between 2010 and 2019, the number of newly diagnosed cases of early-onset GI cancers rose by about 15%, with women and individuals of Black, Hispanic, or Indigenous ancestry disproportionately affected, Ng and coauthors noted in a related review published in the British Journal of Surgery.
Modifiable and Nonmodifiable Risk Factors
Along with obesity and poor diet, other modifiable risk factors for early-onset GI cancers include sedentary lifestyle, cigarette smoking, and alcohol consumption.
Nonmodifiable risk factors include family history, hereditary cancer syndromes such as Lynch syndrome, and inflammatory bowel disease.
Roughly 15%-30% of early-onset GI cancers have pathogenic germline variants in genes such as DNA mismatch repair genes and BRCA1/2.
All individuals with early-onset GI cancers should undergo germline and somatic genetic testing to guide treatment, screen for other cancers (eg, endometrial cancer in Lynch syndrome), and assess familial risk, Ng and Jayakrishnan advised.
Treatment Challenges
Treatment for early-onset GI cancers is generally similar to that for later-onset GI cancers, and prognosis for patients with early-onset disease is “similar to or worse” than that for patients with later-onset disease, highlighting the need for improved methods of prevention and early detection, the authors said.
Ng noted that younger cancer patients often face more challenges after diagnosis than older patients and benefit from multidisciplinary care, including referral for fertility counseling and preservation if appropriate, and psychosocial support.
“It is very difficult and challenging to receive a cancer diagnosis no matter what age you are, but when a person is diagnosed in their 20s, 30s, or 40s, there are unique challenges,” Ng said.
Studies have documented “much higher levels of psychosocial distress, depression and anxiety” in early-onset cancer patients, “and they also often experience more financial toxicity, disruptions in their education as well as their career and there may be fertility concerns,” Ng added.
Diagnostic Delays and Screening
Currently, screening is not recommended for most early-onset GI cancers; the exception is CRC, for which screening is recommended for average-risk adults in the US starting at age 45.
Yet, despite this recommendation, fewer than 1 in 5 (19.7%) US adults aged 45-49 years were screened in 2021, indicating a significant gap in early detection efforts.
High-risk individuals, such as those with Lynch syndrome, a first-degree relative with CRC, or advanced colorectal adenoma, should begin CRC screening earlier, at an age determined by the specific risk factor.
“Studies have shown significant delays in diagnosis among younger patients. It’s important that prompt diagnosis happens so that these patients do not end up being diagnosed with advanced or metastatic stages of cancer, as they often are,” Ng said.
“Screening adherence is absolutely critical,” co-author Jayakrishnan added in a news release.
“We have strong evidence that colorectal cancer screening saves lives by reducing both the number of people who develop colorectal cancer and the number of people who die from it. Each missed screening is a lost opportunity to detect cancer early when it is more treatable, or to prevent cancer altogether by identifying and removing precancerous polyps,” Jayakrishnan said.

This research had no funding. Ng reported receipt of nonfinancial support from Pharmavite, institutional grants from Janssen, and personal fees from Bayer, Seagen, GlaxoSmithKline, Pfizer, CytomX, Jazz Pharmaceuticals, Revolution Medicines, Redesign Health, AbbVie, Etiome, and CRICO. Ng is an associate editor of JAMA but was not involved in any of the decisions regarding review of the manuscript or its acceptance. Jayakrishnan had no disclosures.
A version of this article appeared on Medscape.com.
Sterile Water Bottles Deemed Unnecessary for Endoscopy
Like diners saving on drinks, endoscopy units could trim costs by reaching for tap water rather than sterile bottled water, a recent review suggests.
“No direct evidence supports the recommendation and widespread use of sterile water during gastrointestinal endoscopy procedures,” lead author Deepak Agrawal, MD, chief of gastroenterology & hepatology at the Dell Medical School, University of Texas at Austin, and colleagues wrote in Gastro Hep Advances. “Guidelines recommending sterile water during endoscopy are based on limited evidence and mostly expert opinions.”
After reviewing the literature back to 1975, Dr. Agrawal and colleagues considered the use of sterile water in endoscopy via three frameworks: medical evidence and guidelines, environmental and broader health effects, and financial costs.
Only two studies, both from the 1990s, directly compared sterile and tap water use in endoscopy. Neither showed an increased risk of infection from tap water. In fact, some cultures from allegedly sterile water bottles grew pathogenic bacteria, while no patient complications were reported in either study.
“The recommendations for sterile water contradict observations in other medical care scenarios, for example, for the irrigation of open wounds,” Dr. Agrawal and colleagues noted. “Similarly, there is no benefit in using sterile water for enteral feeds in immunosuppressed patients, and tap water enemas are routinely acceptable for colon cleansing before sigmoidoscopies in all patients, irrespective of immune status.”
Current guidelines, including the 2021 US multisociety guideline on reprocessing flexible GI endoscopes and accessories, recommend sterile water for procedures involving mucosal penetration but acknowledge low-quality supporting evidence. These recommendations are based on outdated studies, some unrelated to GI endoscopy, Dr. Agrawal and colleagues pointed out, and rely heavily on cross-referenced opinion statements rather than clinical data.
They went on to suggest a concerning possibility: all those plastic bottles may actually cause more health problems than prevent them. The review estimates that the production and transportation of sterile water bottles contributes over 6,000 metric tons of emissions per year from US endoscopy units alone. What’s more, as discarded bottles break down, they release greenhouse gases and microplastics, the latter of which have been linked to cardiovascular disease, inflammatory bowel disease, and endocrine disruption.
Dr. Agrawal and colleagues also underscored the financial toxicity of sterile water bottles. Considering a 1-liter bottle of sterile water costs $3-10, an endoscopy unit performing 30 procedures per day spends approximately $1,000-3,000 per month on bottled water alone. Scaled nationally, the routine use of sterile water costs tens of millions of dollars each year, not counting indirect expenses associated with stocking and waste disposal.
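The monthly figure is straightforward to reproduce with back-of-the-envelope arithmetic. In the sketch below, the bottle price and procedures-per-day come from the review, while `bottles_per_procedure` and the number of operating days are illustrative assumptions, not numbers the authors report:

```python
# Rough monthly cost of sterile water bottles for one endoscopy unit.
# Price ($3-10/bottle) and volume (30 procedures/day) are from the review;
# bottles_per_procedure and days_per_month are illustrative assumptions.
def monthly_cost(price_per_bottle, procedures_per_day=30,
                 bottles_per_procedure=1.0, days_per_month=22):
    return (price_per_bottle * procedures_per_day
            * bottles_per_procedure * days_per_month)

low = monthly_cost(3, bottles_per_procedure=0.5)   # one bottle spans two cases
high = monthly_cost(10, bottles_per_procedure=1.0)  # one bottle per case
print(f"${low:,.0f} to ${high:,.0f} per month")     # $990 to $6,600
```

Under these assumptions, the low end lands near the article's roughly $1,000/month floor; the exact figure for any given unit depends on local bottle usage and procedure volume.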
Considering the dubious clinical upside against the apparent environmental and financial downsides, Dr. Agrawal and colleagues urged endoscopy units to rethink routine sterile water use.
They proposed a pragmatic model: start the day with a new sterile or reusable bottle, refill with tap water for subsequent cases, and recycle the bottle at day’s end. Institutions should ensure their tap water meets safety standards, they added, such as those outlined in the Joint Commission’s 2022 R3 Report on standards for water management.
Dr. Agrawal and colleagues also called on GI societies to revise existing guidance to reflect today’s clinical and environmental realities. Until strong evidence supports the need for sterile water, they wrote, the smarter, safer, and more sustainable option may be simply turning on the tap.
The investigators disclosed relationships with Guardant, Exact Sciences, Freenome, and others.
In an editorial accompanying the study and comments to GI & Hepatology News, Dr. Seth A. Gross of NYU Langone Health urged gastroenterologists to reconsider the use of sterile water in endoscopy.
While the rationale for bottled water has centered on infection prevention, Gross argued that the evidence does not hold up, noting that this practice contradicts modern values around sustainability and evidence-based care.
The two relevant clinical studies comparing sterile versus tap water in endoscopy are almost 30 years old, he said, and neither detected an increased risk of infection with tap water, leading both to conclude that tap water is “safe and practical” for routine endoscopy.
Gross also pointed out the inconsistency of sterile water use in medical practice, noting that tap water is acceptable in procedures with higher infection risk than endoscopy.
“Lastly,” he added, “most people drink tap water and not sterile water on a daily basis without outbreaks of gastroenteritis from bacterial infections.”
Gross’s comments went beyond the data to emphasize the obvious but overlooked environmental impacts of sterile water bottles. He made several challenging suggestions for making medicine more ecofriendly, such as reducing travel to conferences, increasing the availability of telehealth, and choosing reusable devices over disposables.
But “what’s hiding in plain sight,” he said, “is our use of sterile water.”
While acknowledging that some patients, like those who are immunocompromised, might still warrant sterile water, Gross supported the review’s recommendation to use tap water instead. He called on GI societies and regulatory bodies to re-examine current policy and pursue updated guidance.
“Sometimes going back to the basics,” he concluded, “could be the most innovative strategy with tremendous impact.”
Seth A. Gross, MD, AGAF, is clinical chief in the Division of Gastroenterology & Hepatology at NYU Langone Health, and professor at the NYU Grossman School of Medicine, both in New York City. He reported no conflicts of interest.
FROM GASTRO HEP ADVANCES
Cirrhosis Mortality Prediction Boosted by Machine Learning
“This highly inclusive, representative, and globally derived model has been externally validated,” Jasmohan Bajaj, MD, AGAF, professor of medicine at Virginia Commonwealth University in Richmond, Virginia, told GI & Hepatology News. “This gives us a crystal ball. It helps hospital teams, transplant centers, gastroenterology and intensive care unit services triage and prioritize patients more effectively.”
The study supporting the model, which Bajaj said “could be used at this stage,” was published online in Gastroenterology. The model is available for downloading at https://silveys.shinyapps.io/app_cleared/.
CLEARED Cohort Analyzed
Wide variations across the world regarding available resources, outpatient services, reasons for admission, and etiologies of cirrhosis can influence patient outcomes, according to Bajaj and colleagues. Therefore, they sought to use ML approaches to improve prognostication for all countries.
They analyzed admission-day data from the prospective Chronic Liver Disease Evolution And Registry for Events and Decompensation (CLEARED) consortium, which includes inpatients with cirrhosis enrolled from six continents. The analysis compared ML approaches with logistical regression to predict inpatient mortality.
The researchers performed internal validation (75/25 split) and subdivision using World Bank income status: low/low-middle (L-LMIC), upper middle (UMIC), and high (HIC). They determined that the ML model with the best area under the curve (AUC) would be externally validated in a US veteran cirrhosis inpatient population.
The CLEARED cohort included 7239 cirrhosis inpatients (mean age, 56 years; 64% men; median MELD-Na, 25) from 115 centers globally; 22.5% of centers belonged to LMICs, 41% to UMICs, and 34% to HICs.
A total of 808 patients (11.1%) died in the hospital.
Random-Forest analysis showed the best AUC (0.815) with high calibration. This was significantly better than parametric logistic regression (AUC, 0.774) and LASSO (AUC, 0.787) models.
Random-Forest also was better than logistic regression regardless of country income level: HIC (AUC, 0.806), UMIC (AUC, 0.867), and L-LMICs (AUC, 0.768).
Of the top 15 important variables selected from Random-Forest, admission for acute kidney injury, hepatic encephalopathy, high MELD-Na, high white blood cell count, and not being in a high-income country were the variables most predictive of mortality.
In contrast, higher albumin, hemoglobin, diuretic use on admission, viral etiology, and being in a high-income country were most protective.
The Random-Forest model was validated in 28,670 veterans (mean age, 67 years; 96% men; median MELD-Na,15), with an inpatient mortality of 4% (1158 patients).
The final Random-Forest model, using 48 of the 67 original covariates, attained a strong AUC of 0.859. A refit version using only the top 15 variables achieved a comparable AUC of 0.851.
Clinical Relevance
“Cirrhosis and resultant organ failures remain a dynamic and multidisciplinary problem,” Bajaj noted. “Machine learning techniques are one part of multi-faceted management strategy that is required in this population.”
If patients fall into the high-risk category, he said, “careful consultation with patients, families, and clinical teams is needed before providing information, including where this model was derived from. The results of these discussions could be instructive regarding decisions for transfer, more aggressive monitoring/ICU transfer, palliative care or transplant assessments.”
Meena B. Bansal, MD, system chief, Division of Liver Diseases, Mount Sinai Health System in New York City, called the tool “very promising.” However, she told GI & Hepatology News, “it was validated on a VA [Veterans Affairs] cohort, which is a bit different than the cohort of patients seen at Mount Sinai. Therefore, validation in more academic tertiary care medical centers with high volume liver transplant would be helpful.”
Furthermore, said Bansal, who was not involved in the study, “they excluded those that receiving a liver transplant, and while only a small number, this is an important limitation.”
Nevertheless, she added, “Artificial intelligence has great potential in predictive risk models and will likely be a tool that assists for risk stratification, clinical management, and hopefully improved clinical outcomes.”
This study was partly supported by a VA Merit review to Bajaj and the National Center for Advancing Translational Sciences, National Institutes of Health. No conflicts of interest were reported by any author.
A version of this article appeared on Medscape.com.
“This highly inclusive, representative, and globally derived model has been externally validated,” Jasmohan Bajaj, MD, AGAF, professor of medicine at Virginia Commonwealth University in Richmond, Virginia, told GI & Hepatology News. “This gives us a crystal ball. It helps hospital teams, transplant centers, gastroenterology and intensive care unit services triage and prioritize patients more effectively.”
The study supporting the model, which Bajaj said “could be used at this stage,” was published online in Gastroenterology. The model is available for download at https://silveys.shinyapps.io/app_cleared/.
CLEARED Cohort Analyzed
Wide variations across the world regarding available resources, outpatient services, reasons for admission, and etiologies of cirrhosis can influence patient outcomes, according to Bajaj and colleagues. Therefore, they sought to use machine learning (ML) approaches to improve prognostication across all countries.
They analyzed admission-day data from the prospective Chronic Liver Disease Evolution And Registry for Events and Decompensation (CLEARED) consortium, which includes inpatients with cirrhosis enrolled from six continents. The analysis compared ML approaches with logistic regression to predict inpatient mortality.
The researchers performed internal validation (a 75/25 train/test split) and subdivided the cohort by World Bank income status: low/low-middle income (L-LMIC), upper-middle income (UMIC), and high income (HIC) countries. The ML model with the best area under the curve (AUC) would then be externally validated in a US veteran cirrhosis inpatient population.
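The AUC used to compare the models is the probability that a randomly chosen patient who died was assigned a higher risk score than a randomly chosen survivor. A minimal rank-based sketch of that metric (illustrative only, not the study's code):

```python
import numpy as np

def auc_score(y_true, y_score):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]   # scores for patients who died
    neg = y_score[y_true == 0]   # scores for survivors
    # Count pairwise comparisons where the positive outranks the negative;
    # ties count half, per the standard Mann-Whitney formulation
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Perfect separation gives 1.0; an uninformative scorer hovers near 0.5
print(auc_score([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

By this yardstick, the gap between 0.815 (random forest) and 0.774 (logistic regression) reflects a higher chance of correctly ranking a fatal case above a nonfatal one.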
The CLEARED cohort included 7239 cirrhosis inpatients (mean age, 56 years; 64% men; median MELD-Na, 25) from 115 centers globally; 22.5% of centers were in L-LMICs, 41% in UMICs, and 34% in HICs.
A total of 808 patients (11.1%) died in the hospital.
Random forest analysis showed the best AUC (0.815) with high calibration. This was significantly better than the parametric logistic regression (AUC, 0.774) and LASSO (AUC, 0.787) models.
Random forest also performed better than logistic regression regardless of country income level: HIC (AUC, 0.806), UMIC (AUC, 0.867), and L-LMIC (AUC, 0.768).
Of the top 15 variables selected by the random forest model, admission for acute kidney injury, hepatic encephalopathy, high MELD-Na, high white blood cell count, and not being in a high-income country were most predictive of mortality.
In contrast, higher albumin, hemoglobin, diuretic use on admission, viral etiology, and being in a high-income country were most protective.
The random forest model was validated in 28,670 veterans (mean age, 67 years; 96% men; median MELD-Na, 15), with an inpatient mortality of 4% (1158 patients).
The final random forest model, using 48 of the 67 original covariates, attained a strong AUC of 0.859. A refit version using only the top 15 variables achieved a comparable AUC of 0.851.
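The refit step, keeping only the model's most important variables before retraining, can be sketched as follows. The importance scores and covariate names here are hypothetical stand-ins for the study's 67 admission-day variables:

```python
import numpy as np

# Hypothetical importance scores for 67 admission-day covariates
# (random stand-ins; the study's actual importances are not reproduced here)
rng = np.random.default_rng(0)
importances = rng.random(67)
names = [f"covariate_{i}" for i in range(67)]

# Keep the 15 most important variables; a reduced model would then be
# refit on the corresponding columns of the design matrix, X[:, top_idx]
top_idx = np.argsort(importances)[::-1][:15]
top_features = [names[i] for i in top_idx]
```

Dropping from 48 covariates to 15 with little loss of AUC (0.859 vs 0.851) is what makes the simpler model attractive for bedside use.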
Clinical Relevance
“Cirrhosis and resultant organ failures remain a dynamic and multidisciplinary problem,” Bajaj noted. “Machine learning techniques are one part of a multifaceted management strategy that is required in this population.”
If patients fall into the high-risk category, he said, “careful consultation with patients, families, and clinical teams is needed before providing information, including where this model was derived from. The results of these discussions could be instructive regarding decisions for transfer, more aggressive monitoring/ICU transfer, palliative care or transplant assessments.”
Meena B. Bansal, MD, system chief, Division of Liver Diseases, Mount Sinai Health System in New York City, called the tool “very promising.” However, she told GI & Hepatology News, “it was validated on a VA [Veterans Affairs] cohort, which is a bit different than the cohort of patients seen at Mount Sinai. Therefore, validation in more academic tertiary care medical centers with high volume liver transplant would be helpful.”
Furthermore, said Bansal, who was not involved in the study, “they excluded those receiving a liver transplant, and while only a small number, this is an important limitation.”
Nevertheless, she added, “Artificial intelligence has great potential in predictive risk models and will likely be a tool that assists for risk stratification, clinical management, and hopefully improved clinical outcomes.”
This study was partly supported by a VA Merit review to Bajaj and the National Center for Advancing Translational Sciences, National Institutes of Health. No conflicts of interest were reported by any author.
A version of this article appeared on Medscape.com.
FROM GASTROENTEROLOGY
Colonoscopy Costs Rise When Private Equity Acquires GI Practices, but Quality Does Not
Price increases ranged from about 5% to about 7%.
In view of the growing trend toward such acquisitions, policymakers should monitor the impact of private equity (PE) investment in medical practices, according to researchers led by health economist Daniel R. Arnold, PhD, of the Department of Health Services, Policy & Practice in the School of Public Health at Brown University in Providence, Rhode Island. “In a previous study of ours, gastroenterology stood out as a particularly attractive specialty to private equity,” Arnold told GI & Hepatology News.
Published in JAMA Health Forum, the economic evaluation of more than 1.1 million patients and 1.3 million colonoscopies concluded that PE acquisitions of GI sites are difficult to justify.
The Study
This difference-in-differences event study and economic evaluation analyzed data from US GI practices acquired by PE firms from 2015 to 2021. Commercial insurance claims covering more than 50 million enrollees were used to calculate price, spending, utilization, and quality measures from 2012 to 2021, with all data analyzed from April to September 2024.
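A difference-in-differences design compares the pre-to-post change at acquired sites against the contemporaneous change at independent sites, netting out market-wide trends. A toy numeric sketch (the figures are illustrative, chosen to echo a roughly 4.5% effect, not the study's data):

```python
import numpy as np

# Mean log colonoscopy prices, before vs after the acquisition event
# (illustrative numbers, not the study's data)
pe_pre, pe_post = 6.000, 6.100        # PE-acquired sites
ctrl_pre, ctrl_post = 6.000, 6.055    # independent (control) sites

# DiD: change at treated sites minus change at control sites
did = (pe_post - pe_pre) - (ctrl_post - ctrl_pre)
pct = 100 * (np.exp(did) - 1)          # convert the log-point effect to a percentage
print(f"DiD estimate: {did:.3f} log points (~{pct:.1f}% price increase)")
```

In the study this contrast is estimated in an event-study regression with practice and time fixed effects; the four-mean version above only illustrates the core comparison.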
The main outcomes were price, spending per physician, number of colonoscopies per physician, number of unique patients per physician, and quality, as defined by polyp detection, incomplete colonoscopies, and four adverse event measures: cardiovascular, serious and nonserious GI events, and any other adverse events.
The mean age of patients was 47.1 years, and 47.8% were men. The sample included 718,851 colonoscopies conducted by 1494 physicians in 590,900 patients across 1240 PE-acquired practice sites and 637,990 control colonoscopies conducted by 2550 physicians in 527,380 patients across 2657 independent practice sites.
Among the findings:
- Colonoscopy prices at PE-acquired sites increased by 4.5% (95% CI, 2.5-6.6; P < .001) vs independent practices. That increase was much lower than that reported by Singh and colleagues.
- The estimated price increase was 6.7% (95% CI, 4.2-9.3; P < .001) when only colonoscopies at PE practices with market shares above the 75th percentile (24.4%) in 2021 were considered. Both increases were in line with other research, Arnold said.
- Colonoscopy spending per physician increased by 16.0% (95% CI, 8.4-24.0; P < .001), while the number of colonoscopies and the number of unique patients per physician increased by 12.1% (95% CI, 5.3-19.4; P < .001) and 11.3% (95% CI, 4.4-18.5; P < .001), respectively. These measures, however, were already increasing before PE acquisition.
- No statistically significant associations were detected for the six quality measures analyzed.
Could such cost-raising acquisitions potentially be blocked by concerned regulators?
“No. Generally the purchases are at prices below what would require notification to federal authorities,” Arnold said. “The Department of Justice/Federal Trade Commission hinted at being willing to look at serial acquisitions in their 2023 Merger Guidelines, but until that happens, these will likely continue to fly under the radar.”
Still, as evidence of PE-associated poorer quality outcomes as well as clinician burnout continues to emerge, Arnold added, “I would advise physicians who get buyout offers from PE to educate themselves on what could happen to patients and staff if they choose to sell.”
Offering an outsider’s perspective on the study, health economist Atul Gupta, PhD, an assistant professor of healthcare management in the Wharton School at the University of Pennsylvania in Philadelphia, called it an “excellent addition to the developing literature examining the effects of private equity ownership of healthcare providers.” Very few studies have examined the effects on prices and quality for the same set of deals and providers. “This is important because we want to be able to do an apples-to-apples comparison of the effects on both outcomes before judging PE ownership,” he told GI & Hepatology News.
In an accompanying editorial, primary care physician Jane M. Zhu, MD, an associate professor of medicine at Oregon Health & Science University in Portland, Oregon, who was not involved in the commercial insurance-based study, said one interpretation of the findings may be that PE acquisition focuses on reducing inefficiencies, improving access by expanding practice capacity, and increasing throughput. “Another interpretation may be that PE acquisition is focused on the strategic exploitation of market and pricing power. The latter may have less of an impact on clinical measures like quality of care, but potentially, both strategies could be at play.”
Since this analysis focused on the commercial population, understanding how patient demographics may change after PE acquisition is a future avenue for exploration. “For instance, a potential explanation for both the price and utilization shifts might be if payer mix shifted toward more commercially insured patients at the expense of Medicaid or Medicare patients,” she wrote.
Zhu added that the impact of PE on prices and spending, by now replicated across different settings and specialties, is far clearer than the effect of PE on access and quality. “The analysis by Arnold et al is a welcome addition to the literature, generating important questions for future study and transparent monitoring as investor-owners become increasingly influential in healthcare.”
Going forward, said Gupta, an open question is whether the harmful effects of PE ownership of practices are differentially worse than those of other corporate entities such as insurers and hospital systems.
“There are reasons to believe that PE could be worse in theory. For example, their short-term investment horizon may force them to take measures that others will not as well as avoid investing into capital improvements that have a long-run payoff,” he said. “Their uniquely high dependence on debt and unbundling of real estate can severely hurt financial solvency of providers.” But high-quality evidence is lacking to compare the effects from these two distinct forms of corporatization.
The trend away from individual private practice is a reality, Arnold said. “The administrative burden on solo docs is becoming too much and physicians just seem to want to treat patients and not deal with it. So the options at this point really become selling to a hospital system or private equity.”
This study was funded by a grant from the philanthropic foundation Arnold Ventures (no family relation to Daniel Arnold).
Arnold reported receiving grants from Arnold Ventures during the conduct of the study. Gupta had no competing interests to declare. Zhu reported receiving grants from the Agency for Healthcare Research and Quality during the submitted work and from the National Institutes of Health, National Institute for Health Care Management Foundation, and American Psychological Association, as well as personal fees from Cambia outside the submitted work.
A version of this article appeared on Medscape.com.
Tribal Health Officials Work To Fill Vaccination Gaps as Measles Outbreak Spreads
RAPID CITY, S.D. — Cassandra Palmier had been meaning to get her son the second and final dose of the measles vaccine. But car problems made it difficult to get to the doctor.
So she pounced on the opportunity to get him vaccinated after learning that a mobile clinic would be visiting her neighborhood.
“I was definitely concerned about the epidemic and the measles,” Palmier, a member of the Oglala Sioux Tribe, said at the June event. “I wanted to do my part.”
So did her son, Makaito Cuny.
“I’m not going to be scared,” the 5-year-old announced as he walked onto the bus containing the clinic and hopped into an exam chair.
Makaito sat still as a nurse gave him the shot in his arm. “I did it!” he said while smiling at his mother.
The vaccine clinic was hosted by the Great Plains Tribal Leaders’ Health Board, which serves tribes across Iowa, Nebraska, and the Dakotas. It’s one way Native American tribes and organizations are responding to concerns about low measles vaccination rates and patients’ difficulty accessing health care as the disease spreads across the country.
Meghan O’Connell, the board’s chief public health officer, said it is also working with tribes that want to host vaccine clinics.
Elsewhere, tribal health organizations have launched social media campaigns, are making sure health providers are vaccinated, and are reaching out to the parents of unvaccinated children.
This spring, Project ECHO at the University of New Mexico hosted an online video series about measles aimed at health care professionals and organizations that serve Native American communities. The presenters outlined the basics of measles diagnosis and treatment, discussed culturally relevant communication strategies, and shared how tribes are responding to the outbreak.
Participants also strategized about ways to improve vaccination rates, said Harry Brown, a physician and an epidemiologist for the United South and Eastern Tribes, a nonprofit that works with 33 tribes in the Atlantic Coast and Southeast regions.
“It’s a pretty hot topic right now in Indian Country and I think a lot of people are being proactive,” he said.
Measles can survive for up to two hours in the air in a space where an infected person has been, sickening up to 90% of people who aren’t vaccinated, according to the Centers for Disease Control and Prevention.
The U.S. has had 1,319 confirmed cases of measles this year as of July 23, according to the CDC. It’s the largest outbreak in the U.S. since 1992. Ninety-two percent of the 2025 cases involve unvaccinated patients or people with an unknown vaccination status. Three people had died in the U.S. and 165 had been hospitalized as of July 23.
O’Connell said data on Native Americans’ vaccination rates is imperfect but suggests that a lower percentage of them have received measles shots than the overall U.S. population.
The limited national data on measles vaccination rates for Native Americans is based on small surveys of people who self-identify as Native American. Some show that Native Americans have slightly lower measles vaccination rates, while others show significant gaps.
Data from some states, including South Dakota and Montana, shows that Native Americans are less likely than white children to be vaccinated on schedule.
The national measles vaccination rate is significantly lower for Native Americans who use the mostly rural Indian Health Service. About 76% of children 16 to 27 months old had gotten the first shot, according to data collected by the agency during recent patient visits at 156 clinics. That’s a 10-percentage-point drop from 10 years ago.
But the IHS data shows that its patients are at least as likely as other children to have received both recommended measles shots by the time they’re 17. O’Connell said it’s unclear if currently unvaccinated patients will continue the trend of eventually getting up to date on their shots or if they will remain unvaccinated.
The immunization rate is probably higher for older children since schools require students to get vaccinated unless they have an exemption, Brown said. He said it’s important that parents get their children vaccinated on time, when they’re young and more at risk of being hospitalized or dying from the disease.
Native Americans may have lower vaccination rates due to the challenges they face in accessing shots and other health care, O’Connell said. Those on rural reservations may be an hour or more from a clinic. Or, like Palmier, they may not have reliable transportation.
Another reason, O’Connell said, is that some Native Americans distrust the Indian Health Service, which is chronically underfunded and understaffed. If the only nearby health care facility is run by the agency, patients may delay or skip care.
O’Connell and Brown said vaccine skepticism and mistrust of the entire health care system are growing in Native American communities, as has occurred elsewhere nationwide.
“Prior to social media, I think our population was pretty trustful of childhood vaccination. And American Indians have a long history of being severely impacted by infectious disease,” he said.
European colonizers’ arrival in the late 1400s brought new diseases, including measles, that killed tens of millions of Indigenous people in North and South America by the early 1600s. Native Americans have also had high mortality rates in modern pandemics, including the 1918-20 Spanish flu and COVID-19.
The Great Plains Tribal Leaders’ Health Board reacted quickly when measles cases began showing up near its headquarters in South Dakota this year. Nebraska health officials announced in late May that a child had measles in a rural part of the state, close to the Pine Ridge Indian Reservation. Then, four people from the Rapid City area got sick later that month and into the middle of June.
“Our phones really rang off the hook” once that news came out, said Darren Crowe, a vice president at the board’s Oyate Health Center in Rapid City. He said parents wanted to know if their children were up to date on their measles vaccines.
Crowe said the health board ordered extra masks, created a measles command team that meets daily, and called parents when its online database showed their children needed a shot.
Brown praised that approach.
“It takes a concerted outreach effort that goes individual to individual,” he said, adding that his organization helped the Mississippi Band of Choctaw Indians and the Alabama-Coushatta Tribe of Texas with similar efforts.
Brown said reaching specific families can be a challenge in some low-income Native American communities, where many people’s phone numbers frequently change since they use temporary prepaid plans.
Once a health worker reaches a parent, Brown said, they should listen and ask questions before sharing the importance of the vaccine against measles, mumps, and rubella.
“Rather than trying to preach to somebody and beat them over the head with data or whatever to convince them that this is what they need to do, you start out by finding out where they are,” he said. “So, ‘Tell me about your experience with vaccination. Tell me what you know about vaccination.’”
Most people agree to immunize their children when presented with helpful information in a nonjudgmental way, Brown said.
KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.