Childhood Atopic Dermatitis Doesn’t Delay Puberty
TOPLINE:
Childhood atopic dermatitis, including hospital-diagnosed disease representing mainly severe cases, was not associated with delayed pubertal development in a nationwide Danish cohort.
METHODOLOGY:
- Investigators conducted a nationwide cohort study among 15,534 children in Denmark whose pubertal development was assessed every 6 months with a web-based questionnaire starting at the age of 11 years.
- The children were classified into three groups: No atopic dermatitis; self-reported doctor-diagnosed atopic dermatitis (maternal report of a doctor diagnosis at 6 months, 18 months, and/or 7 years of age); hospital-diagnosed atopic dermatitis (registry data showing it as the primary reason for hospital contact up to the age of 8 years), representing mainly severe cases.
- The main outcome was the age difference averaged across a range of pubertal milestones (attainment of Tanner stages; development of axillary hair, acne, and voice break; and occurrence of first ejaculation and menarche).
TAKEAWAY:
- Overall, 21.5% of the children had self-reported doctor-diagnosed atopic dermatitis and 0.7% had hospital-diagnosed atopic dermatitis.
- Relative to girls without atopic dermatitis, girls with self-reported doctor-diagnosed atopic dermatitis reached the milestones at the same age, with a mean difference of 0.0 months, and girls with hospital-diagnosed atopic dermatitis reached them a mean of 0.3 months earlier.
- Relative to boys without atopic dermatitis, boys with self-reported doctor-diagnosed atopic dermatitis reached the milestones a mean of 0.1 month later and boys with hospital-diagnosed atopic dermatitis reached them a mean of 0.3 months earlier.
- A more stringent definition of atopic dermatitis — persistent or recurrent atopic dermatitis at 7 years of age (assumed more likely to affect sleep and disrupt the skin barrier near the start of puberty) — was also not associated with delayed pubertal development.
IN PRACTICE:
“Previous studies on atopic dermatitis and puberty are limited, some suggest a link between atopic dermatitis and delayed puberty, akin to other chronic inflammatory diseases in childhood,” the authors wrote. “The results of the present study are reassuring for young patients with atopic dermatitis approaching puberty and reproductive health in adult life,” they concluded.
SOURCE:
The study was led by Camilla Lomholt Kjersgaard, MD, Aarhus University, Aarhus, Denmark, and was published online in JAAD International.
LIMITATIONS:
Limitations included a lack of information on treatment, the use of analyses that did not address missing data, and a possible misclassification of self-reported pubertal development.
DISCLOSURES:
The study was funded by the Danish Council for Independent Research; Aarhus University; and Fonden af Fam. Kjærsgaard, Sunds; and was cofunded by the European Union. The authors reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Digestive Disease Mortality Higher for US Indigenous Communities
Digestive disease mortality in the United States is disproportionately high among American Indian and Alaska Native communities, which experience the highest death rates and ongoing increases, according to a recent study.
Policymakers, healthcare providers, and communities need to respond with targeted interventions and collaborative efforts that address these inequities and advance digestive health equity, lead author Wafa A. Aldhaleei, MD, of Mayo Clinic, Rochester, Minnesota, and colleagues reported.
“Several studies have reported the epidemiological characteristics of certain digestive diseases such as pancreatitis, liver and biliary diseases, and inflammatory bowel disease,” the investigators wrote in Clinical Gastroenterology and Hepatology. “These studies provide insights into the US burden by sex and racial and ethnic disparities of various digestive diseases individually. However, little is known about racial disparities in the United States digestive diseases mortality burden.”
As part of the Global Burden of Disease Study, the investigators analyzed data from the Institute for Health Metrics and Evaluation Global Health Data Exchange, including age-standardized digestive disease mortality rates for five racial and ethnic groups (Black, White, American Indian and Alaska Native, Asian and Pacific Islander, and Latino) between 2000 and 2019, with further subgroups based on sex, state, and county. Joinpoint regression analysis was employed to determine overall temporal trends by demography.
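For illustration only, the minimal sketch below shows the core idea behind such trend estimates: fitting a log-linear model to annual age-standardized rates and converting the slope into an annual percent change. The years and rates are hypothetical placeholders rather than study data, and full joinpoint regression goes further by locating change points and averaging segment-specific estimates.

```python
# Minimal sketch (hypothetical data): annual percent change from a log-linear fit,
# the building block of joinpoint regression. Not the study's data or code.
import numpy as np

years = np.arange(2000, 2020)                 # hypothetical calendar years
rates = 60.0 * 1.009 ** (years - 2000)        # hypothetical deaths per 100,000

# Fit log(rate) = a + b*year; annual percent change (APC) = (exp(b) - 1) * 100.
b, a = np.polyfit(years, np.log(rates), 1)
apc = (np.exp(b) - 1.0) * 100.0
print(f"Estimated annual percent change: {apc:.2f}%")  # ~0.90% for this toy series
```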
Results showed striking mortality rate differences across racial and ethnic groups. In 2019, digestive disease mortality rates were highest among American Indian and Alaska Native individuals, reaching 86.2 per 100,000 — over twice the rate seen in White (35.5 per 100,000), Black (33.6 per 100,000), and Latino (33.6 per 100,000) populations, and more than five times higher than in Asian and Pacific Islander individuals (15.6 per 100,000). Over the study period, American Indian and Alaska Native individuals experienced a significant 0.87% average annual increase in mortality rates, while White individuals saw a smaller increase of 0.12% annually. In contrast, Latino, Black, and Asian and Pacific Islander individuals had declining average annual rates.
Geographic disparities in digestive disease mortality were significant, with West Virginia recording the highest state-level rate in 2019 at 44.8 deaths per 100,000, well above the national rate of 34.5 per 100,000. Certain regions with high concentrations of American Indian and Alaska Native populations, such as the Southwest Tribes service area (including Arizona and New Mexico) and the Plain Indians service area (spanning Montana, North Dakota, and South Dakota), reported mortality rates exceeding 70 per 100,000, more than double the national average. In Alaska, the American Indian and Alaska Native population’s mortality rate surged with annual increases of up to 3.53% during some periods.
Analyses also revealed some notable sex-based trends. Among American Indian and Alaska Native individuals, males experienced a mortality rate increase of 0.87% annually, reaching 93.5 per 100,000 by 2019, while females saw an even sharper rise at 1.11% per year, with a mortality rate of 79.6 per 100,000 in 2019. For White individuals, the average annual percentage increase was 0.12% for males, bringing their rate to 40.2 per 100,000, and 0.30% for females, with a rate of 31.0 per 100,000 in 2019.
“Our study reveals persistent racial, ethnic, and geographic disparities in digestive diseases mortality in the United States,” the investigators concluded. “Targeted interventions and further research are needed to address these disparities and promote digestive health equity. Collaboration among researchers, policymakers, healthcare providers, and communities is essential to achieve this goal.” This research was conducted as part of the Global Burden of Disease, Injuries, and Risk Factors Study, coordinated by the Institute for Health Metrics and Evaluation. The investigators disclosed no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Common Herbicide a Player in Neurodegeneration?
Exposure to the widely used herbicide glyphosate may contribute to neuroinflammation and Alzheimer’s disease–like pathology in mice, new research showed.
Researchers found that glyphosate exposure even at regulated levels was associated with increased neuroinflammation and accelerated Alzheimer’s disease–like pathology in mice — an effect that persisted 6 months after a recovery period when exposure was stopped.
“More research is needed to understand the consequences of glyphosate exposure to the brain in humans and to understand the appropriate dose of exposure to limit detrimental outcomes,” said co–senior author Ramon Velazquez, PhD, with Arizona State University, Tempe.
The study was published online in The Journal of Neuroinflammation.
Persistent Accumulation Within the Brain
Glyphosate is the most heavily applied herbicide in the United States, with roughly 300 million pounds used annually, largely in agricultural communities. It is also used for weed control in parks, residential areas, and personal gardens.
The Environmental Protection Agency (EPA) has determined that glyphosate poses no risks to human health when used as directed. But the World Health Organization’s International Agency for Research on Cancer disagrees, classifying the herbicide as “possibly carcinogenic to humans.”
In addition to the possible cancer risk, multiple reports have also suggested potential harmful effects of glyphosate exposure on the brain.
In earlier work, Velazquez and colleagues showed that glyphosate crosses the blood-brain barrier and infiltrates the brains of mice, contributing to neuroinflammation and other detrimental effects on brain function.
In their latest study, they examined the long-term effects of glyphosate exposure on neuroinflammation and Alzheimer’s disease–like pathology using a mouse model.
They dosed 4.5-month-old mice genetically predisposed to Alzheimer’s disease and non-transgenic control mice with either 0, 50, or 500 mg/kg of glyphosate daily for 13 weeks followed by a 6-month recovery period.
The high dose is similar to levels used in earlier research, and the low dose is close to the limit used to establish the current EPA acceptable dose in humans.
Glyphosate’s metabolite, aminomethylphosphonic acid, was detectable and persisted in mouse brain tissue even 6 months after exposure ceased, the researchers reported.
Additionally, there was a significant increase in soluble and insoluble fractions of amyloid beta (Aβ), Aβ42 plaque load and plaque size, and phosphorylated tau at threonine 181 and serine 396 in hippocampal and cortical brain tissue from glyphosate-exposed mice, “highlighting an exacerbation of hallmark Alzheimer’s disease–like proteinopathies,” they noted.
Glyphosate exposure was also associated with significant elevations in both pro- and anti-inflammatory cytokines and chemokines in brain tissue of transgenic and normal mice and in peripheral blood plasma of transgenic mice.
Glyphosate-exposed transgenic mice also showed heightened anxiety-like behaviors and reduced survival.
“These findings highlight that many chemicals we regularly encounter, previously considered safe, may pose potential health risks,” co–senior author Patrick Pirrotte, PhD, with the Translational Genomics Research Institute, Phoenix, Arizona, said in a statement.
“However, further research is needed to fully assess the public health impact and identify safer alternatives,” Pirrotte added.
Funding for the study was provided by the National Institute on Aging, the National Cancer Institute, and the Arizona State University (ASU) Biodesign Institute. The authors have declared no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF NEUROINFLAMMATION
Real-World Data Question Low-Dose Steroid Use in ANCA Vasculitis
TOPLINE:
Compared with a standard dosing regimen, a reduced-dose glucocorticoid regimen is associated with an increased risk for disease progression, relapse, death, or kidney failure in antineutrophil cytoplasmic antibody (ANCA)–associated vasculitis, particularly affecting patients receiving rituximab or those with elevated creatinine levels.
METHODOLOGY:
- The PEXIVAS trial demonstrated that a reduced-dose glucocorticoid regimen was noninferior to standard dosing in terms of death or end-stage kidney disease in ANCA-associated vasculitis. However, the trial did not include disease progression or relapse as a primary endpoint, and cyclophosphamide was the primary induction therapy.
- Researchers conducted this retrospective study across 19 hospitals (18 in France and one in Luxembourg) between January 2018 and November 2022 to compare the effectiveness of a reduced-dose glucocorticoid regimen, as used in the PEXIVAS trial, with a standard-dose regimen in patients with ANCA-associated vasculitis in the real-world setting.
- They included 234 patients aged > 15 years (51% men) with severe granulomatosis with polyangiitis (n = 141) or microscopic polyangiitis (n = 93) who received induction therapy with rituximab or cyclophosphamide; 126 and 108 patients received reduced-dose and standard-dose glucocorticoid regimens, respectively.
- Most patients (70%) had severe renal involvement.
- The primary composite outcome encompassed minor relapse, major relapse, disease progression before remission, end-stage kidney disease requiring dialysis for > 12 weeks or transplantation, and death within 12 months post-induction.
TAKEAWAY:
- The primary composite outcome occurred in a higher proportion of patients receiving reduced-dose glucocorticoid therapy than in those receiving standard-dose therapy (33.3% vs 18.5%; hazard ratio [HR], 2.20; 95% CI, 1.23-3.94).
- However, no significant association was found between reduced-dose glucocorticoids and the risk for death or end-stage kidney disease or the occurrence of serious infections.
- Among patients receiving reduced-dose glucocorticoids, serum creatinine levels > 300 μmol/L were associated with an increased risk for the primary composite outcome (adjusted HR, 3.02; 95% CI, 1.28-7.11).
- In the rituximab induction subgroup, the reduced-dose glucocorticoid regimen was associated with an increased risk for the primary composite outcome (adjusted HR, 2.36; 95% CI, 1.18-4.71) compared with the standard-dose regimen.
IN PRACTICE:
“Our data suggest increased vigilance when using the [reduced-dose glucocorticoid] regimen, especially in the two subgroups of patients at higher risk of failure, that is, those receiving [rituximab] as induction therapy and those with a baseline serum creatinine greater than 300 μmol/L,” the authors wrote.
SOURCE:
The study was led by Sophie Nagle, MD, National Referral Centre for Rare Autoimmune and Systemic Diseases, Department of Internal Medicine, Hôpital Cochin, Paris, France. It was published online on November 20, 2024, in Annals of the Rheumatic Diseases.
LIMITATIONS:
The retrospective nature of this study may have introduced inherent limitations and potential selection bias. The study lacked data on patient comorbidities, which could have influenced treatment choice and outcomes. Additionally, about a quarter of patients did not receive methylprednisolone pulses prior to oral glucocorticoids, unlike the PEXIVAS trial protocol. The group receiving standard-dose glucocorticoids showed heterogeneity in glucocorticoid regimens, and the minimum follow-up was only 6 months.
DISCLOSURES:
This study did not report any source of funding. The authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Human Milk Boosts Intestinal Growth, Immune Health of Fetal Organoids
Human milk promotes growth, differentiation, and immune regulation in fetal intestinal organoids to a greater degree than infant formula, according to a recent study. These findings suggest an important role for human milk in supporting intestinal health and may inform strategies for reducing the risk of necrotizing enterocolitis (NEC) in preterm infants, lead author Lauren Smith, MD, of Yale School of Medicine, New Haven, Connecticut, and colleagues reported.
“Compelling evidence has revealed that the largest risk factor for NEC apart from prematurity is formula feeding, while conversely, parental milk (PM) confers protection, with a 6- to 10-fold lower incidence of NEC among PM-fed infants compared to formula,” the investigators wrote in Gastro Hep Advances. “It is unknown whether this is due to the many known protective factors in PM or as a result of an injurious component present in formula or a combination of both.”
To learn more, the investigators studied organoids cultured in a three-dimensional matrix and exposed to one of four dietary conditions: PM, donor human milk (DHM), standard formula (SF), or extensively hydrolyzed formula (HF). Organoids were grown in growth media supplemented with these diets for 5 days, followed by differentiation media for an additional 5 days. Growth, differentiation, and immune-related factors were analyzed using advanced imaging, RNA sequencing, and cytokine profiling.
The results demonstrated that human milk–fed organoids significantly outperformed formula-fed organoids in several measures. By the fifth day of growth media exposure, organoids supplemented with PM or DHM were larger and exhibited higher rates of proliferation, as evidenced by Ki67 staining. Organoids exposed to SF were the smallest and had the lowest proliferation and highest levels of apoptosis, while HF-fed organoids showed intermediate growth performance.
During the differentiation phase, organoids exposed to human milk developed more complex structures, forming buds with greater length and diameter compared to formula-fed organoids. PM was particularly effective, though DHM also promoted substantial differentiation. RNA sequencing revealed that organoids cultured with human milk upregulated genes involved in fatty acid metabolism and Wnt signaling, which are critical for cellular energy production and epithelial proliferation. In contrast, formula-fed organoids exhibited downregulation of cell-cycle-promoting genes and showed an inflammatory gene signature.
Cytokine profiling further underscored the benefits of human milk. Organoids exposed to PM and DHM secreted higher levels of immune-regulating cytokines, such as thymic stromal lymphopoietin (TSLP) and macrophage colony-stimulating factor (M-CSF). In contrast, formula-fed organoids produced lower levels of these beneficial cytokines and higher levels of pro-inflammatory markers, including interleukin-18 (IL-18).
These findings suggest that human milk supports intestinal growth, differentiation, and immune regulation in ways that formula does not, and the investigators emphasized the importance of identifying specific bioactive factors in human milk.
“If the factors responsible for this effect can be identified, there could be significant clinical value in supplementing these components in DHM and formula to help prevent NEC and foster normal intestinal development in preterm infants,” they concluded.
Future research will aim to isolate and supplement key components of human milk to enhance the nutritional and protective value of donor milk and formula. In addition, the investigators noted the need to explore potential sex-based differences in intestinal development, as the current study used only male-derived samples. The research was supported by the Yale School of Medicine Medical Student Research Fellowship. The investigators disclosed no conflicts of interest.
FROM GASTRO HEP ADVANCES
Biomarkers Predict Villous Atrophy in Potential Celiac Disease Patients
A panel of serum inflammatory proteins can identify, with greater than 95% accuracy, which children with potential celiac disease (PCD) will go on to develop villous atrophy (VA), according to investigators.
Given that PCD patients present with positive serology and intact duodenal architecture, these findings may provide a much-needed tool for identifying patients who are more likely to benefit from early dietary interventions, lead author Renata Auricchio, MD, PhD, of the University of Naples Federico II, Italy, and colleagues reported.
“PCD offers the unique opportunity to observe the progression of gluten-induced tissue damage in celiac disease,” the investigators wrote in Gastroenterology. “These patients recognize gluten and produce specific autoantibodies, but have not developed intestinal damage.”
The study included 31 children with asymptomatic PCD who were eating a gluten-containing diet. Serum samples from each child were analyzed for the relative abundance of 92 inflammation-linked proteins using a proximity extension immunoassay. Statistical analyses, including partial least squares discriminant and linear discriminant analyses, were then applied to identify which proteins were associated with the development of VA.
After a mean follow-up period of 5.85 years, 14 participants developed VA (ie, celiac disease), while the other 17 remained asymptomatic.
Panel analysis revealed that specific inflammatory proteins, including interleukin (IL)–20, IL-2, sirtuin 2 (SIRT2), leukemia inhibitory factor (LIF), IL-22 receptor subunit a1, cystatin D (CST5), IL-17 receptor A, IL-15 receptor subunit a (RA), CUB domain–containing protein 1 (CDCP1), and IL-14, were 1.23- to 1.76-fold higher in children who developed VA. Among these, seven proteins — CDCP1, IL-2, LIF, IL10RA, SIRT2, CST5, and IL-4 — were able to significantly distinguish between symptomatic and asymptomatic cases in a linear discriminant model. This panel of seven proteins achieved a predictive accuracy of 96.8% in identifying children at risk of VA.
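As a rough illustration of the modeling approach described above, the sketch below fits a linear discriminant classifier to a synthetic seven-protein panel and reports cross-validated accuracy. Only the protein names come from the study; the values, group separation, and resulting accuracy are invented for demonstration and do not reproduce the authors’ analysis.

```python
# Minimal sketch (synthetic data): a linear discriminant model on a 7-protein panel.
# Not the authors' code; cohort sizes mirror the article (17 vs 14) but values are random.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
proteins = ["CDCP1", "IL-2", "LIF", "IL10RA", "SIRT2", "CST5", "IL-4"]

# Synthetic cohort: 17 non-progressors (label 0) and 14 progressors (label 1),
# with progressors shifted upward to mimic higher inflammatory protein levels.
X_stable = rng.normal(loc=0.0, scale=1.0, size=(17, len(proteins)))
X_progress = rng.normal(loc=0.8, scale=1.0, size=(14, len(proteins)))
X = np.vstack([X_stable, X_progress])
y = np.array([0] * 17 + [1] * 14)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)     # small-sample cross-validation
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```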
Additionally, bioinformatics pathway analysis confirmed that the broader set of proteins is involved in the positive regulation of JAK-STAT signaling (involving IL-22 receptor subunit a1, IL-4, IL-20, IL10RA, LIF, and IL-2), inflammatory responses (IL-4, IL-20, LIF, and IL-2), and processes such as tyrosine phosphorylation, leukocyte differentiation, IgG isotype switching, and protein phosphorylation regulation. These findings suggest that gluten-induced inflammation may already be active in early stages of the disease, including the initial phases of leukocyte differentiation, according to the investigators.
“Over a long follow-up on a gluten-containing diet, only 40% of these patients progressed to VA,” Dr. Auricchio and colleagues wrote. “Notably, 25%-30% of children with PCD even stop producing anti–tissue transglutaminase antibodies, and the others keep on producing autoantibodies but preserve a normal intestinal mucosa. Considering these data, the decision to address a patient with PCD on a gluten-free diet at time of diagnosis is quite critical.”
The researchers noted that this new model, with accuracy exceeding 95%, is well suited for routine use because of its practicality and reliability.
“Our previous model, based mainly on small intestinal mucosa features, moved a step toward the prediction of outcome but still required a mucosal biopsy, and the accuracy of prediction was not greater than 80%, which is somewhat uncertain for a lifelong clinical decision,” they wrote. In contrast, the present model “appears to be sufficient to immediately suggest a gluten-free diet in children with PCD, who are almost certainly committed to developing VA.”
The investigators called for long-term studies to validate their findings in other cohorts, including adult populations.
This study was supported by the TIMID project and Inflammation in Human Early Life: Targeting Impacts on Life Course Health (INITIALISE) by the Horizon Europe Program of the European Union. The investigators disclosed no conflicts of interest.
Patients with positive celiac serologies but normal villous architecture on biopsy are considered to have potential celiac disease (PCD). While the prevalence of PCD is not well-established, it is estimated to be around 1%. This study by Auricchio and colleagues investigates seven serum proteomic biomarkers that could help predict whether asymptomatic patients with PCD are at risk of developing villous atrophy (VA).
The study also identifies specific inflammatory proteins present in PCD patients who are likely to develop VA. These biomarkers provide valuable insights into the pathogenesis of celiac disease and the development of VA in genetically predisposed individuals.
As celiac disease is increasingly diagnosed without biopsies, serum proteomic biomarkers could be crucial in identifying patients who may benefit from starting a gluten-free diet (GFD) earlier, potentially preventing complications. According to the European Society of Pediatric Gastroenterology, Hepatology, and Nutrition (ESPGHAN) guidelines, children can be diagnosed with celiac disease if their tissue transglutaminase IgA level is 10 times the upper limit of normal, confirmed by a positive endomysial antibody test. However, this approach may lead to many patients committing to a lifelong GFD despite having only PCD, as biopsies may not have been performed. In this study, 60% of patients with PCD did not progress to VA, suggesting that biomarkers could help prevent unnecessary long-term GFD commitments.
Stephanie M. Moleski, MD, is the director of the Jefferson Celiac Center and associate professor in the division of gastroenterology at Thomas Jefferson University Hospital in Philadelphia. She reported no conflicts of interest.
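As a concrete illustration of the no-biopsy criterion described in the commentary above, the snippet below encodes it as a simple check. This is a simplified sketch based only on the description given here (tissue transglutaminase IgA at least 10 times the upper limit of normal plus a positive endomysial antibody test); the full ESPGHAN guidance includes additional considerations that are not captured.

```python
# Simplified sketch of the no-biopsy criterion as described in the commentary above.
# The actual ESPGHAN guidance involves additional requirements not modeled here.

def meets_no_biopsy_criterion(ttg_iga: float, uln: float, ema_positive: bool) -> bool:
    """Return True when tTG-IgA is at least 10x the upper limit of normal and EMA is positive."""
    return ttg_iga >= 10 * uln and ema_positive

# Hypothetical example values
print(meets_no_biopsy_criterion(ttg_iga=128.0, uln=10.0, ema_positive=True))  # True
print(meets_no_biopsy_criterion(ttg_iga=45.0, uln=10.0, ema_positive=True))   # False
```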
FROM GASTROENTEROLOGY
Intratumoral Dendritic Cell Therapy Shows Promise in Early-Stage ERBB2-Positive Breast Cancer
TOPLINE:
Intratumoral injection of ERBB2 conventional type 1 dendritic cells (cDC1) combined with the anti-ERBB2 antibodies trastuzumab and pertuzumab was safe in patients with early-stage ERBB2-positive breast cancer; the higher dose (100 million cells) showed enhanced immune effector recruitment and significant tumor regression before chemotherapy initiation.
METHODOLOGY:
- Survival in ERBB2-positive breast cancer has improved with the anti-ERBB2 antibodies trastuzumab and pertuzumab, but achieving a pathologic complete response still requires chemotherapy, which carries significant toxic effects.
- A phase 1, nonrandomized clinical trial enrolled 12 patients with early-stage ERBB2-positive breast cancer in Tampa, Florida, from October 2021 to October 2022.
- Participants received intratumoral (IT) cDC1 injections weekly for 6 weeks at two dose levels (50 million cells for dose level 1 and 100 million cells for dose level 2), with six patients in each group.
- Starting from day 1 of the cDC1 injections, treatment included trastuzumab (8-mg/kg loading dose, then 6 mg/kg) and pertuzumab (840-mg loading dose, then 420 mg) administered intravenously every 3 weeks for six cycles, followed by paclitaxel (80 mg/m2) weekly for 12 weeks and surgery with lumpectomy or mastectomy.
- Primary outcomes measured safety and immune response of increasing doses of cDC1 combined with anti-ERBB2 antibodies before neoadjuvant chemotherapy; secondary outcomes assessed antitumor efficacy through breast MRI and residual cancer burden at surgery.
TAKEAWAY:
- IT delivery of ERBB2 cDC1 was safe and not associated with any dose-limiting toxic effects. The most frequent adverse events attributed to cDC1 were grade 1-2 chills (50%), fatigue (41.7%), headache (33%), and injection-site reactions (33%).
- Dose level 2 showed enhanced recruitment of adaptive CD3, CD4, and CD8 T cells and B cells within the tumor microenvironment (TME), along with increased innate gamma delta T cells and natural killer T cells.
- Breast MRI revealed nine objective responses, including six partial responses and three complete responses, with three cases of stable disease.
- Following surgery, 7 of 12 patients (58%) achieved a pathologic complete response, including all 3 hormone receptor–negative patients and 4 of the 9 hormone receptor–positive patients.
IN PRACTICE:
“Overall, the clinical data shown here demonstrate the effects of combining ERBB2 antibodies with IT [intratumoral] delivery of targeted cDC1 to enhance immune cell infiltration within the TME [tumor microenvironment] and subsequently induce tumor regression before chemotherapy,” wrote the authors, who noted they will be testing the higher dose for an ongoing phase 2 trial with an additional 41 patients.
SOURCE:
The study was led by Hyo S. Han, MD, of H. Lee Moffitt Cancer Center and Research Institute in Tampa, Florida. It was published online on December 5, 2024, in JAMA Oncology.
LIMITATIONS:
Because only two dose levels of cDC1 were tested, it remains unclear whether higher doses or different administration schedules could further enhance immune response. Additionally, the nonrandomized design prevents definitive conclusions about whether the clinical benefits were solely from the anti-ERBB2 antibodies. The small sample size also makes it difficult to determine if the pathologic complete responses were primarily due to the 12 weeks of trastuzumab/pertuzumab/paclitaxel treatment.
DISCLOSURES:
This study was funded by the Moffitt Breast Cancer Research Fund, Shula Fund, and Pennies in Action. Several authors reported research support and personal and consulting fees from US funding agencies and multiple pharmaceutical companies outside of the submitted work, as well as related intellectual property and patents.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Nonmelanoma Skin Cancer Risk May Be Reduced in Patients on PCSK9 Inhibitors
TOPLINE:
Proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors added to statin therapy were associated with a reduced risk for nonmelanoma skin cancer (NMSC) in patients with atherosclerotic cardiovascular disease (ASCVD), with the association most pronounced in men, those older than 65 years, and those with immunosuppression.
METHODOLOGY:
- To evaluate the risk for NMSC — basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) — in patients with ASCVD on PCSK9 inhibitors, researchers analyzed data from the US Collaborative Network in the TriNetX database of adults aged ≥ 40 years with ASCVD who received statin therapy between 2016 and 2022.
- A total of 73,636 patients were included, divided equally between those receiving a PCSK9 inhibitor (evolocumab, alirocumab, or inclisiran) plus statin therapy and the control group (those on statin therapy only).
- The analysis used propensity score matching for head-to-head comparisons, with hazard ratios (HRs) estimated using Cox proportional hazards models (an illustrative sketch of this type of analysis appears after this list).
- Stratified analyses examined outcomes by age, sex, Fitzpatrick skin type, and immune status. (Immunosuppressed patients were those treated with immunosuppressants for more than 90 days in the year before the index date — the date when exposed patients were first prescribed a PCSK9 inhibitor, which was also the index date for matched patients in the statin-only group.)
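To make the matching-and-Cox workflow in the bullets above concrete, the sketch below runs a generic version of it on simulated data with scikit-learn and lifelines. The column names, the greedy 1:1 matching, and all of the data are assumptions for illustration; this is not the study's TriNetX analysis.

```python
# Generic propensity-matched Cox sketch on simulated data; not the study's analysis.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def match_one_to_one(df, treat_col="pcsk9i", score_col="ps"):
    """Greedy 1:1 nearest-neighbor matching on the propensity score."""
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    matched_idx = []
    for _, row in treated.iterrows():
        j = (controls[score_col] - row[score_col]).abs().idxmin()  # closest unused control
        matched_idx.extend([row.name, j])
        controls = controls.drop(index=j)
    return df.loc[matched_idx]

# Hypothetical cohort: covariates, PCSK9-inhibitor exposure, follow-up time, NMSC event flag
rng = np.random.default_rng(1)
n = 2000
cohort = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "male": rng.integers(0, 2, n),
    "immunosuppressed": rng.integers(0, 2, n),
    "pcsk9i": (rng.random(n) < 0.3).astype(int),
    "follow_up_days": rng.exponential(900, n),
    "nmsc": rng.integers(0, 2, n),
})

covariates = ["age", "male", "immunosuppressed"]
ps_model = LogisticRegression(max_iter=1000).fit(cohort[covariates], cohort["pcsk9i"])
cohort["ps"] = ps_model.predict_proba(cohort[covariates])[:, 1]
matched = match_one_to_one(cohort)

cph = CoxPHFitter()
cph.fit(matched[["follow_up_days", "nmsc", "pcsk9i"]],
        duration_col="follow_up_days", event_col="nmsc")
# The exponentiated coefficient for pcsk9i is the hazard ratio with its 95% CI
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```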
TAKEAWAY:
- Patients with ASCVD in the PCSK9 group showed significantly lower risks for NMSC (HR, 0.78; 95% CI, 0.71-0.87), BCC (HR, 0.78; 95% CI, 0.69-0.89), and SCC (HR, 0.79; 95% CI, 0.67-0.93) than control individuals on a statin only (P < .001 for all three).
- Both evolocumab and alirocumab demonstrated similar protective effects against the development of NMSC.
- The reduced risk for NMSC was particularly notable among patients aged 65-79 years (HR, 0.75; 95% CI, 0.66-0.86) and those aged ≥ 80 years (HR, 0.74; 95% CI, 0.60-0.91).
- Men showed a more pronounced reduction in the risk for NMSC (HR, 0.73; 95% CI, 0.64-0.83) than women (HR, 0.93; 95% CI, 0.78-1.11). The effect on lowering NMSC risk was also evident among immunosuppressed patients in the PCSK9 group (HR, 0.68; 95% CI, 0.60-0.75).
IN PRACTICE:
“The findings suggest the promising pleiotropic effect of PCSK9 inhibitors on the chemoprevention of NMSC,” the study authors wrote. Referring to previous studies that “provided mechanistic clues to our findings,” they added that “further studies are required to investigate the underlying mechanisms and establish causality.”
SOURCE:
The study was led by Cheng-Yuan Li, Taipei Veterans General Hospital, Taipei, Taiwan, and was published online in The British Journal of Dermatology.
LIMITATIONS:
Electronic health records lack information on sun protection habits, family history of skin cancer, diet, body mass index, and air pollution exposure, all of which are risk factors for NMSC. The study also lacked detailed information on enrollees’ lipid profiles and was focused mostly on patients in the United States, limiting the generalizability of the findings to other regions.
DISCLOSURES:
The study was supported by grants from Taipei Veterans General Hospital and the Ministry of Science and Technology, Taiwan. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Dark Chocolate: A Bittersweet Remedy for Diabetes Risk
TOPLINE:
Consuming five or more servings per week of dark chocolate is associated with a lower risk for type 2 diabetes (T2D), compared with infrequent or no consumption. Conversely, a higher consumption of milk chocolate does not significantly affect the risk for diabetes and may contribute to greater weight gain.
METHODOLOGY:
- Chocolate is rich in flavanols, natural compounds known to support heart health and lower the risk for T2D. However, the link between chocolate consumption and the risk for T2D is uncertain, with inconsistent research findings that don’t distinguish between dark and milk chocolate.
- Researchers conducted a prospective cohort study to investigate the associations between dark, milk, and total chocolate consumption and the risk for T2D in three long-term US studies of female nurses and male healthcare professionals with no history of diabetes, cardiovascular disease, or cancer at baseline.
- The relationship between total chocolate consumption and the risk for diabetes was investigated in 192,208 individuals who reported their chocolate consumption using validated food frequency questionnaires every 4 years from 1986 onward.
- Information on chocolate subtypes was assessed from 2006/2007 onward in 111,654 participants.
- Participants self-reported T2D through biennial questionnaires; diagnoses were confirmed via supplementary questionnaires collecting data on glucose levels, hemoglobin A1c concentration, symptoms, and treatments. Participants also self-reported their body weight at baseline and during follow-up.
TAKEAWAY:
- During 4,829,175 person-years of follow-up, researchers identified 18,862 individuals with incident T2D in the total chocolate analysis cohort.
- In the chocolate subtype cohort, 4771 incident T2D cases were identified during 1,270,348 person-years of follow-up. Having at least five servings per week of dark chocolate was associated with a 21% lower risk for T2D (adjusted hazard ratio, 0.79; P for trend = .006), while milk chocolate consumption showed no significant link (P for trend = .75).
- The risk for T2D decreased by 3% for each additional serving of dark chocolate consumed weekly, indicating a dose-response effect (a worked illustration of this per-serving estimate appears after this list).
- Compared with individuals who did not change their chocolate intake, those who had an increased milk chocolate intake had greater weight gain over 4-year periods (mean difference, 0.35 kg; 95% CI, 0.27-0.43); dark chocolate showed no significant association with weight change.
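The worked example below shows how a per-serving hazard ratio of 0.97 compounds across weekly servings under a standard Cox dose-response model. Only the 3% per-serving figure comes from the study summary above; the compounding is a general property of the model, not an additional study result, and it need not match the separately estimated 0.79 hazard ratio for the five-or-more-servings category.

```python
# Illustration only: how a per-serving hazard ratio compounds in a Cox dose-response model.
# HR(k servings) = exp(k * beta) = (per-serving HR) ** k
per_serving_hr = 0.97  # "3% lower risk per additional weekly serving" from the summary above

for servings in range(0, 8):
    implied_hr = per_serving_hr ** servings
    print(f"{servings} servings/week -> implied HR {implied_hr:.2f}")

# Note: the study's categorical estimate for >=5 servings/week (adjusted HR, 0.79) comes from
# a separate comparison against infrequent consumption, so it will not exactly match this curve.
```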
IN PRACTICE:
“Even though dark and milk chocolate have similar levels of calories and saturated fat, it appears that the rich polyphenols in dark chocolate might offset the effects of saturated fat and sugar on weight gain and diabetes. It’s an intriguing difference that’s worth exploring more,” corresponding author Qi Sun from the Departments of Nutrition and Epidemiology, Harvard TH Chan School of Public Health, Boston, Massachusetts, said in a press release.
SOURCE:
This study was led by Binkai Liu, Harvard TH Chan School of Public Health. It was published online in The BMJ.
LIMITATIONS:
The relatively limited number of participants in the higher chocolate consumption groups may have reduced the statistical power for detecting modest associations between dark chocolate consumption and the risk for T2D. Additionally, the study population primarily consisted of non-Hispanic White adults older than 50 years at baseline, which, along with their professional backgrounds, may have limited the generalizability of the study findings to other populations with different socioeconomic or personal characteristics. Chocolate consumption in this study was lower than the national average of three servings per week, which may have limited the ability to assess the dose-response relationship at higher intake levels.
DISCLOSURES:
This study was supported by grants from the National Institutes of Health. Some authors reported receiving investigator-initiated grants, being on scientific advisory boards, and receiving research funding from certain institutions.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Identifying the Best Upfront Regimen for Unresectable CRC Liver Metastasis
A new report suggests that patients with initially unresectable colorectal cancer (CRC) liver metastases can start on a two-drug chemotherapy regimen, FOLFOX or FOLFIRI, rather than the three-drug regimen FOLFOXIRI, sparing them extra toxicity without compromising overall survival.
The CAIRO5 trial compared overall survival among 294 patients with right-sided tumors and/or RAS/BRAF mutations who received FOLFOXIRI (5-fluorouracil [FU], oxaliplatin, irinotecan, plus folinic acid as an enhancer) or investigators’ choice of FOLFOX (5-FU, oxaliplatin, and folinic acid) or FOLFIRI (5-FU, irinotecan, and folinic acid). All patients also received bevacizumab.
In a post hoc analysis, researchers found no overall survival benefit among patients receiving the three-drug regimen. At a median follow-up of just over 5 years, the median overall survival was 23.6 months with FOLFOX or FOLFIRI vs 24.1 months with FOLFOXIRI (P = .44).
The finding means that patients can avoid the extra toxicity associated with combining oxaliplatin and irinotecan without compromising overall survival, Alan P. Venook, MD, a gastrointestinal medical oncologist at the University of California San Francisco, told Medscape Medical News.
The analysis did not stop there in defining the optimal upfront therapy for this patient population.
In a second arm of the analysis, researchers looked at whether swapping in panitumumab for bevacizumab offered any benefit in 236 patients with left-sided tumors and wild-type RAS/BRAF who received either of the two-drug regimens.
Here, the authors also found no benefit of using panitumumab with FOLFOX or FOLFIRI instead of bevacizumab, reporting a median overall survival of 38.3 months with panitumumab vs 39.9 months with bevacizumab.
In addition to avoiding upfront FOLFOXIRI, patients can also avoid the skin reactions and other toxicities associated with panitumumab, including “horrible acne,” Venook said.
Overall, the results support the use of FOLFOX or FOLFIRI with bevacizumab “irrespective of RAS/BRAFV600E status and tumor sidedness” as the initial treatment for CRC with liver-only metastases, concluded the study investigators, from the University Medical Center Utrecht in the Netherlands.
Why Does This Clarity Matter?
The study confirms the standard practice in the United States of starting patients on a two-drug chemotherapy with bevacizumab for the indication and highlights “why we don’t go all in right at the beginning” with a three-drug regimen, Venook said.
In short, more drugs upfront isn’t going to change patients’ long-term survival outcome. Plus, using FOLFOXIRI upfront means “you’ve really pretty much used up all your guns for early treatment,” Venook said.
As for bevacizumab vs panitumumab, most practitioners in the United States favor bevacizumab because of the rash many patients on epidermal growth factor receptor blockers like panitumumab and cetuximab get, Venook said.
Because FOLFOX and FOLFIRI are equally effective on the overall survival front, the decision between them comes down to a balance between patient comorbidities and side effect profiles. Neuropathy, for instance, is more common with FOLFOX, whereas diarrhea is more likely with FOLFIRI, Venook said.
Venook favors FOLFIRI because “almost every patient will develop neuropathy” after seven or eight doses of FOLFOX, which limits its use. “You’re expecting that first treatment to give you the most mileage,” so starting with a treatment “you’re going to get limited use out of ... never made sense to me,” he said.
Venook noted that the results apply only to the older patients studied in CAIRO5 and not necessarily to the ever-growing population of younger people with CRC. Patients in the trial had a median age of 62 years with a performance status of 0-1, a median of 12 liver lesions with no metastases outside the liver, and no contraindications to local or systemic treatment.
The CAIRO5 analysis also looked at what happens after upfront chemotherapy, with the goal being to shrink liver lesions so the lesions can be surgically removed or treated with thermal ablation.
Almost half the patients ultimately underwent resection or ablation, and 39% of those in the FOLFOX or FOLFIRI plus bevacizumab group and 49% in the FOLFOX or FOLFIRI plus panitumumab group then went on to receive adjuvant chemotherapy (ACT) to reduce the risk for recurrence. ACT was recommended in the study, but not required, and consisted of chemotherapy minus bevacizumab or panitumumab.
Overall survival was longest among patients who had complete local treatment without recurrences for at least 6 months (64.3 months) or who had salvage local treatment after early recurrence (58.9 months). Median overall survival was 30.5 months for patients with complete local treatment without salvage after early recurrence, and 28.7 months for patients with incomplete local treatment. Overall survival was worst in patients who remained unresectable (18.3 months).
ACT was associated with improved overall and relapse-free survival, justifying discussing the option with patients who have completed local treatment, the study team said.
CAIRO5 was funded by Roche and Amgen, makers of bevacizumab and panitumumab, respectively. Bond and Venook didn’t have any disclosures.
A version of this article first appeared on Medscape.com.
A new report demonstrated why patients benefit most from starting on a two-drug chemotherapy regimen — FOLFOX or FOLFIRI — instead of a three-drug regimen — FOLFOXIRI.
The CAIRO5 trial compared overall survival among 294 patients with right sided tumors and/or RAS/BRAF mutations who received FOLFOXIRI (5-fluorouracil [FU], oxaliplatin, irinotecan, plus folinic acid as an enhancer) or investigators’ choice of FOLFOX (5-FU, oxaliplatin, and folinic acid) or FOLFIRI (5-FU, irinotecan, and folinic acid). All patients also received bevacizumab.
In a post hoc analysis, researchers found no overall survival benefit among patients receiving the three-drug regimen. At a median follow-up of just over 5 years, the median overall survival was 23.6 months with FOLFOX or FOLFIRI vs 24.1 months with FOLFOXIRI (P = .44).
The finding means that patients can avoid the extra toxicity associated with combining oxaliplatin and irinotecan without compromising overall survival, Alan P. Venook, MD, a gastrointestinal medical oncologist at the University of California San Francisco, told Medscape Medical News.
The analysis did not stop there in defining the optimal upfront therapy for this patient population.
In a second arm of the analysis, researchers looked at whether swapping in panitumumab for bevacizumab offered any benefit in 236 patients with left-sided tumors and wild-type RAS/BRAF who received either of the two-drug regimens.
Here, the authors also found no benefit of using panitumumab with FOLFOX or FOLFIRI instead of bevacizumab, reporting a median overall survival of 38.3 months with panitumumab vs 39.9 months with bevacizumab.
In addition to avoiding upfront FOLFOXIRI, patients can also avoid the skin reactions and other toxicities associated with panitumumab, including “horrible acne,” Venook said.
Overall, the results support the use of FOLFOX or FOLFIRI with bevacizumab “irrespective of RAS/BRAFV600E status and tumor sidedness” as the initial treatment for CRC with liver-only metastases, concluded the study investigators, from the University Medical Center Utrecht in the Netherlands.
Why Does This Clarity Matter?
The study confirms the standard practice in the United States of starting patients on a two-drug chemotherapy with bevacizumab for the indication and highlights “why we don’t go all in right at the beginning” with a three-drug regimen, Venook said.
In short, more drugs upfront isn’t going to change patients’ long-term survival outcome. Plus, using FOLFOXIRI upfront means “you’ve really pretty much used up all your guns for early treatment,” Venook said.
As for bevacizumab vs panitumumab, most practitioners in the United States favor bevacizumab because of the rash many patients on epidermal growth factor receptor blockers like panitumumab and cetuximab get, Venook said.
Because FOLFOX and FOLFIRI are equally effective on the overall survival front, the decision between them comes down to a balance between patient comorbidities and side effect profiles. Neuropathy, for instance, is more common with FOLFOX, whereas diarrhea is more likely with FOLFIRI, Venook said.
Venook favors FOLFIRI because “almost every patient will develop neuropathy” after seven or eight doses of FOLFOX, which limits its use. “You’re expecting that first treatment to give you the most mileage,” so starting with a treatment “you’re going to get limited use out of ... never made sense to me,” he said.
Venook noted that the results apply only to the older patients studied in CAIRO5 and not necessarily to the ever-growing population of younger people with CRC. Patients in the trial had a median age of 62 years with a performance status of 0-1, a median of 12 liver lesions with no metastases outside the liver, and no contraindications to local or systemic treatment.
The CAIRO5 analysis also looked at what happens after upfront chemotherapy, the goal of which is to shrink liver lesions so they can be surgically removed or treated with thermal ablation.
Almost half the patients ultimately underwent resection or ablation, and 39% of those in the FOLFOX or FOLFIRI plus bevacizumab group and 49% in the FOLFOX or FOLFIRI plus panitumumab group then went on to receive adjuvant chemotherapy (ACT) to reduce the risk for recurrence. ACT was recommended in the study, but not required, and consisted of chemotherapy minus bevacizumab or panitumumab.
Median overall survival was longest among patients who had complete local treatment without recurrence for at least 6 months (64.3 months) or who had salvage local treatment after early recurrence (58.9 months). It was 30.5 months for patients who had complete local treatment but no salvage treatment after early recurrence, 28.7 months for patients with incomplete local treatment, and shortest (18.3 months) for patients whose disease remained unresectable.
ACT was associated with improved overall and relapse-free survival, which the study team said justifies discussing the option with patients who have completed local treatment.
CAIRO5 was funded by Roche and Amgen, makers of bevacizumab and panitumumab, respectively. Bond and Venook didn’t have any disclosures.
A version of this article first appeared on Medscape.com.
FROM JAMA ONCOLOGY