Commentary: Complementary treatments for AD, November 2022
Still, some patients seek alternative or adjunctive treatment approaches, owing to a desire to identify the root cause of disease, an aversion to Western medicine, or fear of adverse events. Yepes-Nuñez and colleagues performed a systematic review and meta-analysis of 23 studies of the benefits and harms of allergen immunotherapy for AD. I had the privilege of participating in this study and can testify to the astronomical amount of work that went into comprehensively identifying all of the relevant studies and synthesizing the data. We found that adjunctive subcutaneous or sublingual allergen immunotherapy, particularly for house dust mites, led to modest but generally delayed improvements in AD severity, itch, and quality of life, with less definitive effects on sleep disturbance and AD flares. Overall, both routes were well tolerated, though subcutaneous immunotherapy was associated with more adverse events than sublingual immunotherapy. Allergen immunotherapy requires a significant investment of time by patients and was only modestly effective. Nevertheless, it may be a reasonable approach to consider in select patients with AD.
Benjamin Franklin famously stated that "an ounce of prevention is worth a pound of cure." Likewise, while successful treatment of AD is great, how can we advise patients and caregivers of children who are at high risk for AD? To answer this question, Voigt and Lele performed a systematic review and meta-analysis of 11 randomized controlled trials examining the efficacy of Lactobacillus rhamnosus at preventing AD in children when taken by mothers during pregnancy. They found that L. rhamnosus significantly reduced the risk of developing AD within 2 years, marginally significantly reduced risk at 4-5 years, and significantly reduced risk at 6-7 years, but no significant risk differences were observed at 10-11 years. The authors concluded that use of L. rhamnosus with or without other probiotics during pregnancy reduces the incidence of childhood AD at least up to age 7 years.
Wang and colleagues conducted an observational study of the relationship of home environment exposures with atopic disease, including AD, in 17,881 offspring from Iceland, Norway, Sweden, Denmark, and Estonia who had undergone two follow-up investigations every 10 years. They found that AD was associated with parent-reported visible mold and dampness/mold at home, living in an apartment, and living in newer buildings. Avoidance of these environmental exposures could possibly decrease the risk of developing AD, although future confirmatory studies are needed.
For each of these treatment/prevention approaches, the magnitude of benefit is not very large. Thus, these approaches do not replace our armamentarium of treatments and avoidance strategies for AD. Rather, they can be used complementarily as low-risk add-on interventions with a potential upside.
Diet high in plant omega-3s tied to better HF prognosis
In an observational study, heart failure (HF) patients with high serum levels of alpha-linolenic acid (ALA) had a better prognosis than those with the lowest levels.
ALA is an omega-3 fatty acid found mainly in plant sources, including flaxseed, chia, walnuts, and canola oil.
“The most striking finding to us is the clear difference between patients at the bottom quartile compared to the other 75%, pointing to a threshold on the putative effect of ALA, reinforcing the notion that ‘one size does not fit all,’ ” Aleix Sala-Vila, PharmD, PhD, of the Hospital del Mar Medical Research Institute, Barcelona, told this news organization.
The analysis, which was published online in the Journal of the American College of Cardiology, showed statistically significant reductions in all-cause death, cardiovascular (CV) death, and first HF hospitalization among those in the three upper quartiles of serum ALA levels, compared with those in the lowest quartile.
The team’s earlier finding that higher levels of serum phosphatidylcholine eicosapentaenoic acid (PC EPA) and ALA were associated with a lower risk of adverse events in patients with ST-segment elevation myocardial infarction prompted the current study, Dr. Sala-Vila said.
Although their findings are hypothesis-generating at this point, he added, “inclusion of some ALA-rich foods, such as walnuts, in the diet of any individual, whether they have HF or not, might translate into CV benefits, besides the putative effect on HF. There is no evidence of any deleterious effect of one daily serving of walnuts, not even on weight gain.”
Plant power
Dr. Sala-Vila and colleagues analyzed data and samples from 905 patients (mean age, 67; 32% women) with HF of different etiologies. ALA was assessed by gas chromatography in serum phospholipids, which reflect long-term dietary ALA intake and metabolism.
The primary outcome was a composite of all-cause death or first HF hospitalization. The secondary outcome was the composite of CV death or HF hospitalization.
After a median follow-up of 2.4 years, 140 all-cause deaths, 85 CV deaths, and 141 first HF hospitalizations occurred (composite of all-cause death and first HF hospitalization, 238; composite of CV death and HF hospitalization, 184).
Compared with patients in the lowest quartile of ALA in serum phospholipids, those in the three upper quartiles showed a 39% reduction in the risk of the primary endpoint (hazard ratio, 0.61).
Statistically significant reductions also were observed for all-cause death (HR, 0.58), CV death (HR, 0.51), first HF hospitalization (HR, 0.58), and the composite of CV death and HF hospitalization (HR, 0.58).
By contrast, the associations seen for fish-derived EPA, DHA, and the sum of EPA + DHA were not statistically significant.
Limitations of the study include its observational nature; a relatively young cohort with reduced or mid-range ejection fraction and stage 2 chronic kidney disease; and no dietary data except for those regarding fatty acids.
“Controversial results from landmark recent trials on omega-3 might have translated into confusion/negative impact on the reputation of these fatty acids,” Dr. Sala-Vila noted. “Many factors affect how each participant responds to a certain intervention (precision nutrition), such as genetics, the microbiome, and the environment. In this regard, nutritional status – omega-3 background – is emerging as a key determinant.”
Randomized trials needed
JoAnn E. Manson, MD, MPH, DrPH, chief of the Division of Preventive Medicine at Brigham and Women’s Hospital, Boston, said the findings “are promising in the context of earlier research on omega-3s.”
Those studies include the landmark GISSI-HF trial, a randomized, controlled trial (RCT) that showed a small benefit of n-3 polyunsaturated fatty acids regarding hospital admissions and mortality among patients with chronic HF, and her team’s VITAL-HF study, which showed a significant reduction in recurrent HF hospitalization with marine omega-3 supplementation versus placebo.
“This may not be a causal association, and the authors acknowledge that they don’t have information on other dietary factors,” Dr. Manson said. “It may be that the foods that are leading to this higher blood level of ALA comprise the type of plant-based diet that’s been linked to lower risk of CVD, such as the Mediterranean diet. The findings also could be the result of other factors that aren’t fully controlled for in the analysis, or the participants could be more compliant with their medications.”
Nevertheless, she said, “it’s reasonable to recommend that people with a history of HF or who are at high risk of HF increase their intake of ALA-enriched foods, including canola oil, flaxseed oils, soybeans and soybean oils, and walnuts.”
“I think the evidence is promising enough that an RCT of ALA in people with heart failure also would be reasonable,” she added.
Similarly, Abdallah Al-Mohammad, MD, of Northern General Hospital, Sheffield, England, writes in a related editorial that while a potential role for ALA in improving morbidity and mortality in HF patients cannot be substantiated yet, the findings “open the field to more questions” for which “the judge and jury ... shall be prospective randomized controlled trials.”
No commercial funding or relevant conflicts of interest were declared.
A version of this article first appeared on Medscape.com.
New consensus on managing nausea and vomiting in pregnancy
Although the nausea and vomiting associated with pregnancy are usually mild, they are more severe in around one-third of women, and the most severe form, hyperemesis gravidarum, requires first-trimester hospitalization in 0.3%-3.6% of these women in France. Given the diversity of practical care, a working group from the National College of French Gynecologists and Obstetricians (CNGOF) has established a consensus on the definition and management of these symptoms.
Definition and severity
Nausea and vomiting during pregnancy are defined as those emerging in the first trimester of pregnancy and for which there is no other etiology.
The severity of these symptoms should be assessed through weight loss from the beginning of the pregnancy, clinical signs of dehydration (thirst, skin turgor, hypotension, oliguria, etc.), and modified PUQE (Pregnancy-Unique Quantification of Emesis and Nausea) score. This is a three-question score rated from 0 to 15, available in the full text of the expert consensus.
Severe nausea and vomiting are not considered complicated when weight loss is < 5%, with no clinical signs of dehydration, and combined with a PUQE score of ≤ 6. In contrast, hyperemesis gravidarum is distinguished from nausea and vomiting during pregnancy by weight loss of ≥ 5% or signs of dehydration or a PUQE score of ≥ 7.
Treating hyperemesis gravidarum
A laboratory workup should be ordered, including serum potassium, sodium, and creatinine levels, as well as a complete dipstick urinalysis.
If symptoms persist or worsen despite well-managed treatment, an additional assessment is recommended, including an abdominal ultrasound and laboratory workup (white blood cell count, transaminases, lipase, CRP, TSH, T4).
Hospitalization is proposed when at least one of the following criteria is met: weight loss ≥ 10%, one or more clinical signs of dehydration, PUQE score of ≥ 13, hypokalemia < 3.0 mmol/L, hyponatremia < 120 mmol/L, elevated serum creatinine > 100 micromol/L, or resistance to treatment.
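Taken together, the severity definition and the hospitalization criteria above form a simple decision rule. The sketch below encodes those thresholds for illustration only; the function and parameter names are hypothetical, not part of the CNGOF consensus, and clinical judgment obviously supersedes any such rule.

```python
# Illustrative encoding of the consensus thresholds described above.
# Names are hypothetical; thresholds are those stated in the text.

def classify_nvp(weight_loss_pct: float, dehydration_signs: bool, puqe: int) -> str:
    """Distinguish hyperemesis gravidarum from uncomplicated nausea/vomiting.

    Hyperemesis gravidarum: weight loss >= 5%, or clinical signs of
    dehydration, or a modified PUQE score >= 7.
    """
    if weight_loss_pct >= 5 or dehydration_signs or puqe >= 7:
        return "hyperemesis gravidarum"
    return "uncomplicated"

def needs_hospitalization(weight_loss_pct: float, dehydration_signs: bool,
                          puqe: int, potassium_mmol_l: float,
                          sodium_mmol_l: float, creatinine_umol_l: float,
                          treatment_resistant: bool = False) -> bool:
    """True if at least one of the proposed hospitalization criteria is met."""
    return (weight_loss_pct >= 10
            or dehydration_signs
            or puqe >= 13
            or potassium_mmol_l < 3.0       # hypokalemia
            or sodium_mmol_l < 120          # hyponatremia
            or creatinine_umol_l > 100      # elevated serum creatinine
            or treatment_resistant)
```

For example, a patient with 6% weight loss and a PUQE score of 8 but normal labs and no dehydration would be classified as hyperemesis gravidarum yet would not, by these criteria alone, meet the threshold for hospitalization.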
Which treatment?
Prenatal vitamins and iron supplementation should be stopped, as iron seems to worsen symptoms; folic acid supplementation, however, should be continued.
Because no specific dietary or lifestyle change has been shown to improve symptoms, women are free to adapt their diets and lifestyles according to their symptoms.
If the PUQE score is < 6, ginger or vitamin B6 can be used, even in the absence of proof of their benefit. The same applies to acupressure, acupuncture, and electrical stimulation, which should be considered only in women without complications. Aromatherapy should not be used, because of the potential risks associated with essential oils and because no efficacy has been demonstrated.
It is proposed that drugs or combinations of drugs associated with the least severe and least frequent side effects should always be chosen in the absence of superiority of one class over another.
To prevent Gayet-Wernicke encephalopathy, vitamin B1 must be administered systematically for hyperemesis gravidarum requiring parenteral rehydration. Psychological support should be offered to all patients with hyperemesis gravidarum because of the negative impact of this condition on mental well-being. Patients should also be informed that there are patient associations involved in supporting these women and their families.
A version of this article first appeared on Medscape.com and was translated from Univadis France.
Although the nausea and vomiting associated with pregnancy are usually mild, they are more severe (hyperemesis gravidarum) in around one-third of women and require hospitalization in the first trimester for 0.3%-3.6% of these women in France. Given the diversity of practical care, a working group from the National College of French Gynecologists and Obstetricians (CNGOF) has established a consensus on the definition and management of these symptoms.
Definition and severity
Nausea and vomiting during pregnancy are defined as those emerging in the first trimester of pregnancy and for which there is no other etiology.
The severity of these symptoms should be assessed through weight loss from the beginning of the pregnancy, clinical signs of dehydration (thirst, skin turgor, hypotension, oliguria, etc.), and modified PUQE (Pregnancy-Unique Quantification of Emesis and Nausea) score. This is a three-question score rated from 0 to 15, available in the full text of the expert consensus.
Severe nausea and vomiting are not considered complicated when weight loss is < 5%, with no clinical signs of dehydration, and combined with a PUQE score of ≤ 6. In contrast, hyperemesis gravidarum is distinguished from nausea and vomiting during pregnancy by weight loss of ≥ 5 % or signs of dehydration or a PUQE score of ≥ 7.
Treating hyperemesis gravidarum
A laboratory workup should be ordered, along with an assay of blood potassium, blood sodium ions, and creatinine levels, as well as a complete dipstick urinalysis.
If symptoms persist or worsen despite well-managed treatment, an additional assessment is recommended, including an abdominal ultrasound and laboratory workup (white blood cell count, transaminases, lipase, CRP, TSH, T4).
Hospitalization is proposed when at least one of the following criteria is met: weight loss ≥ 10%, one or more clinical signs of dehydration, PUQE score of ≥ 13, hypokalemia < 3.0 mmol/L, hyponatremia < 120 mmol/L, elevated serum creatinine > 100 micromol/L, or resistance to treatment.
Which treatment?
Prenatal vitamins and iron supplementation should be stopped, as the latter seems to make symptoms worse. This step should be taken without stopping folic acid supplementation.
Women are free to adapt their diets and lifestyles according to their symptoms, since no such changes have been reported to improve symptoms.
If the PUQE score is < 6, even in the absence of proof of their benefit, ginger or B6 vitamin can be used. The same applies to acupressure, acupuncture, and electrical stimulation, which should only be considered in women without complications. Aromatherapy is not to be used, because of the potential risks associated with essential oils, and as no efficacy has been demonstrated.
It is proposed that drugs or combinations of drugs associated with the least severe and least frequent side effects should always be chosen in the absence of superiority of one class over another.
To prevent Gayet Wernicke encephalopathy, vitamin B1 must be administered systematically for hyperemesis gravidarum needing parenteral rehydration. Psychological support should be offered to all patients with hyperemesis gravidarum because of the negative impact of this pathology on mental well-being. Patients should be informed that there are patient associations involved in supporting these women and their families.
A version of this article first appeared on Medscape.com and was translated from Univadis France.
Although the nausea and vomiting associated with pregnancy are usually mild, they are more severe (hyperemesis gravidarum) in around one-third of women and require hospitalization in the first trimester for 0.3%-3.6% of these women in France. Given the diversity of practical care, a working group from the National College of French Gynecologists and Obstetricians (CNGOF) has established a consensus on the definition and management of these symptoms.
Definition and severity
Nausea and vomiting during pregnancy are defined as those emerging in the first trimester of pregnancy and for which there is no other etiology.
The severity of these symptoms should be assessed through weight loss since the beginning of the pregnancy, clinical signs of dehydration (thirst, skin turgor, hypotension, oliguria, etc.), and the modified PUQE (Pregnancy-Unique Quantification of Emesis and Nausea) score. This is a three-question score, each item rated from 1 to 5 for a total of 3 to 15, available in the full text of the expert consensus.
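For illustration, the modified PUQE total is simply the sum of the three items. This is a sketch only: the assumption that each item is scored 1 to 5 follows the published score, and the exact question wording is in the consensus full text.

```python
def modified_puqe(nausea_item, vomiting_item, retching_item):
    """Sum the three modified PUQE items (hours of nausea, vomiting
    episodes, retching episodes over 24 h).

    Illustrative sketch: each item is assumed to be scored 1-5, so
    the total ranges from 3 (no symptoms) to 15 (worst)."""
    items = (nausea_item, vomiting_item, retching_item)
    if any(not 1 <= i <= 5 for i in items):
        raise ValueError("each modified PUQE item is scored from 1 to 5")
    return sum(items)
```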
Severe nausea and vomiting are considered uncomplicated when weight loss is < 5%, there are no clinical signs of dehydration, and the PUQE score is ≤ 6. In contrast, hyperemesis gravidarum is distinguished from uncomplicated nausea and vomiting of pregnancy by weight loss of ≥ 5%, signs of dehydration, or a PUQE score of ≥ 7.
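The distinction can be sketched as a simple decision rule, using the thresholds quoted from the consensus (illustrative only, not a clinical tool):

```python
def classify_nvp(weight_loss_pct, has_dehydration_signs, puqe_score):
    """Apply the CNGOF thresholds quoted above: weight loss >= 5%,
    any clinical sign of dehydration, or a modified PUQE score >= 7
    distinguishes hyperemesis gravidarum from uncomplicated nausea
    and vomiting of pregnancy. Illustrative sketch only."""
    if weight_loss_pct >= 5 or has_dehydration_signs or puqe_score >= 7:
        return "hyperemesis gravidarum"
    return "uncomplicated nausea and vomiting of pregnancy"
```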
Treating hyperemesis gravidarum
A laboratory workup should be ordered, along with an assay of blood potassium, blood sodium ions, and creatinine levels, as well as a complete dipstick urinalysis.
If symptoms persist or worsen despite well-managed treatment, an additional assessment is recommended, including an abdominal ultrasound and laboratory workup (white blood cell count, transaminases, lipase, CRP, TSH, T4).
Hospitalization is proposed when at least one of the following criteria is met: weight loss ≥ 10%, one or more clinical signs of dehydration, PUQE score of ≥ 13, hypokalemia < 3.0 mmol/L, hyponatremia < 120 mmol/L, elevated serum creatinine > 100 micromol/L, or resistance to treatment.
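As a sketch, the hospitalization criteria can be gathered into a checklist, where any single met criterion triggers the proposal to hospitalize (function and parameter names are illustrative, not from the consensus):

```python
def hospitalization_criteria(weight_loss_pct, dehydration_signs,
                             puqe_score, potassium_mmol_l,
                             sodium_mmol_l, creatinine_umol_l,
                             treatment_resistant):
    """Return the list of CNGOF hospitalization criteria that are met;
    hospitalization is proposed when the list is non-empty. Sketch only."""
    met = []
    if weight_loss_pct >= 10:
        met.append("weight loss >= 10%")
    if dehydration_signs:
        met.append("clinical signs of dehydration")
    if puqe_score >= 13:
        met.append("PUQE score >= 13")
    if potassium_mmol_l < 3.0:
        met.append("hypokalemia < 3.0 mmol/L")
    if sodium_mmol_l < 120:
        met.append("hyponatremia < 120 mmol/L")
    if creatinine_umol_l > 100:
        met.append("serum creatinine > 100 micromol/L")
    if treatment_resistant:
        met.append("resistance to treatment")
    return met
```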
Which treatment?
Prenatal vitamins and iron supplementation should be stopped, as the latter appears to worsen symptoms; folic acid supplementation, however, should be continued.
Women are free to adapt their diet and lifestyle according to their symptoms, since no specific dietary or lifestyle change has been shown to improve symptoms.
If the PUQE score is < 6, ginger or vitamin B6 can be used, even in the absence of proof of benefit. The same applies to acupressure, acupuncture, and electrical stimulation, which should be considered only in women without complications. Aromatherapy should not be used, both because of the potential risks associated with essential oils and because no efficacy has been demonstrated.
It is proposed that drugs or combinations of drugs associated with the least severe and least frequent side effects should always be chosen in the absence of superiority of one class over another.
To prevent Gayet-Wernicke encephalopathy, vitamin B1 must be administered systematically in hyperemesis gravidarum requiring parenteral rehydration. Psychological support should be offered to all patients with hyperemesis gravidarum, given the negative impact of this condition on mental well-being. Patients should also be informed that patient associations exist to support these women and their families.
A version of this article first appeared on Medscape.com and was translated from Univadis France.
Menopause an independent risk factor for schizophrenia relapse
Investigators studied a cohort of close to 62,000 people with schizophrenia spectrum disorders (SSDs), stratifying individuals by sex and age, and found that, starting between the ages of 45 and 50 years – when the menopausal transition is underway – women were more frequently hospitalized for psychosis, compared with men and with women younger than 45 years.
In addition, the protective effect of antipsychotic medication was highest in women younger than 45 years and lowest in women aged 45 years or older, even at higher doses.
“Women with schizophrenia who are older than 45 are a vulnerable group for relapse, and higher doses of antipsychotics are not the answer,” lead author Iris Sommer, MD, PhD, professor, department of neuroscience, University Medical Center of Groningen, the Netherlands, told this news organization.
The study was published online in Schizophrenia Bulletin.
Vulnerable period
There is an association between estrogen levels and disease severity throughout the life stages of women with SSDs, with lower estrogen levels associated with psychosis, for example, during low estrogenic phases of the menstrual cycle, the investigators note.
“After menopause, estrogen levels remain low, which is associated with a deterioration in the clinical course; therefore, women with SSD have sex-specific psychiatric needs that differ according to their life stage,” they add.
“Estrogens inhibit an important liver enzyme (cytochrome P-450 [CYP1A2]), which leads to higher blood levels of several antipsychotics like olanzapine and clozapine,” said Dr. Sommer. In addition, estrogens make the stomach less acidic, “leading to easier resorption of medication.”
As a clinician, Dr. Sommer said that she has “often witnessed a worsening of symptoms [of psychosis] after menopause.” As a researcher, she “knew that estrogens can have ameliorating effects on brain health, especially in schizophrenia.”
She and her colleagues were motivated to research the issue because there is a “remarkable paucity” of quantitative data on a “vulnerable period that all women with schizophrenia will experience.”
Detailed, quantitative data
The researchers sought to provide “detailed, quantitative data on life-stage dependent clinical changes occurring in women with SSD, using an intra-individual design to prevent confounding.”
They drew on data from a nationwide, register-based cohort study of all hospitalized patients with SSD between 1972 and 2014 in Finland (n = 61,889), with follow-up from Jan. 1, 1996, to Dec. 31, 2017.
People were stratified according to age (younger than 45 years and 45 years or older), with the same person contributing person-time to both age groups. The cohort was also subdivided into 5-year age groups, starting at age 20 years and ending at age 69 years.
The primary outcome measure was relapse (that is, inpatient hospitalization because of psychosis).
The researchers focused specifically on monotherapies, excluding time periods when two or more antipsychotics were used concomitantly. They also looked at antipsychotic nonuse periods.
Antipsychotic monotherapies were categorized into defined daily doses per day (DDDs/d):
- less than 0.4
- 0.4 to less than 0.6
- 0.6 to less than 0.9
- 0.9 to less than 1.1
- 1.1 to less than 1.4
- 1.4 to less than 1.6
- 1.6 or more
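A minimal sketch of this dose banding, treating each category as a half-open interval (the band labels are my own shorthand for the categories listed above):

```python
def ddd_band(ddds_per_day):
    """Map an antipsychotic dose in defined daily doses per day to the
    study's dose categories. Labels are illustrative shorthand."""
    cutoffs = [(0.4, "<0.4"), (0.6, "0.4 to <0.6"), (0.9, "0.6 to <0.9"),
               (1.1, "0.9 to <1.1"), (1.4, "1.1 to <1.4"),
               (1.6, "1.4 to <1.6")]
    for upper, label in cutoffs:
        if ddds_per_day < upper:
            return label
    return ">=1.6"
```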
The researchers restricted the main analyses to the four most frequently used oral antipsychotic monotherapies: clozapine, olanzapine, quetiapine, and risperidone.
The turning tide
The cohort consisted of more men than women (31,104 vs. 30,785, respectively), with a mean (standard deviation) age of 49.8 (16.6) years in women vs. 43.6 (14.8) in men.
Among both sexes, olanzapine was the most prescribed antipsychotic (roughly one-quarter of patients). In women, the next most common antipsychotic was risperidone, followed by quetiapine and clozapine, whereas in men, the second most common antipsychotic was clozapine, followed by risperidone and quetiapine.
When the researchers compared men and women younger than 45 years, there were “few consistent differences” in proportions hospitalized for psychosis.
Starting at age 45 years and continuing through the oldest age group (65-69 years), higher proportions of women were hospitalized for psychosis, compared with their male peers (all Ps < .00001).
Women 45 or older had significantly higher risk for relapse associated with standard dose use, compared with the other groups.
When the researchers compared men and women older and younger than 45 years, women younger than 45 years showed lower adjusted hazard ratios (aHRs) at doses of 0.6-0.9 DDDs/d, whereas at doses over 1.1 DDDs/d, women aged 45 years or older showed “remarkably higher” aHRs, compared with women younger than 45 years and men aged 45 years or older, with a difference that increased with increasing dose.
In other words, in women aged 45 years or older, antipsychotic efficacy was decreased at these higher doses.
“We ... showed that antipsychotic monotherapy is most effective in preventing relapse in women below 45, as compared to women above that age, and also as compared to men of all ages,” the authors summarize. But after age 45 years, “the tide seems to turn for women,” compared with younger women and with men of the same age group.
One of several study limitations was the use of age as an estimation of menopausal status, they note.
Don’t just raise the dose
Commenting on the research, Mary Seeman, MD, professor emerita, department of psychiatry, University of Toronto, noted the study corroborates her group’s findings regarding the effect of menopause on antipsychotic response.
“When the efficacy of previously effective antipsychotic doses wanes at menopause, raising the dose is not the treatment of choice because it increases the risk of weight gain, cardiovascular, and cerebrovascular events,” said Dr. Seeman, who was not involved with the current research.
“Changing to an antipsychotic that is less affected by estrogen loss may work better,” she continued, noting that amisulpride and aripiprazole “work well post menopause.”
Additional interventions may include changing to a depot or skin-patch antipsychotic that “obviates first-pass metabolism,” adding hormone replacement therapy or a selective estrogen receptor modulator, or including phytoestrogens (bioidenticals) in the diet.
The study yields research recommendations, including comparing the effectiveness of different antipsychotics in postmenopausal women with SSDs, recruiting pre- and postmenopausal women in trials of antipsychotic drugs, and stratifying by hormonal status when analyzing results of antipsychotic trials, Dr. Seeman said.
This work was supported by the Finnish Ministry of Social Affairs and Health through the developmental fund for Niuvanniemi Hospital and the Academy of Finland. The Dutch Medical Research Association supported Dr. Sommer. Dr. Sommer declares no relevant financial relationships. The other authors’ disclosures are listed on the original paper. Dr. Seeman declares no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM SCHIZOPHRENIA BULLETIN
Commentary: Potential new treatments in gastroesophageal adenocarcinoma, November 2022
The phase 2 FIGHT trial1 evaluated the role of bemarituzumab, an anti-FGFR2 antibody, in combination with chemotherapy during first-line treatment of advanced gastroesophageal adenocarcinoma. The primary endpoint of this trial was progression-free survival (PFS). This trial enrolled 155 patients with upper gastrointestinal tumors with FGFR2b overexpression (defined as at least 2+ by immunohistochemistry) or amplification on next-generation sequencing. About 30% of patients with HER2 nonpositive tumors (ie, those that would not qualify for treatment with the targeted agent trastuzumab) were eligible for participation. In the FIGHT trial, patients were randomized in a 1:1 ratio to receive either standard chemotherapy (folinic acid, fluorouracil, and oxaliplatin [FOLFOX]) or chemotherapy plus bemarituzumab. Patients in the experimental group were allowed to receive one dose of standard FOLFOX chemotherapy while biomarker testing was ongoing.
With a median follow-up time of 10.9 months, PFS was numerically prolonged in the bemarituzumab group (9.5 vs 7.4 months), but it did not reach statistical significance (P = .073). Overall survival (OS) was improved in the experimental group (not reached vs 12.9 months; P = .027). With a longer follow-up of 12.5 months, in post hoc exploratory analysis, OS was significantly longer in the experimental group (19.2 vs 13.5 months; hazard ratio 0.60, P = .027). The rate of serious adverse events was similar between the two groups. However, it is important to note the ocular toxicities associated with bemarituzumab treatment. Corneal adverse events were seen in 67% of patients in the experimental group, with 24% of patients experiencing grade 3 events. Moreover, 26% of patients discontinued bemarituzumab because of corneal adverse events.
Overall, this phase 2 trial demonstrated that FGFR2b is emerging as an important biomarker and target in patients with advanced gastroesophageal adenocarcinoma. Ongoing phase 3 trials (FORTITUDE-101 with FOLFOX [NCT05052801] and FORTITUDE-102 with FOLFOX and nivolumab [NCT05111626]) hopefully will confirm the early results seen in the FIGHT trial. Awareness and early attention to treatment-associated toxicities will be critical for the potential future incorporation of bemarituzumab into clinical practice.
A study by Ramos‐Santillan and colleagues explored whether the order of treatment modalities matters in the management of early-stage gastric cancer. Typically, perioperative chemotherapy (both neoadjuvant and adjuvant) is used during treatment of early-stage gastric cancer, which is usually defined as at least cT2N0 or cTxN+ disease. In this study, multivariable Cox regression analyses were performed on propensity score-matched cohorts. The study analyzed outcomes of 11,984 patients who were identified using the US National Cancer Database and treated between 2005 and 2014. The results revealed that patients who had stage I disease had better outcomes with upfront resection followed by adjuvant therapy. Patients with stage III disease did better with a neoadjuvant approach, whereas patients with stage II disease had similar outcomes regardless of chemotherapy timing. This research has the limitations inherent to the retrospective nature of the analysis and the lack of prospective enrollment and controls. However, it does suggest that there may be a fraction of patients who should be treated with upfront resection. For incorporation of this change into standard practice, the question of therapy sequencing should be answered in a randomized prospective trial that incorporates the most updated systemic therapy (fluorouracil, leucovorin, oxaliplatin, and docetaxel [FLOT]) into its design.
Chemotherapy continues to play a critical role during first-line treatment of advanced esophageal and gastric adenocarcinoma. Triple chemotherapy regimens have been known to have increased efficacy in this setting, but their use has been limited by associated toxicities. A study by Nguyen and colleagues evaluated the TCX regimen (paclitaxel, carboplatin, and capecitabine) during first-line treatment of advanced gastric cancer. This regimen is similar to other triple chemotherapy regimens, such as FLOT and DCF (docetaxel, cisplatin, and fluorouracil), which have proven activity in this disease. This prospective phase 2 trial enrolled 83 patients. The median PFS (9.3 months) and OS (17 months) compared favorably with historical references. The regimen had expected adverse events, with cytopenias and fatigue being the most frequently reported. On the basis of the reported safety and efficacy, TCX has potential to be used as a chemotherapy backbone in future trials, but larger trials are needed to confirm the phase 2 trial results.
References
Wainberg ZA, Enzinger PC, Kang YK, et al. Bemarituzumab in patients with FGFR2b-selected gastric or gastro-oesophageal junction adenocarcinoma (FIGHT): A randomised, double-blind, placebo-controlled, phase 2 study. Lancet Oncol. 2022 Oct 13. doi: 10.1016/S1470-2045(22)00603-9
The phase 2 FIGHT trial1 evaluated the role of bemarituzumab, an anti-FGFR2 antibody, in combination with chemotherapy during first-line treatment of advanced gastroesophageal adenocarcinoma. The primary endpoint of this trial was progression-free survival (PFS). This trial enrolled 155 patients with upper gastrointestinal tumors with FGFR2b overexpression (defined as at least 2+ by immunohistochemistry) or amplification on next-generation sequencing. About 30% of patients with HER2 nonpositive tumors (ie, those that would not qualify for treatment with the targeted agent trastuzumab) were eligible for participation. In the FIGHT trial, patients were randomized in a 1:1 ratio to receive either standard chemotherapy (folinic acid, fluorouracil, and oxaliplatin [FOLFOX]) or chemotherapy plus bemarituzumab. Patients in the experimental group were allowed to receive one dose of standard FOLFOX chemotherapy while biomarker testing was ongoing.
With a median follow-up time of 10.9 moths, PFS was numerically prolonged in the bemarituzumab group (9.5 vs 7.4 months), but it did not reach statistical significance (P = .073). Overall survival (OS) was improved in the experimental group (not reached vs 12.9 months; P = .027). With a longer follow-up of 12.5 months, in post hoc exploratory analysis, OS was significantly longer in the experimental group (19.2 vs 13.5 months; hazard ratio 0.60, P = .027). The rate of serious adverse events was similar between the two groups. However, it is important to note ocular toxicities associated with bemarituzumab treatment. Corneal adverse events were seen in 67% of patients in the experimental group, with 24% of patients experiencing grade 3 events. Moreover, 26% of patients discontinued bemarituzumab because of corneal adverse events.
Overall, this phase 2 trial demonstrated that FGFR2b is emerging as an important biomarker and target in patients with advanced gastroesophageal adenocarcinoma. Ongoing phase 3 trials (FORTITUDE-101 with FOLFOX [NCT05052801] and FORTITUDE-102 with FOLFOX and nivolumab [NCT05111626]) hopefully will confirm the early results seen in the FIGHT trial. Awareness and early attention to treatment-associated toxicities will be critical for the potential future incorporation of bemarituzumab into clinical practice.
A study by Ramos‐Santillan and colleagues explored whether the order of treatment modalities matters in the management of early-stage gastric cancer. Typically, perioperative chemotherapy (both neoadjuvant and adjuvant) is used during treatment of early-stage gastric cancer, which is usually defined as at least cT2N0 or cTxN+ disease. In this study, multivariable Cox regression analyses were performed on propensity score-matched cohorts. The study analyzed outcomes of 11,984 patients who were identified using the US National Cancer Database and treated between 2005 and 2014. The results revealed that patients who had stage I disease had better outcomes with upfront resection followed by adjuvant therapy. Patients with stage III disease did better with a neoadjuvant approach, whereas patients with stage II disease had similar outcomes regardless of chemotherapy timing. This research has the limitations inherent in its retrospective design, including the lack of prospective enrollment and controls. However, it does suggest that there may be a fraction of patients who should be treated with upfront resection. For incorporation of this change into standard practice, the question of therapy sequencing should be answered in a randomized prospective trial that incorporates the most updated systemic therapy (fluorouracil, leucovorin, oxaliplatin, and docetaxel [FLOT]) into its design.
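For readers less familiar with the propensity score-matched design used in registry analyses like this one, the core matching step can be sketched as follows. This is an illustrative 1:1 greedy nearest-neighbor match on hypothetical propensity scores; the study's actual covariates, matching algorithm, and caliper are not reproduced here.

```python
# Illustrative 1:1 greedy nearest-neighbor propensity-score matching,
# of the kind commonly used in National Cancer Database analyses.
# All scores below are synthetic examples, not study data.

def greedy_match(treated, controls, caliper=0.05):
    """Match each treated score to the closest unused control score.

    treated, controls: lists of propensity scores (floats in [0, 1]).
    Returns a list of (treated_index, control_index) pairs; treated
    patients with no control within the caliper are left unmatched.
    """
    available = set(range(len(controls)))
    pairs = []
    for i, t in enumerate(treated):
        best_j, best_d = None, caliper
        for j in available:
            d = abs(t - controls[j])
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            available.discard(best_j)
            pairs.append((i, best_j))
    return pairs

# Example: upfront-surgery vs neoadjuvant cohorts with synthetic scores
treated_scores = [0.31, 0.52, 0.74]
control_scores = [0.30, 0.50, 0.90, 0.73]
print(greedy_match(treated_scores, control_scores))  # [(0, 0), (1, 1), (2, 3)]
```

After matching, outcome models (here, multivariable Cox regression) are fit on the matched cohorts, which balances measured confounders but, as the authors acknowledge, cannot substitute for randomization.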
Chemotherapy continues to play a critical role during first-line treatment of advanced esophageal and gastric adenocarcinoma. Triplet chemotherapy regimens are known to have increased efficacy in this setting, but their use has been limited by associated toxicities. A study by Nguyen and colleagues evaluated the TCX regimen (paclitaxel, carboplatin, and capecitabine) during first-line treatment of advanced gastric cancer. This regimen is similar to other triplet regimens, such as FLOT and DCF (docetaxel, cisplatin, and fluorouracil), which have proven activity in this disease. This prospective phase 2 trial enrolled 83 patients. The median PFS (9.3 months) and OS (17 months) compared favorably with historical controls. The regimen had expected adverse events, with cytopenias and fatigue being the most frequently reported. On the basis of the reported safety and efficacy, TCX has potential to be used as a chemotherapy backbone in future trials, but larger trials are needed to confirm the phase 2 results.
References
Wainberg ZA, Enzinger PC, Kang YK, et al. Bemarituzumab in patients with FGFR2b-selected gastric or gastro-oesophageal junction adenocarcinoma (FIGHT): a randomised, double-blind, placebo-controlled, phase 2 study. Lancet Oncol. 2022 Oct 13. doi: 10.1016/S1470-2045(22)00603-9
Commentary: Chemoradiotherapy in CRC, November 2022
Once again, I have been given the distinct honor of analyzing two of the most provocative studies in colorectal cancer this month for Clinical Edge. The first study I will examine was done by Khamzina and colleagues and attempts to define the optimal time to perform surgery after neoadjuvant chemoradiotherapy in locally advanced rectal cancer. In this retrospective analysis, 770 patients who received long-course chemoradiotherapy for rectal cancer followed by total mesorectal excision (TME) were analyzed according to the interval between completion of radiation and surgery. Patients were separated into two groups: 6-8 weeks (n = 502) vs >8 weeks (n = 268). Though the pathologic complete response rates and 5-year disease-free survival rates were not significantly different between the two groups, tumor regression grade was significantly better in the >8 weeks arm (P = .004). This result confirms many previous studies that demonstrate continued tumor shrinkage months after completion of chemoradiotherapy and may provide an explanation of why the OPRA trial demonstrated a higher TME-free rate in the chemoradiotherapy-then-chemotherapy arm than it did in the induction chemotherapy-then-chemoradiotherapy arm (53% vs 41%).
Schaefer and colleagues looked at the potential prognostic markers for efficacy of transarterial radioembolization (TARE) with 90Y resin microspheres in the treatment of liver-dominant metastatic colorectal cancer (mCRC). Their study evaluated 237 patients with liver-dominant mCRC from the prospective observational CIRSE Registry for SIR-Spheres Therapy (CIRT) study who were scheduled to receive TARE with 90Y resin microspheres. For these patients, the aspartate transaminase-to-platelet ratio index (APRI), international normalized ratio (INR), and albumin-bilirubin (ALBI) grade were measured prior to treatment to potentially detect values that might be associated with differential outcomes from TARE. An APRI > 0.40 independently predicted worse overall survival (OS) (hazard ratio [HR] 2.25; P < .0001), progression-free survival (PFS) (HR 1.42; P = .0416), and hepatic PFS (HR 1.50; P = .0207). The other independent predictors for worse OS and hepatic PFS were an INR value of < 1 (HR 1.66; P = .0091) and ALBI grade 3 (HR 5.29; P = .0075), respectively. It is very difficult to make much out of this study save to say that poorer liver function at baseline (at least with respect to APRI and ALBI) predicts worse outcomes after TARE, which is none too controversial an opinion. That said, APRI and ALBI may be able to provide an extra measure of granularity to determine who might be more of a marginal candidate for TARE than would categorization according to Child-Pugh score alone. Saving these patients from a potentially morbid procedure would be a significant benefit.
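For context on the two indices that drove the study's findings, the published formulas for APRI and the ALBI score can be computed directly. The sketch below uses the standard definitions of these indices with hypothetical laboratory values; it is illustrative only and does not use CIRT registry data.

```python
import math

# Illustrative calculators for the two liver-function indices discussed
# above, using their standard published formulas. The example patient
# values are hypothetical, not drawn from the CIRT registry.

def apri(ast_u_l, ast_uln_u_l, platelets_10e9_l):
    """AST-to-platelet ratio index: (AST / upper limit of normal) x 100,
    divided by platelet count in 10^9/L."""
    return (ast_u_l / ast_uln_u_l) * 100 / platelets_10e9_l

def albi_score(bilirubin_umol_l, albumin_g_l):
    """Albumin-bilirubin (ALBI) linear predictor."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def albi_grade(score):
    """Standard ALBI grade cutoffs: <= -2.60 (1), <= -1.39 (2), else 3."""
    if score <= -2.60:
        return 1
    if score <= -1.39:
        return 2
    return 3

# Hypothetical patient: AST 54 U/L (ULN 40), platelets 150 x 10^9/L,
# bilirubin 17 umol/L, albumin 40 g/L
print(round(apri(54, 40, 150), 2))        # 0.9 -> above the 0.40 cutoff
print(albi_grade(albi_score(17, 40)))     # 2
```

In the study's terms, this hypothetical patient would fall in the higher-risk APRI stratum (> 0.40) but not in ALBI grade 3, the grade associated with worse hepatic PFS.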
AGA News – November 2022
Vaccine recommendations for your patients with IBD
It’s cold and flu season – are your patients with inflammatory bowel disease (IBD) properly informed about increased risk of infections?
It is especially important for patients with IBD to receive the flu vaccine every year. Patients with IBD should receive the flu shot, not the nasal spray, which is a live vaccine.
The AGA GI Patient Center provides additional recommendations about vaccines in adults with IBD. Talk to your patients with IBD about what vaccines are needed for their treatment regimen, age and sex.
Review vaccines and vaccine-preventable diseases the patient has had when first diagnosed with IBD, no matter the patient's age, and continue discussing vaccines during regular health care visits.
Give patients vaccine(s) for infections to which they are not immune as soon as possible.
Make sure patients are up to date on live vaccines, or administer them, prior to starting certain immunosuppressive treatments, such as biologics.
Help drive HCV testing, treatment, and eradication
AGA has been working with the Centers for Medicare & Medicaid Services and the Centers for Disease Control and Prevention to help drive the national priority for hepatitis C virus (HCV) testing, treatment and eradication.
Updates to QPP measure 400
In July 2022, CMS contacted AGA to modify measure 400 – to update the one-time HCV screening measure to include a referral to treatment for patients with positive antibodies. CMS also advised that confirmation of eradication is a priority, and we have started working with the CDC to modify AGA's sustained virologic response measure for future consideration in the Quality Payment Program.
The modifications to measure 400 have been drafted, and given the substantial changes to the measure specification, the measure must be retested and submitted to the National Quality Forum for consideration by the Measures Application Partnership. The CMS contractor for this project, Mathematica, will lead this testing initiative, and selected test sites will qualify for up to $2,000 to participate.
Additionally, there will be a second phase of testing to use deidentified patient-level data to assess the measure’s validity and reliability, which will be contracted separately. Our hope is that testing sites recruited for the first phase of testing will stay on through the second phase.
Participants in the second phase of testing are expected to have at least 2 years of patient data available containing data elements needed to calculate the measure. Ideally, each of the participating clinicians at the prospective testing sites would see at least 40 confirmed HCV-positive cases annually.
Commentary: COVID-19, Tenosynovitis, and RA, November 2022
Multiple studies have emphasized the potential for severe COVID-19 outcomes in patients with rheumatic disease, including patients with rheumatoid arthritis (RA). Because these studies often group together patients with different diseases, medications, and manifestations, differences in outcomes between patients with these conditions may be difficult to tease out.
Figueroa-Parra and colleagues performed a retrospective cohort study comparing people with RA who developed COVID-19 with those who did not have RA to examine the effect of RA characteristics, such as interstitial lung disease (ILD), serostatus, and bone erosions, on COVID-19 outcomes. Patients with RA, particularly those with seropositive RA, bone erosions, and RA-associated ILD, had approximately twofold (or higher) risk for severe COVID-19 outcomes, such as mortality or mechanical ventilation, than did those without RA. However, no difference in outcomes was seen between patients with RA who were seropositive and those who were seronegative, with or without bone erosions, or with or without ILD. The mechanism by which RA phenotypes and their treatment affect this risk remains unclear.
Li and colleagues also looked at COVID-19 outcomes in patients with RA according to vaccination status using a UK primary care database. Among unvaccinated patients, the risk for SARS-CoV-2 infection and hospitalization or mortality because of COVID-19 was modestly higher in people with RA. Among vaccinated patients, there was no increased risk for breakthrough infection, COVID-19 hospitalization, or mortality observed in patients with RA over 3 or 6 months of follow-up, with a slight increase over 9 months of follow-up. Overall, both studies support prior research suggesting a higher risk for more severe COVID-19 in patients with RA, as well as potential mitigation with vaccination.
Predictors of RA course and severity are of great interest in determining the optimal therapy to reduce joint damage and prevent RA progression while also minimizing the adverse effects of treatment. Early disease course has been shown to be important in several studies. Giollo and colleagues compared patients with "difficult-to-treat RA," ie, RA that is resistant to multiple biologic disease-modifying antirheumatic drugs (bDMARDs) or targeted synthetic DMARDs (tsDMARDs), with those without it in an inception cohort study and found that early difficult management as well as delayed initiation of methotrexate was associated with persistent inflammatory symptoms. This finding does not show a causative relationship between methotrexate and protection from the development of refractory RA but does lend support for early aggressive treatment in patients with a high inflammatory burden.
Conversely, Parisi and colleagues performed a subanalysis of the STARTER study of patients with RA in clinical remission to evaluate the impact of different therapies. The STARTER study had shown an association between ultrasound detection of tenosynovitis and RA flares. Among the more than 250 patients completing the study, ultrasound-detected tenosynovitis was better controlled in patients on combination bDMARD and conventional synthetic DMARD (csDMARD) therapy than in those on csDMARD monotherapy, with a trend toward fewer flares in patients on combination therapy. Given the relatively small effect, it is not clear that combination therapy is associated with deeper remission, but, as suggested in prior studies, ultrasound evidence of tenosynovitis may be worth considering prior to tapering therapy.
Multiple studies have emphasized the potential for severe COVID-19 outcomes in patients with rheumatic disease, including patients with rheumatoid arthritis (RA). Because these studies often group together patients with different diseases, medications, and manifestations, differences in outcomes between patients with these conditions may be difficult to tease out.
Figueroa-Parra and colleagues performed a retrospective cohort study comparing people with RA who developed COVID-19 to those who did not have RA to examine the effect of RA characteristics, such as interstitial lung disease (ILD), serostatus, and bone erosions, on COVID-19 outcomes. Patients with RA, particularly those with seropositive RA, bone erosions, and RA-associated ILD, had approximately twofold (or higher) risk for severe COVID-19 outcomes, such as mortality or mechanical ventilation, than did those without RA. However, there was no difference in outcomes seen between patients with RA who were seropositive compared with those who were seronegative, with or without bone erosions, or with or without ILD. The mechanism by which RA phenotypes and their treatment affect this risk remains unclear.
Li and colleagues also looked at COVID-19 outcomes in patients with RA according to vaccination status using a UK primary care database. Among unvaccinated patients, the risk for SARS-CoV-2 infection and hospitalization or mortality because of COVID-19 were modestly higher in people with RA. Among vaccinated patients, there was no increased risk for breakthrough infection, COVID-19 hospitalization, or mortality observed in patients with RA over 3 or 6 months of follow-up, with a slight increase over 9 months of follow-up. Overall, both studies support prior research suggesting a higher risk for more severe COVID-19 in patients with RA, as well as potential mitigation with vaccination.
Predictors of RA course and severity are of great interest in determining the optimal therapy to reduce joint damage and prevent RA progression while also minimizing the adverse effects of treatment. Early disease course has been shown to be important in several studies. Giollo and colleagues compared patients with "difficult-to-treat RA," ie, RA that is resistant to multiple biologic disease-modifying antirheumatic drugs (bDMARD) or targeted synthetic DMARD (tsDMARD), with those without in an inception cohort study and found that early difficult management as well as delay of methotrexate initiation was associated with persistent inflammatory symptoms. This finding does not show a causative relationship between methotrexate and protection from the development of refractory RA but does lend support for early aggressive treatment in patients with a high inflammatory burden.
Conversely, Parisi and colleagues performed a subanalysis of the STARTER study of patients with RA in clinical remission to evaluate the impact of different therapies. The STARTER study had shown an association between ultrasound detection of tenosynovitis and RA flares. Of the more than 250 patients completing the study, ultrasound evidence of tenosynovitis was better controlled in patients on combination bDMARD and conventional synthetic DMARD (csDMARD) therapy than in those on csDMARDs monotherapy, with a trend toward reduction in flares in patients on combination therapy. Given the relatively small effect, it is not clear that combination therapy is associated with deeper remission, but, as suggested in prior studies, ultrasound evidence of tenosynovitis may be worthwhile considering prior to tapering therapy.
Multiple studies have emphasized the potential for severe COVID-19 outcomes in patients with rheumatic disease, including patients with rheumatoid arthritis (RA). Because these studies often group together patients with different diseases, medications, and manifestations, differences in outcomes between patients with these conditions may be difficult to tease out.
Figueroa-Parra and colleagues performed a retrospective cohort study comparing people with RA who developed COVID-19 to those who did not have RA to examine the effect of RA characteristics, such as interstitial lung disease (ILD), serostatus, and bone erosions, on COVID-19 outcomes. Patients with RA, particularly those with seropositive RA, bone erosions, and RA-associated ILD, had approximately twofold (or higher) risk for severe COVID-19 outcomes, such as mortality or mechanical ventilation, than did those without RA. However, there was no difference in outcomes seen between patients with RA who were seropositive compared with those who were seronegative, with or without bone erosions, or with or without ILD. The mechanism by which RA phenotypes and their treatment affect this risk remains unclear.
Li and colleagues also looked at COVID-19 outcomes in patients with RA according to vaccination status using a UK primary care database. Among unvaccinated patients, the risks for SARS-CoV-2 infection and for hospitalization or mortality because of COVID-19 were modestly higher in people with RA. Among vaccinated patients, there was no increased risk for breakthrough infection, COVID-19 hospitalization, or mortality observed in patients with RA over 3 or 6 months of follow-up, with a slight increase over 9 months of follow-up. Overall, both studies support prior research suggesting a higher risk for more severe COVID-19 in patients with RA, as well as potential mitigation with vaccination.
Predictors of RA course and severity are of great interest in determining the optimal therapy to reduce joint damage and prevent RA progression while also minimizing the adverse effects of treatment. Early disease course has been shown to be important in several studies. Giollo and colleagues compared patients with "difficult-to-treat RA," ie, RA that is resistant to multiple biologic disease-modifying antirheumatic drugs (bDMARDs) or targeted synthetic DMARDs (tsDMARDs), with those without it in an inception cohort study and found that early management difficulty, as well as delayed initiation of methotrexate, was associated with persistent inflammatory symptoms. This finding does not show a causative relationship between methotrexate and protection from the development of refractory RA but does lend support for early aggressive treatment in patients with a high inflammatory burden.
Conversely, Parisi and colleagues performed a subanalysis of the STARTER study of patients with RA in clinical remission to evaluate the impact of different therapies. The STARTER study had shown an association between ultrasound detection of tenosynovitis and RA flares. Among the more than 250 patients completing the study, ultrasound evidence of tenosynovitis was better controlled in patients on combination bDMARD and conventional synthetic DMARD (csDMARD) therapy than in those on csDMARD monotherapy, with a trend toward fewer flares in patients on combination therapy. Given the relatively small effect, it is not clear that combination therapy is associated with deeper remission, but, as suggested in prior studies, ultrasound evidence of tenosynovitis may be worth considering prior to tapering therapy.
Study uncovers two molecular subgroups of cervical cancer
Scientists have discovered that cervical cancer can be divided into two distinct molecular subgroups – one far more aggressive than the other – offering hope of better understanding and treatment of the disease.
In the United Kingdom, there are over 3,000 new cases of cervical cancer and around 850 deaths each year. The disease is almost always caused by the human papillomavirus (HPV), and vaccination against the virus has successfully reduced the incidence of cervical cancer: incidence has fallen by 87% among women in their 20s in England who were offered the vaccine at age 12-13 years as part of the U.K. HPV vaccination program.
“Despite major steps forward in preventing cervical cancer, many women still die from the disease,” said Tim Fenton, MD, associate professor in cancer biology, School of Cancer Sciences Centre for Cancer Immunology, University of Southampton (England), and coauthor of the new study.
Two distinct subgroups
In the new study, published in Nature Communications, researchers described their breakthrough findings as a “major step forward” in understanding the disease, and said they provided a “tantalizing new clue” in determining the best treatments for individual patients.
For the observational study – part of the largest ‘omics’ study of its kind – researchers led by scientists at University College London and the University of Southampton began by applying a multiomics approach to identify combinations of molecular markers and characteristics associated with the biological processes at work in cervical cancer cells. The integrated multiomic analysis of 643 cervical squamous cell carcinomas (CSCC) – the most common histological variant of cervical cancer – represented patient populations from the United States, Europe, and sub-Saharan Africa.
To begin with, they analyzed and compared DNA, RNA, proteins, and metabolites in 236 CSCC cases in a publicly available U.S. database. They found that the U.S. cancers fell into two distinct “omics” subgroups, which they named C1 and C2. On further investigation, the researchers identified that C1 tumors contained a much higher number of cytotoxic T cells. “The findings suggested that patients with C1 tumors would have a stronger immune response within the tumor micro-environment,” they said.
Weaker antitumor immune response
To determine whether the two subtypes affect patients with cervical cancer in different ways, the team, which also included researchers from the University of Kent, the University of Cambridge, Oslo University Hospital, the University of Bergen (Norway), and the University of Innsbruck (Austria), derived molecular profiles and examined clinical outcomes for a further 313 CSCC cases from Norway and Austria.
The researchers found that, just as in the U.S. cohort, nearly a quarter of patients fell into the C2 subtype, and that again C1 tumors contained far more killer T cells than C2 tumors. “Importantly, the data also showed C2 was far more clinically aggressive with worse outcomes for patients,” the authors said.
Patients with C2 tumors were more than twice as likely (hazard ratio, 2.32) to die from their cervical cancer at any point during the follow-up period – up to 21 years – than those with C1 tumors. Five-year disease-specific survival was 79% for C1 and 66% for C2, the authors pointed out.
They highlighted that the difference in outcomes between patients with C1 and C2 tumors was very similar across the U.S. and European cohorts.
Kerry Chester, PhD, professor of molecular medicine at UCL Cancer Institute, and coauthor, said: “Inclusion of patient cohorts in Norway and Austria, for which highly detailed clinical information was available to complement the molecular data, were key factors in the success of the study.”
Analyzing a further cohort of 94 Ugandan CSCC cases, the team found that C2 tumors were much more common than C1 tumors in patients who were also HIV-positive, “underlining the link to a weaker antitumor immune response” in this group.
Molecular subtyping offers better prognostic information
Cervical cancer can be caused by at least 12 different ‘high-risk’ HPV types, and there have been conflicting reports as to whether the HPV type present in a cervical cancer influences the prognosis for the patient. CSCCs can now also be categorized into two subtypes, C1 and C2, the authors explained, of which C1 tumors have the more favorable outcome.
“Although HPV16 is more likely to cause C1 tumors and HPV18 C2 tumors, HPV type is not an independent predictor of prognosis, suggesting it is the tumor type rather than the causative HPV type that is critical for the disease outcome,” they highlighted.
“Intriguingly, the C1/C2 grouping appeared to be more informative than the type of HPV present,” they added. “While certain HPV types were found more commonly in either C1 or C2 tumors, prognosis was linked to the group to which the tumor could be assigned, rather than the HPV type it contained.”
The reason that HPV16 and other alpha-9 HPV types have been associated with more favorable outcomes was possibly that these viruses are “more likely to cause C1-type tumors,” the authors suggested. Although larger numbers are needed for robust within-stage comparisons of C1 and C2 tumors, “we observe a clear trend in the survival rates between C1 and C2 by stage,” they said.
Taking molecular (C1/C2) subtyping into account may allow for more “accurate prognostication” than current staging and potentially different clinical management of patients with C1 versus C2 tumors, the authors said. This could include the identification of patients at risk of relapse who may require further adjuvant therapy after completion of up-front therapy.
New therapeutic targets
Dr. Fenton highlighted that the study findings suggested that determining whether a patient has a C1 or a C2 cervical cancer could help in planning their treatment, since it appeared to provide “additional prognostic information beyond that gained from clinical staging.” Given the differences in the antitumor immune response observed in C1 and C2 tumors, this classification might also be useful in predicting which patients are likely to benefit from emerging immunotherapy drugs, he said.
The authors also reported that CSCC can develop along “two trajectories” associated with differing clinical behavior that can be identified using defined gene expression or DNA methylation signatures, and that this may guide “improved clinical management of cervical cancer patients.”
“This collaborative multidisciplinary research is a major step forward in our understanding of cervical cancer,” said Dr. Chester. “Through careful molecular profiling and genetic analysis of cervical cancer tumors we have gained valuable new insight into the tumor microenvironment and factors potentially making the cancer less aggressive in some patients.”
The authors expressed hope that their study findings will stimulate functional studies of genes and their role in cervical cancer pathogenesis, potentially enabling identification of new therapeutic targets.
The study was funded by the Debbie Fund (a UCL postgraduate research scholarship), Rosetrees Trust, Cancer Research UK, the Biotechnology and Biosciences Research Council, the Royal Society, the Global Challenges Doctoral Centre at the University of Kent, MRC, PCUK, BBSRC, TUF, Orchid, and the UCLH BRC. The authors declared no competing interests.
A version of this article first appeared on Medscape UK.
FROM NATURE COMMUNICATIONS
Major U.S. GI societies issue strategic plan on environmental sustainability
Major U.S. gastrointestinal societies have united behind a push for environmental sustainability in digestive health care, according to a new joint strategic plan published simultaneously in Gastroenterology, Gastrointestinal Endoscopy, American Journal of Gastroenterology, and Hepatology.
The plan outlines numerous strategic goals and objectives across clinical care, education, research, and industry to support sustainable practices. Led by first author Heiko Pohl, MD, a gastroenterologist and hepatologist at the Veterans Affairs Medical Center in White River Junction, Vt., and professor of medicine at the Geisel School of Medicine at Dartmouth, Hanover, N.H., the joint statement includes task force members from the American Association for the Study of Liver Diseases, the American College of Gastroenterology, the American Gastroenterological Association, and the American Society for Gastrointestinal Endoscopy.
“It is clear that the evolving climate crisis, with its deleterious effects on planetary ecosystems, also poses harm to the health of humankind,” the authors wrote in Gastroenterology.
“Climate change affects many social and environmental determinants of health, including water and food security, shelter, physical activity, and accessible health care,” they added. These changes influence gastrointestinal practice (for example, increased risk of obesity and fatty liver disease, disruption of the microbiome, compromised gut immune function).
At the same time, health care delivery contributes to climate change and greenhouse gas emissions worldwide, they wrote. As a procedure-intensive specialty, digestive health care adds to the health care carbon footprint through single-use supplies and high levels of waste.
“As is the case for the impact of climate change by and on health care systems, there is a vicious cycle whereby climate change negatively impacts individual digestive health, which accelerates specialized health care activity, which further contributes to the climate crisis,” the authors wrote.
The multisociety task force noted the transition to a more sustainable model will be challenging and require major modification of current habits in practice. However, the long-term effects “will promote health, save cost, and ... correspond with a broader shared vision of planetary health,” they wrote.
The strategic plan covers seven domains: clinical settings, education, research, society efforts, intersociety efforts, industry, and advocacy. Each domain has specific initiatives for 2023 to 2027. Years 1 and 2 are conceived as a period of self-assessment and planning, followed by implementation and assessment during years 3-5.
In the plan, clinical settings would assess the carbon footprint and waste within all areas of practice and identify low-carbon and low-waste alternatives as immediate, short-term, and long-term solutions. This involves creating a framework for GI practices to develop sustainability metrics and offer affordable testing and treatment alternatives with a favorable environmental impact.
Through education, the societies would raise awareness and share sustainability practices with health care leadership, practitioners, and patients regarding the interactions among climate change, digestive health, and health care services. This would include discussions about the professional and ethical implications of old and new patterns of shared resource utilization.
The societies also support raising and allocating resources for research related to the intersections of climate change, digestive health, and health care, with an emphasis on vulnerable groups. This would encourage the inclusion of environmental considerations in research proposals.
At the GI society level, the groups suggest assessing and monitoring the current environmental impact of society-related activities. This entails identifying and implementing measures that would decrease the carbon footprint and reduce waste, as well as track financial costs and savings and environmental benefits from efforts included in a sustainability model.
At the intersociety level, the U.S. groups would collaborate with national and international GI and hepatology societies to support sustainability efforts and use validated metrics to evaluate those efforts. The multisociety plan has received endorsements from nearly two dozen groups, including the Crohn’s & Colitis Foundation, World Endoscopy Organization, and World Gastroenterology Organisation.
The plan calls for engagement with GI- and hepatology-focused industry and pharmaceutical partners to develop environmentally friendly products, publish information on carbon footprint implications, and promote options for recycling.
Through advocacy efforts, the societies would also identify and incorporate principles of sustainable health care among the goals of relevant political action committees, as well as leverage collaborative advocacy efforts with national and international health care and research agencies, political leaders, and payors.
“We are grateful that several other GI organizations have endorsed our plan, which reflects the importance and timeliness of the opportunity to work together and share best practices to overcome the burden of climate change on digestive health and help mitigate the environmental impact of GI practice,” the authors concluded.
The authors did not declare a funding source for the report. Several of the authors declared financial relationships with pharmaceutical companies, serving as a consultant or receiving research funding.
according to a new joint strategic plan published simultaneously in Gastroenterology, Gastrointestinal Endoscopy, American Journal of Gastroenterology, and Hepatology.
The plan outlines numerous strategic goals and objectives across clinical care, education, research, and industry to support sustainable practices. With first author Heiko Pohl, MD, a gastroenterologist and hepatologist at the Veterans Affairs Medical Center in White River Junction, Vermont, and professor of medicine at the Geisel School of Medicine at Dartmouth, Hanover, N.H., the joint statement includes task force members from the American Association for the Study of Liver Diseases, American College of Gastroenterology, American Gastroenterological Association, and American Society for Gastrointestinal Endoscopy.
“It is clear that the evolving climate crisis, with its deleterious effects on planetary ecosystems, also poses harm to the health of humankind,” the authors wrote in Gastroenterology.
“Climate change affects many social and environmental determinants of health, including water and food security, shelter, physical activity, and accessible health care,” they added. These changes influence gastrointestinal practice (for example, increased risk of obesity and fatty liver disease, disruption of the microbiome, compromised gut immune function).
At the same time, health care delivery contributes to climate change and greenhouse gas emissions worldwide, they wrote. As a procedure-intensive specialty, digestive health care adds to the health care carbon footprint through single-use supplies and high levels of waste.
“As is the case for the impact of climate change by and on health care systems, there is a vicious cycle whereby climate change negatively impacts individual digestive health, which accelerates specialized health care activity, which further contributes to the climate crisis,” the authors wrote.
The multisociety task force noted the transition to a more sustainable model will be challenging and require major modification of current habits in practice. However, the long-term effects “will promote health, save cost, and ... correspond with a broader shared vision of planetary health,” they wrote.
The strategic plan covers seven domains: clinical settings, education, research, society efforts, intersociety efforts, industry, and advocacy. Each domain has specific initiatives for 2023 to 2027. Years 1 and 2 are conceived as a period of self-assessment and planning, followed by implementation and assessment during years 3-5.
In the plan, clinical settings would assess the carbon footprint and waste within all areas of practice and identify low-carbon and low-waste alternatives as immediate, short-term, and long-term solutions. This involves creating a framework for GI practices to develop sustainability metrics and offer affordable testing and treatment alternatives with a favorable environmental impact.
Through education, the societies would raise awareness and share sustainability practices with health care leadership, practitioners, and patients regarding the interactions among climate change, digestive health, and health care services. This would include discussions about the professional and ethical implications of old and new patterns of shared resource utilization.
The societies also support raising and allocating resources for research related to the intersections of climate change, digestive health, and health care, with an emphasis on vulnerable groups. This would encourage the inclusion of environmental considerations in research proposals.
At the GI society level, the groups suggest assessing and monitoring the current environmental impact of society-related activities. This entails identifying and implementing measures that would decrease the carbon footprint and reduce waste, as well as track financial costs and savings and environmental benefits from efforts included in a sustainability model.
At the intersociety level, the U.S. groups would collaborate with national and international GI and hepatology societies to support sustainability efforts and use validated metrics to evaluate their efforts. The multisociety plan has received endorsements from nearly two dozen groups, including the Crohn’s & Colitis Foundation, World Endoscopy Organization, and World Gastroenterology Organisation.
The plan calls for engagement with GI- and hepatology-focused industry and pharmaceutical partners to develop environmentally friendly products, publish information on carbon footprint implications, and promote options for recycling.
Through advocacy efforts, the societies would also identify and incorporate principles of sustainable health care among the goals of relevant political action committees, as well as leverage collaborative advocacy efforts with national and international health care and research agencies, political leaders, and payors.
“We are grateful that several other GI organizations have endorsed our plan, which reflects the importance and timeliness of the opportunity to work together and share best practices to overcome the burden of climate change on digestive health and help mitigate the environmental impact of GI practice,” the authors concluded.
The authors did not declare a funding source for the report. Several of the authors declared financial relationships with pharmaceutical companies, serving as a consultant or receiving research funding.
FROM GASTROENTEROLOGY