New expert guidance on ketamine for resistant depression

An international panel of mood disorder experts has published guidance on how to safely and effectively use ketamine and esketamine to treat adults with treatment-resistant depression (TRD).

“Ketamine and esketamine are the first rapid-onset treatments for adults with TRD, and there was an international need for best-practice guidance on the deft and safe implementation of ketamine and esketamine at the point of care, as none previously existed,” first author Roger McIntyre, MD, professor of psychiatry and pharmacology, University of Toronto, said in an interview.

“This need has only been amplified by the significant increase in the number of clinics and centers providing this treatment,” added Dr. McIntyre, head of the mood disorders psychopharmacology unit.

Their article was published online March 17 in the American Journal of Psychiatry.
 

Insufficient evidence of long-term efficacy

As reported by this news organization, the U.S. Food and Drug Administration (FDA) approved esketamine nasal spray (Spravato) for TRD in March 2019.

In August 2020, the FDA updated the approval to include adults with major depression and suicidal thoughts or actions.

To provide clinical guidance, Dr. McIntyre and colleagues synthesized the available literature on the efficacy, safety, and tolerability of ketamine and esketamine for TRD.

The evidence, they note, supports the rapid-onset (within 1-2 days) efficacy of esketamine and ketamine in TRD.

The strongest evidence of efficacy is for intranasal esketamine and intravenous ketamine. There is insufficient evidence for oral, subcutaneous, or intramuscular ketamine for TRD, they report.

Intranasal esketamine demonstrates efficacy, safety, and tolerability for up to 1 year in adults with TRD. Evidence for long-term efficacy, safety, and tolerability of intravenous ketamine for patients with TRD is insufficient, the group notes.

They also note that esketamine is approved in the United States for major depression in association with suicidal ideation or behavior, but that it has not been shown to reduce completed suicide.

Safety concerns with ketamine and esketamine identified in the literature include, but are not limited to, psychiatric, neurologic/cognitive, genitourinary, and hemodynamic effects.
 

Implementation checklist

The group has developed an “implementation checklist” for use of ketamine/esketamine in clinical practice.

Starting with patient selection, they note that appropriate patients are those with a confirmed diagnosis of TRD for whom psychosis and other conditions that would significantly affect the risk-benefit ratio have been ruled out.

They suggest that a physical examination and monitoring of vital signs be undertaken during treatment and during posttreatment surveillance. A urine drug screen should be considered if appropriate.

The group advises that esketamine and ketamine be administered only in settings with multidisciplinary personnel, including, but not limited to, those with expertise in the assessment of mood disorders.

Clinics should be equipped with appropriate cardiorespiratory monitoring and be capable of psychiatric assessment of dissociation and psychotomimetic effects.

Depressive symptoms should be measured, and the authors suggest assessing for anxiety, cognitive function, well-being, and psychosocial function.

Patients should be monitored immediately after treatment to ensure cardiorespiratory stability, clear sensorium, and attenuation of dissociative and psychotomimetic effects.

The United States and some other countries require a risk evaluation and mitigation strategy (REMS) when administering esketamine. Under the REMS, it is advised that all patients be monitored for a minimum of 2 hours before discharge.

Patients should arrange for reliable transportation for each appointment, and they should be advised not to operate motor vehicles or hazardous machinery without at least one night of sleep.

“The rate of treatment-resistant depression as well as suicide is extraordinary and rising in many parts of the world, only worsened by COVID-19,” said Dr. McIntyre.

“Clinicians of different professional backgrounds have been interested in ketamine/esketamine, and we are extraordinarily pleased to see our international guidelines published,” he added.
 

‘Extremely useful’

Reached for comment, Alan Schatzberg, MD, professor of psychiatry and behavioral sciences at Stanford (Calif.) University, said this document “puts a lot of information in one place as far as what we know and what we don’t know right now, and that’s helpful. I think it’s an attempt to have a kind of a somewhat objective review of the literature, and it’s in a good journal.”

The article, Dr. Schatzberg added, “could be extremely useful for someone who is considering whether ketamine is useful for a patient or what they can tell a patient about ketamine, that is, about how long they might need, is it going to work, will it continue to work, and the level of data we have either on benefits or side effects.”

The research had no specific funding. The original article contains a complete list of author disclosures. Dr. Schatzberg has received grant support from Janssen; has served as a consultant for Alkermes, Avanir, Brain Resource, Bracket, Compass, Delpor, Epiodyne, GLG, Jazz, Janssen Pharmaceuticals, Lundbeck/Takeda, McKinsey and Company, Merck, Myriad Genetics, Neuronetics, Owl Analytics, Pfizer, Sage, Sunovion, and Xhale; holds equity in Corcept (cofounder), Delpor, Dermira, Epiodyne, Gilead, Incyte Genetics, Intersect ENT, Madrigal, Merck, Owl Analytics, Seattle Genetics, Titan, and Xhale; and is listed as an inventor on patents for pharmacogenetics and antiglucocorticoid use in the prediction of antidepressant response.

A version of this article first appeared on Medscape.com.


STEP 4: Ongoing semaglutide treatment extends weight loss

Weekly injections with the GLP-1 receptor agonist semaglutide helped people maintain, and even increase, their initial weight loss on the agent when they continued treatment beyond 20 weeks in results from an international, multicenter trial with 803 randomized subjects.

The study “reflects what we always see in practice, that when people lose weight their body then fights to regain it. The results underscore this” by showing what happens when people stop the drug, Domenica M. Rubino, MD, reported at the annual meeting of the Endocrine Society.

The STEP 4 study began with 902 obese or higher-risk people with an average body mass index of about 38 kg/m2 who underwent a 20-week, open-label, run-in phase of weekly subcutaneous injections of semaglutide (Ozempic), during which all subjects gradually up-titrated to the study’s maintenance dosage of 2.4 mg/week, allowing investigators to weed out intolerant, noncompliant, or nonresponsive people. After this phase excluded 99 subjects from continuing, and documented that the remaining 803 patients had already lost an average of 11% of their starting weight, the core of the study kicked in by randomizing them 2:1 to either maintain their weekly semaglutide injections for another 48 weeks or switch to placebo injections.

After 48 more weeks, the 535 people who continued active semaglutide treatment lost on average an additional 8% of their weight. Meanwhile, the 268 who switched to placebo gained 7% of the weight they had reached at the 20-week point, for a significant between-group weight-loss difference of about 15% for the study’s primary endpoint. Those maintained on semaglutide for the full 68 weeks had a cumulative average weight loss of about 17%, compared with when they first began treatment, Dr. Rubino said. Concurrently with her report, the results also appeared in an article published online in JAMA.

“It’s reassuring that people who remain on this treatment can sustain weight losses of 15%, and in some cases 20% or more. That’s huge,” Dr. Rubino said in an interview. After 68 weeks, 40% of the people who maintained their semaglutide treatment had lost at least 20% of their weight, compared with when they first started treatment.

“Preventing weight regain following initial weight loss is a well-known major challenge for people who lose weight,” commented John Clark III, MD, PhD, a weight management specialist at the University of Texas Southwestern Medical Center in Dallas who was not involved with the study. The findings from STEP 4 will be “helpful to have a discussion [with weight-loss patients] about the risks and benefits of continuing to take this medication longer than just a few months and if they want to continue taking the medication after they reach their goal weight,” Dr. Clark noted in an interview. “This new information reinforces that treatment continues to be effective after the short term.”

“This is obesity 101. If a treatment is provided that targets mechanisms of obesity, and then the treatment stops, we should not be surprised that weight regain occurs,” commented Ania M. Jastreboff, MD, PhD, codirector of the Yale Center for Weight Management in New Haven, Conn. “It’s tragic to see patients who, after successful weight loss, suffer regain because the treatment by which they lost weight stopped,” she said in an interview.



The STEP 4 study ran at 73 centers in 10 countries during 2018-2020. It enrolled adults without diabetes and with a BMI of at least 30, or at least 27 if they also had at least one weight-related comorbidity such as hypertension, dyslipidemia, or obstructive sleep apnea. Participants averaged about 47 years of age, almost 80% were women, and about 84% were White, including 8% of Hispanic or Latinx ethnicity.

The adverse-event profile was consistent with findings from trials in which semaglutide was used to treat hyperglycemia in patients with type 2 diabetes (semaglutide at a maximum once-weekly dosage of 1 mg has Food and Drug Administration approval for controlling hyperglycemia in patients with type 2 diabetes), as well as with results from other semaglutide studies and from studies of other agents in the GLP-1 receptor agonist class.

In STEP 4, 9% of patients who received semaglutide during the randomized phase and 7% of those randomized to placebo had a serious adverse reaction, and about 2% of patients in both treatment arms stopped treatment because of an adverse event. The most common adverse events on semaglutide were gastrointestinal, with diarrhea in 14%, nausea in 14%, constipation in 12%, and vomiting in 10%.

These GI effects are often mitigated by slower dose escalation, eating smaller amounts of food at a time, and not eating beyond the point of feeling full, noted Dr. Jastreboff.

The STEP 4 results follow prior reports from three other large trials – STEP 1, STEP 2, and STEP 3 – that studied the weight-loss effects of weekly semaglutide treatment in adults using varying enrollment criteria and treatment designs. “We’ve seen very consistent results [across all four studies] for efficacy and safety,” said Dr. Rubino, who owns and directs the Washington Center for Weight Management & Research in Arlington, Va.

Novo Nordisk, the company that markets semaglutide, submitted data from all four studies to the FDA late last year in an application for a new weight-loss indication at the 2.4-mg/week dosage. The company has said it expects an agency decision by June 2021.

Dr. Rubino has been an adviser and consultant to, and a speaker on behalf of, Novo Nordisk, and she has also been an investigator for studies sponsored by AstraZeneca, Boehringer Ingelheim, and Novo Nordisk. Dr. Clark had no disclosures. Dr. Jastreboff is a consultant for and has received research funding from Novo Nordisk, and she has also been a consultant to and/or received research funding from Eli Lilly and Boehringer Ingelheim.


Maternal caffeine consumption, even small amounts, may reduce neonatal size

For pregnant women, just half a cup of coffee a day may reduce neonatal birth size and body weight, according to a prospective study involving more than 2,500 women.

That’s only 50 mg of caffeine a day, which falls below the upper threshold of 200 mg set by the American College of Obstetricians and Gynecologists, lead author Jessica Gleason, PhD, MPH, of the Eunice Kennedy Shriver National Institute of Child Health and Human Development, Bethesda, Md., and colleagues reported.

“Systematic reviews and meta-analyses have reported that maternal caffeine consumption, even in doses lower than 200 mg, is associated with a higher risk for low birth weight, small for gestational age (SGA), and fetal growth restriction, suggesting there may be no safe amount of caffeine during pregnancy,” the investigators wrote in JAMA Network Open.

Findings to date have been inconsistent, with a 2014 meta-analysis reporting contrary or null results in four out of nine studies.

Dr. Gleason and colleagues suggested that such discrepancies may be caused by uncontrolled confounding factors in some of the studies, such as smoking, as well as the inadequacy of self-reporting, which fails to incorporate variations in caffeine content between beverages, or differences in rates of metabolism between individuals.

“To our knowledge, no studies have examined the association between caffeine intake and neonatal anthropometric measures beyond weight, length, and head circumference, and few have analyzed plasma concentrations of caffeine and its metabolites or genetic variations in the rate of metabolism associated with neonatal size,” the investigators wrote.

Dr. Gleason and colleagues set out to address this knowledge gap with a prospective cohort study including 2,055 nonsmoking women at low risk of birth defects who presented at 12 centers between 2009 and 2013. Mean participant age was 28.3 years, and mean body mass index was 23.6. Races and ethnicities were represented almost evenly across four groups: Hispanic (28.2%), White (27.4%), Black (25.2%), and Asian/Pacific Islander (19.2%). Rate of caffeine metabolism was defined by the single-nucleotide variant rs762551 (CYP1A2*1F), according to which slightly more women were slow metabolizers (52.7%) than fast metabolizers (47.3%).

Women were enrolled at 8-13 weeks’ gestational age, at which time they underwent interviews and blood draws, allowing for measurement of caffeine and paraxanthine plasma levels, as well as self-reported caffeine consumption during the preceding week.

Over the course of six visits, fetal growth was observed via ultrasound. Medical records were used to determine birth weights and neonatal anthropometric measures, including fat and skin fold mass, body length, and circumferences of the thigh, arm, abdomen, and head.

Neonatal measurements were compared with plasma levels of caffeine and paraxanthine, both continuously and as quartiles (Q1, ≤ 28.3 ng/mL; Q2, 28.4-157.1 ng/mL; Q3, 157.2-658.8 ng/mL; Q4, > 658.8 ng/mL). Comparisons were also made with self-reported caffeine intake.

Women who reported drinking 1-50 mg of caffeine per day had neonates with smaller subscapular skin folds (beta = –0.14 mm; 95% confidence interval, –0.27 to –0.01 mm), while those who reported more than 50 mg per day had newborns with lower birth weight (beta = –66 g; 95% CI, –121 to –10 g), smaller mid-upper thigh circumference (beta = –0.32 cm; 95% CI, –0.55 to –0.09 cm), smaller anterior thigh skin fold (beta = –0.24 mm; 95% CI, –0.47 to –0.01 mm), and smaller mid-upper arm circumference (beta = –0.17 cm; 95% CI, –0.31 to –0.02 cm).

Caffeine plasma concentrations supported these findings.

Compared with women who had caffeine plasma concentrations in the lowest quartile, those in the highest quartile gave birth to neonates with shorter length (beta = –0.44 cm; P = .04 for trend) and lower body weight (beta = –84.3 g; P = .04 for trend), as well as smaller mid-upper arm circumference (beta = –0.25 cm; P = .02 for trend), mid-upper thigh circumference (beta = –0.29 cm; P = .07 for trend), and head circumference (beta = –0.28 cm; P < .001 for trend). A comparison of lower and upper paraxanthine quartiles revealed similar trends, as did analyses of continuous measures.

“Our results suggest that caffeine consumption during pregnancy, even at levels much lower than the recommended 200 mg per day of caffeine, may be associated with decreased fetal growth,” the investigators concluded.

Sarah W. Prager, MD, of the University of Washington, Seattle, suggested that the findings “do not demonstrate that caffeine has a clinically meaningful negative clinical impact on newborn size and weight.”

She noted that there was no difference in the rate of SGA between plasma caffeine quartiles, and that most patients were thin, which may not accurately represent the U.S. population.

“Based on these new data, my take home message to patients would be that increasing amounts of caffeine can have a small but real impact on the size of their baby at birth, though it is unlikely to result in a diagnosis of SGA,” she said. “Pregnant patients may want to limit caffeine intake even more than the ACOG recommendation of 200 mg per day.”

According to Robert M. Silver, MD, of the University of Utah Health Sciences Center, Salt Lake City, “data from this study are of high quality, owing to the prospective cohort design, large numbers, assessment of biomarkers, and sophisticated analyses.”

Still, he urged a cautious interpretation from a clinical perspective.

“It is important to not overreact to these data,” he said. “The decrease in fetal growth associated with caffeine is small and may prove to be clinically meaningless. Accordingly, clinical recommendations regarding caffeine intake during pregnancy should not be modified solely based on this study.”

Dr. Silver suggested that the findings deserve additional investigation.

“These observations warrant further research about the effects of caffeine exposure during pregnancy,” he said. “Ideally, studies should assess the effect of caffeine exposure on fetal growth in various pregnancy epochs as well as on neonatal and childhood growth.”

The study was funded by the Intramural Research Program of the NICHD. Dr. Gerlanc is an employee of The Prospective Group, which was contracted to provide statistical support.


Arcalyst gets FDA nod as first therapy for recurrent pericarditis

The Food and Drug Administration has approved rilonacept (Arcalyst) to treat recurrent pericarditis and reduce the risk for recurrence in adults and children 12 years and older.

Approval of the weekly subcutaneous injection offers patients the first and only FDA-approved therapy for recurrent pericarditis, the agency said in a release.

Recurrent pericarditis is characterized by a remitting relapsing inflammation of the pericardium, and therapeutic options have been limited to NSAIDs, colchicine, and corticosteroids.

Rilonacept is a recombinant fusion protein that blocks interleukin-1 alpha and interleukin-1 beta signaling. It is already approved by the FDA to treat a group of rare inherited inflammatory diseases called cryopyrin-associated periodic syndromes.

The new indication is based on the pivotal phase 3 RHAPSODY trial in 86 patients with acute symptoms of recurrent pericarditis and systemic inflammation. After randomization, pericarditis recurred in 2 of 30 patients (7%) treated with rilonacept and in 23 of 31 patients (74%) treated with placebo, representing a 96% reduction in the relative risk for recurrence with rilonacept.
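For readers who want to check the arithmetic, here is a minimal sketch (in Python, with illustrative variable names) that recomputes the crude recurrence risks from the counts reported above. The trial's headline 96% figure comes from its time-to-event (hazard ratio) analysis, so the simple risk ratio below lands close to, but not exactly at, that number.

```python
# Crude recurrence-risk comparison using only the counts reported above.
# Illustrative only; the trial's 96% reduction reflects a hazard-ratio analysis.

def recurrence_risk(events: int, n: int) -> float:
    """Proportion of patients who had a pericarditis recurrence."""
    return events / n

risk_rilonacept = recurrence_risk(2, 30)   # 2 of 30 patients (~7%)
risk_placebo = recurrence_risk(23, 31)     # 23 of 31 patients (~74%)

relative_risk = risk_rilonacept / risk_placebo
print(f"Rilonacept: {risk_rilonacept:.0%}, placebo: {risk_placebo:.0%}")
print(f"Crude relative risk: {relative_risk:.2f} "
      f"(roughly a {1 - relative_risk:.0%} relative reduction)")
```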

Patients who received rilonacept were also pain free or had minimal pain on 98% of trial days, whereas those who received placebo had minimal or no pain on 46% of trial days.

The most common adverse effects of rilonacept are injection-site reactions and upper-respiratory tract infections.

Serious, life-threatening infections have been reported in patients taking rilonacept, according to the FDA. Patients with active or chronic infections should not take the drug.

The FDA label also advises that patients should avoid live vaccines while taking rilonacept and that it should be discontinued if a hypersensitivity reaction occurs.

The commercial launch is expected in April, according to the company.

A version of this article first appeared on Medscape.com.


Metoprolol increases severity, but not risk, of COPD exacerbations

Article Type
Changed
Fri, 03/26/2021 - 14:21

Background: Beta-blockers are underutilized in patients with both COPD and established cardiovascular indications for beta-blocker therapy, despite evidence suggesting overall benefit. Prior observational studies have associated beta-blockers with improved outcomes in COPD in the absence of cardiovascular indications; however, this has not been previously evaluated in a randomized trial.

Dr. John Gerstenberger

Study design: Placebo-controlled, double-blind, prospective, randomized trial.

Setting: A total of 26 centers in the United States.

Synopsis: The BLOCK COPD trial randomized more than 500 patients with moderate to severe COPD and no established indication for beta-blocker therapy to extended-release metoprolol or placebo. There was no significant difference in the primary endpoint of time until first exacerbation. While there was no difference in the overall risk of exacerbations of COPD, the trial was terminated early because of increased risk of severe or very severe exacerbations of COPD in the metoprolol group (hazard ratio, 1.91; 95% confidence interval, 1.20-2.83). These were defined as exacerbations requiring hospitalization and mechanical ventilation, respectively.

Importantly, this trial excluded patients with established indications for beta-blocker therapy, and study findings should not be applied to this population.

Bottom line: Metoprolol is not associated with increased risk of COPD exacerbations, but is associated with increased severity of COPD exacerbations in patients with moderate to severe COPD who have no established indications for beta-blockers.

Citation: Dransfield MT et al. Metoprolol for the prevention of acute exacerbations of COPD. N Engl J Med. 2019 Oct 20. doi: 10.1056/NEJMoa1908142.

Dr. Gerstenberger is a hospitalist and clinical assistant professor of medicine at the University of Utah, Salt Lake City.


A paleolithic raw bar, and the human brush with extinction

Article Type
Changed
Fri, 03/26/2021 - 14:10

This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”

Courtesy Dr. Bret Stetka

“He was a bold man that first ate an oyster.” – Jonathan Swift

That man or, just as likely, that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.

Courtesy Dr. Bret Stetka
Dr. Bret Stetka

Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.

For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.

The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.

Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.

Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!

“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.

“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”

MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hearty survivors of these bottlenecks.

One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.

Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.

A sea of vitamins

Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.

Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.

By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.

Unless you ask Michael Crawford.

Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.

In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.

Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.

In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (though fish cannot produce omega-3s, they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.

Recent studies show that some of omega-3s’ purported health benefits were exaggerated, but they do appear to benefit the brain, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow down the process of neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.

Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.

Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.

Not Dr. Crawford.

He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”

Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than plant- and land animal–sourced DHA, where it exists as its metabolic precursor alpha-linolenic acid. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he countered. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.

For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.

University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”

Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.

Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.

Foraging marine life in the waters off of Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.

In all likelihood, Pleistocene humans probably got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.

Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.


Biomarkers predict efficacy of DKN-01 in endometrial cancer

Article Type
Changed
Tue, 03/30/2021 - 09:50

 

The path forward for DKN-01, an investigational monoclonal antibody targeting DKK1, may be in biomarker-selected populations of patients with epithelial endometrial cancer (EEC), a phase 2 basket trial suggests.

Among 29 patients with heavily pretreated EEC, outcomes of DKN-01 monotherapy were best in patients with Wnt activating mutations, high levels of DKK1 expression, or PIK3CA activating mutations.

Patients in these groups had better disease control rates and progression-free survival (PFS), reported Rebecca C. Arend, MD, of the University of Alabama at Birmingham.

“Future development will focus on biomarker-selected patients, specifically patients with Wnt activating mutations, high tumoral DKK1, and PIK3CA activating mutations,” Dr. Arend said at the Society of Gynecologic Oncology’s Virtual Annual Meeting on Women’s Cancer (Abstract 10717).

She explained that DKK1 has been shown to modulate signaling in the Wnt/beta-catenin pathway, a key regulator of cellular functions in humans and animals that has been highly conserved throughout evolution.

“DKK1 activates PI3 kinase/AKT signaling by binding to the CKAP4 receptor to promote tumor growth,” Dr. Arend explained.
 

Focus on monotherapy

Dr. Arend and colleagues conducted a phase 2 basket trial of DKN-01 either as monotherapy or in combination with paclitaxel in patients with EEC, epithelial ovarian cancer, and carcinosarcoma (malignant mixed Mullerian tumor). The trial design required at least 50% of patients to have Wnt signaling alterations.

Dr. Arend reported results for 29 patients with EEC who received DKN-01 monotherapy.

There were nine patients with Wnt activating mutations. None of them achieved a complete response (CR) or partial response (PR), but six had stable disease (SD), for a disease control rate of 67%. Of the 20 patients without Wnt activating mutations, 1 achieved a CR, 1 achieved a PR, and 3 had SD, for a disease control rate of 25%.
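Disease control rate in this report is simply the proportion of evaluable patients whose best response was a complete response, partial response, or stable disease. A minimal sketch, using only the counts given in the paragraph above (function and variable names are my own):

```python
# Disease control rate (DCR) = (CR + PR + SD) / evaluable patients.
# Counts are taken from the paragraph above; names are illustrative.

def disease_control_rate(cr: int, pr: int, sd: int, n: int) -> float:
    """Fraction of patients with CR, PR, or SD as best response."""
    return (cr + pr + sd) / n

# Wnt activating mutations: 0 CR, 0 PR, 6 SD among 9 patients
dcr_wnt_mutant = disease_control_rate(cr=0, pr=0, sd=6, n=9)

# No Wnt activating mutation: 1 CR, 1 PR, 3 SD among 20 patients
dcr_wnt_wildtype = disease_control_rate(cr=1, pr=1, sd=3, n=20)

print(f"DCR, Wnt activating mutation: {dcr_wnt_mutant:.0%}")       # ~67%
print(f"DCR, no Wnt activating mutation: {dcr_wnt_wildtype:.0%}")  # 25%
```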

The median PFS was 5.5 months in patients with Wnt activating mutations and 1.8 months in patients without the mutations.

“Importantly, patients whose tumors have a Wnt activating mutation have a correlation with increased tumoral expression of DKK1 by 14.4-fold higher,” Dr. Arend noted.

When she and her colleagues analyzed patients by DKK1 expression, the team found that high levels of DKK1 correlated with better clinical outcomes. The disease control rate was 57% for patients in the highest third of DKK1 expression (1 PR, 3 SD) vs. 7% (1 SD) for those in the lowest two-thirds. The median PFS was 3 months and 1.8 months, respectively.

Of the seven patients whose tumors could not be evaluated for DKK1 expression, one patient had a CR and 5 had SD, for a disease control rate of 86%. The median PFS in this group was 8.0 months. Three of these patients had known Wnt activating mutations.

“Given this correlation [between] higher DKK1 expression [and] Wnt activating mutations, one could consider that, at a minimum, these patients would have had a higher DKK1 expression as well,” Dr. Arend said.

She and her colleagues also found that patients with PIK3CA activating mutations and two or fewer prior lines of therapy had a 33% overall response rate (1 CR, 1 PR), compared with 0% for patients without these mutations who had two or fewer prior therapies. Patients with PIK3CA activating mutations also had a better disease control rate (67% vs. 40%) and median PFS (5.6 months vs. 1.8 months).

Although Dr. Arend did not present safety data from the study at SGO 2021, she reported some data in a video investor call for Leap Therapeutics, which is developing DKN-01. She said the most common treatment-emergent adverse events with DKN-01 were nausea in 28.8% of patients, fatigue in 26.7%, and constipation in 11.5%. Serious events included acute kidney injury, dyspnea, nausea, and peripheral edema (occurring in 1.9% of patients each).

Monotherapy or combination?

In the question-and-answer session following Dr. Arend’s presentation, comoderator Joyce Liu, MD, of the Dana-Farber Cancer Institute in Boston, said that “even in the DKK1-high tumors, the activity of DKN-01 as a monotherapy is a little bit limited.”

She asked whether the future of inhibitors targeting the Wnt/beta-catenin pathway will be limited to biomarker-selected populations or whether agents such as DKN-01 should be used in combinations.

“I do think that we need a lot more data to determine,” Dr. Arend replied. “I think that there may be a subset of patients, especially those that don’t tolerate the [lenvatinib/pembrolizumab] combo who may have an upregulation of beta-catenin or a Wnt mutation who could benefit from monotherapy.”

Dr. Arend added that data from her lab and others suggest that DKN-01 in combination with other agents holds promise for improving outcomes in biomarker-selected populations.

The current study is funded by Leap Therapeutics. Dr. Arend disclosed advisory board activity for the company and others. Dr. Liu reported personal fees from several companies, not including Leap Therapeutics.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 


FROM SGO 2021


Will psoriasis patients embrace proactive topical therapy?

Article Type
Changed
Fri, 03/26/2021 - 12:30

Long-term proactive topical management of plaque psoriasis with twice-weekly calcipotriene/betamethasone dipropionate foam has been shown in a high-quality randomized trial to be more effective than conventional reactive management – but will patients go for it?

Dr. Bruce E. Strober

Bruce E. Strober, MD, PhD, has his doubts, and he shared them with Linda Stein Gold, MD, after she presented updated results from the 52-week PSO-LONG trial at Innovations in Dermatology: Virtual Spring Conference 2021.

In order for the proactive management approach tested in this study to be successful, patients must apply the topical agent as maintenance therapy to cleared areas where they previously had psoriasis. And while they did so in this study with an assist in the form of monthly office visits and nudging from investigators, in real-world clinical practice that’s unlikely to happen, according to Dr. Strober, of Yale University, New Haven, Conn.

“It makes sense to do what’s being done in this study, there’s no doubt, but I’m concerned about adherence and whether patients are really going to do it,” he said.

“Adherence is going to be everything here, and you know patients don’t like to apply topicals to their body. Once they’re clear they’re just going to walk away from the topical,” Dr. Strober predicted.

Dr. Linda F. Stein Gold

Dr. Stein Gold countered: “When a study goes on for a full year, it starts to reflect real life.”

Moreover, the PSO-LONG trial provides the first high-quality evidence physicians can share with patients demonstrating that proactive management pays off in terms of fewer relapses and more time in remission over the long haul, added Dr. Stein Gold, director of dermatology clinical research at the Henry Ford Health System in Detroit.

PSO-LONG was a double-blind, international, phase 3 study including 545 adults with plaque psoriasis who had clear or almost-clear skin after 4 weeks of once-daily calcipotriene 0.005%/betamethasone dipropionate 0.064% (Cal/BD) foam (Enstilar), and were then randomized to twice-weekly proactive management or to a reactive approach involving application of vehicle on the same twice-weekly schedule. Relapses resulted in rescue therapy with 4 weeks of once-daily Cal/BD foam.

The primary endpoint was the median time to first relapse: 56 days with the proactive approach, a significant improvement over the 30 days with the reactive approach. Over the course of 52 weeks, the proactive group spent an additional 41 days in remission, compared with the reactive group. Patients randomized to twice-weekly Cal/BD foam averaged 3.1 relapses per year, compared with 4.8 with reactive management. The side-effect profiles in the two study arms were similar.
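
For a rough sense of scale, the reported figures can be combined as follows; this is simple arithmetic on the summary numbers above, not an analysis of patient-level data.

# Back-of-the-envelope arithmetic on the reported PSO-LONG summary figures.
relapses_proactive = 3.1    # mean relapses per year, proactive arm
relapses_reactive = 4.8     # mean relapses per year, reactive arm
extra_remission_days = 41   # additional days in remission over 52 weeks

relative_reduction = 1 - relapses_proactive / relapses_reactive
print(f"~{relative_reduction:.0%} fewer relapses per year")               # ~35%
print(f"~{extra_remission_days / 365:.0%} more of the year in remission")  # ~11%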

Mean Physician Global Assessment scores and Psoriasis Area and Severity Index scores in the proactive group clearly separated from those in the reactive group by week 4, and the differences were maintained throughout the year. The area under the curve for the Physician Global Assessment score was 15% lower in the proactive group, and the area under the curve for the modified PASI score was 20% lower.

“These results suggest that proactive management – a concept that’s been used for atopic dermatitis – could be applied to patients with psoriasis to prolong remission,” Dr. Stein Gold concluded at the conference, sponsored by MedscapeLIVE! and the producers of the Hawaii Dermatology Seminar and Caribbean Dermatology Symposium.

Asked how confident she is that patients in the real world truly will do this, Dr. Stein Gold replied: “You know, I don’t know. We hope so. Now we can tell them we actually have some data that supports treating the cleared areas. And it’s only twice a week, separated on Mondays and Thursdays.”

“I take a much more reactive approach,” Dr. Strober said. “I advise patients to get back in there with their topical steroid as soon as they see any signs of recurrence.”

He added that he’s eager to see if a proactive management approach such as the one that was successful in PSO-LONG is also beneficial using some of the promising topical agents with nonsteroidal mechanisms of action, which are advancing through the developmental pipeline.

Late in 2020, the Food and Drug Administration approved an expanded indication for Cal/BD foam that adds the PSO-LONG data on the efficacy and safety of long-term, twice-weekly therapy in adults to the product labeling. The combination spray/foam was previously approved by the FDA as once-daily therapy for psoriasis patients aged 12 years and older, but only for up to 4 weeks, because of safety concerns regarding longer use of the potent topical steroid as daily therapy.

The PSO-LONG trial was funded by LEO Pharma. Dr. Stein Gold reported serving as a paid investigator and/or consultant to LEO and numerous other pharmaceutical companies. Dr. Strober reported serving as a consultant to more than two dozen pharmaceutical companies. MedscapeLIVE! and this news organization are owned by the same parent company.


FROM INNOVATIONS IN DERMATOLOGY


The significance of mismatch repair deficiency in endometrial cancer

Article Type
Changed
Fri, 03/26/2021 - 11:32

Women with Lynch syndrome are known to carry an approximately 60% lifetime risk of endometrial cancer. These cancers result from inherited deleterious mutations in genes that code for mismatch repair proteins. However, mismatch repair deficiency (MMR-d) is not exclusively found in the tumors of patients with Lynch syndrome, and much is being learned about this group of endometrial cancers, their behavior, and their vulnerability to targeted therapies.

Dr. Emma C. Rossi

During DNA replication and recombination, and after chemical or physical damage, base-pair mismatches frequently occur. Mismatch repair proteins identify and repair such errors, and loss of their function leads to the accumulation of insertions or deletions within short, repetitive sequences of DNA. This phenomenon can be measured with polymerase chain reaction (PCR) screening of known microsatellites to look for the accumulation of errors, a phenotype called microsatellite instability (MSI). The accumulation of errors in DNA sequences is thought to lead to mutations in cancer-related genes.

The four predominant mismatch repair genes are MLH1, MSH2, MSH6, and PMS2. Loss of function in these genes may be inherited through a germline mechanism, as in Lynch syndrome, or acquired sporadically. Approximately 20%-30% of endometrial cancers exhibit MMR-d; acquired, sporadic losses of function account for the majority of cases, and only about 10% result from Lynch syndrome. Mutations in PMS2 are the dominant genotype of Lynch syndrome, whereas loss of function in MLH1 is the most frequent aberration in sporadic cases of MMR-d endometrial cancer.1

Endometrial cancers can be tested for MMR-d by performing immunohistochemistry to look for loss of expression of the proteins encoded by the four most common MMR genes. If there is loss of MLH1 expression, additional triage testing can determine whether the loss is caused by the epigenetic phenomenon of promoter hypermethylation; when hypermethylation is present, this excludes Lynch syndrome and suggests a sporadic origin of the disease. If there is loss of expression of the MMR proteins (including loss of MLH1 with subsequent negative testing for promoter methylation), the patient should receive genetic testing for a germline mutation indicating Lynch syndrome. As an adjunct or alternative to immunohistochemistry, PCR studies or next-generation sequencing can be used to detect microsatellite instability by identifying the expansion or contraction of repetitive DNA sequences in the tumor, compared with matched normal tissue.2
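
To make that testing sequence concrete, here is a minimal sketch of the triage logic in Python. It is purely illustrative of the algorithm described above, not a clinical decision tool, and every function and variable name is hypothetical.

# Illustrative sketch of the MMR-d triage algorithm described above;
# names are hypothetical and this is not a clinical decision tool.
def triage_mmr(ihc_loss, mlh1_promoter_hypermethylated=None):
    """ihc_loss: set of MMR proteins with lost expression on immunohistochemistry,
    e.g. {"MLH1", "PMS2"}; an empty set means intact expression of all four."""
    if not ihc_loss:
        return "MMR-proficient: no Lynch workup triggered by IHC"
    if "MLH1" in ihc_loss:
        if mlh1_promoter_hypermethylated:
            return "Likely sporadic MMR-d (epigenetic MLH1 silencing)"
        return "Refer for germline testing (possible Lynch syndrome)"
    # Loss of MSH2, MSH6, or PMS2 without MLH1 involvement
    return "Refer for germline testing (possible Lynch syndrome)"

print(triage_mmr({"MLH1", "PMS2"}, mlh1_promoter_hypermethylated=True))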

It is critically important to identify endometrial cancers caused by Lynch syndrome because doing so enables providers to offer cascade testing to relatives and to intensify screening or preventive measures for the many other cancers (such as colon, upper gastrointestinal, breast, and urothelial) for which these patients are at risk. Therefore, routine screening for MMR-d tumors is recommended in all cases of endometrial cancer, not simply in patients who are young at diagnosis or who have a strong family history.3 Using family history, primary tumor site, and age to trigger screening for Lynch syndrome, as in the Bethesda Guidelines, is associated with an 82% sensitivity for identifying Lynch syndrome. In a meta-analysis that included testing results from 1,159 women with endometrial cancer, 43% of patients who were diagnosed with Lynch syndrome by molecular analysis would have been missed by clinical screening using the Bethesda Guidelines.2

Discovering cases of Lynch syndrome is not the only benefit of routine testing for MMR-d in endometrial cancers. There is also significant value in characterizing sporadic mismatch repair–deficient tumors because this finding carries prognostic significance and guides therapy. Tumors with a microsatellite instability–high/MMR-d phenotype were identified as one of the four distinct molecular subgroups of endometrial cancer by the Cancer Genome Atlas.4 Patients with this molecular profile exhibited “intermediate” prognostic outcomes, faring better than patients with “serous-like” cancers harboring p53 mutations, yet worse than patients in the POLE-ultramutated group, who rarely experience recurrence or death, even in the setting of unfavorable histology.

Beyond prognostication, the molecular profile of endometrial cancers also influences their responsiveness to therapy, highlighting the importance of splitting, not lumping, endometrial cancers into relevant molecular subgroups when designing research and practicing clinical medicine. The PORTEC-3 trial studied 410 women with high-risk endometrial cancer, randomizing participants to receive either adjuvant radiation alone or radiation with chemotherapy.5 There were no differences in progression-free survival between the two therapeutic strategies when analyzed in aggregate. However, when the results were analyzed by Cancer Genome Atlas molecular subgroup, there was a clear benefit from chemotherapy for patients with p53 mutations. For patients with MMR-d tumors, no such benefit was observed: they did no better with the addition of platinum and taxane chemotherapy than with radiation alone. Unfortunately, recurrence rates in patients with MMR-d tumors remained high, suggesting that we need therapies for these tumors that are more effective than conventional radiation or platinum and taxane chemotherapy.

Targeted therapy may be the solution to this problem. Through microsatellite instability, MMR-d tumors accumulate somatic mutations that generate neoantigens, creating an immunogenic environment. This state up-regulates immune checkpoint proteins, which serve as an actionable target for anti–PD-1 antibodies such as pembrolizumab, which has been shown to be highly active against MMR-d endometrial cancer. In the landmark KEYNOTE-158 trial, patients with advanced, recurrent solid tumors exhibiting MMR-d were treated with pembrolizumab.6 The trial included 49 patients with endometrial cancer, among whom there was a 79% response rate. Subsequently, pembrolizumab was granted Food and Drug Administration approval for use in advanced, recurrent MMR-d/MSI-high endometrial cancer. Trials are currently enrolling patients to explore the utility of this drug in the up-front setting in both early- and late-stage disease, with the hope that this targeted therapy can do what conventional cytotoxic chemotherapy has failed to do.

Therefore, given the clinical significance of mismatch repair deficiency, all patients with endometrial cancer should have their tumors investigated for loss of expression of these proteins and, when loss is present, be evaluated for the possibility of Lynch syndrome. While most will not have an inherited cause, this information about tumor biology remains critically important for prognostication and for decision-making about other therapies and eligibility for promising clinical trials.

Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no conflicts of interest to declare. Email her at [email protected].

References

1. Simpkins SB et al. Hum Mol Genet. 1999;8:661-6.

2. Kahn R et al. Cancer. 2019 Sep 15;125(18):2172-3183.

3. SGO Clinical Practice Statement: Screening for Lynch Syndrome in Endometrial Cancer. https://www.sgo.org/clinical-practice/guidelines/screening-for-lynch-syndrome-in-endometrial-cancer/

4. Kandoth C et al. Nature. 2013;497(7447):67-73.

5. Leon-Castillo A et al. J Clin Oncol. 2020 Oct 10;38(29):3388-97.

6. Marabelle A et al. J Clin Oncol. 2020 Jan 1;38(1):1-10.


In U.S., lockdowns added 2 pounds per month

Article Type
Changed
Thu, 08/11/2022 - 10:03

Americans gained nearly 2 pounds per month under COVID-19 shelter-in-place orders in 2020, according to a new study published March 22, 2021, in JAMA Network Open.

Those who kept the same lockdown habits could have gained 20 pounds during the past year, the study authors said.

“We know that weight gain is a public health problem in the U.S. already, so anything making it worse is definitely concerning, and shelter-in-place orders are so ubiquitous that the sheer number of people affected by this makes it extremely relevant,” Gregory Marcus, MD, the senior author and a cardiologist at the University of California, San Francisco, told the New York Times.

Dr. Marcus and colleagues analyzed more than 7,000 weight measurements from 269 people in 37 states who used Bluetooth-connected scales from Feb. 1 to June 1, 2020. Among the participants, about 52% were women, 77% were White, and they had an average age of 52 years.

The research team found that participants had a steady weight gain of more than half a pound every 10 days. That equals about 1.5-2 pounds per month.
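
The conversion behind those figures is simple; a quick sketch follows, with 0.6 lb per 10 days standing in for the study’s “more than half a pound every 10 days.”

# Unit conversion of the reported weight-gain rate; 0.6 lb per 10 days is an
# assumption standing in for "more than half a pound every 10 days."
lb_per_10_days = 0.6
per_month = lb_per_10_days * 30 / 10    # ~1.8 lb per month
per_year = lb_per_10_days * 365 / 10    # ~22 lb per year
print(f"~{per_month:.1f} lb/month, ~{per_year:.0f} lb/year")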

Many of the participants were losing weight before the shelter-in-place orders went into effect, Dr. Marcus said. The lockdown effects could be even greater for those who weren’t losing weight before.

“It’s reasonable to assume these individuals are more engaged with their health in general, and more disciplined and on top of things,” he said. “That suggests we could be underestimating – that this is the tip of the iceberg.”

The small study doesn’t represent all of the nation and can’t be generalized to the U.S. population, the study authors noted, but it’s an indicator of what happened during the pandemic. The participants’ weight increased regardless of their location and chronic medical conditions.

Overall, people don’t move around as much during lockdowns, the UCSF researchers reported in another study published in Annals of Internal Medicine in November 2020. According to smartphone data, daily step counts decreased by 27% in March 2020. The step counts increased again throughout the summer but still remained lower than before the COVID-19 pandemic.

“The detrimental health outcomes suggested by these data demonstrate a need to identify concurrent strategies to mitigate weight gain,” the authors wrote in the JAMA Network Open study, “such as encouraging healthy diets and exploring ways to enhance physical activity, as local governments consider new constraints in response to SARS-CoV-2 and potential future pandemics.”

A version of this article first appeared on WebMD.com.
