Where Is the ‘Microbiome Revolution’ Headed Next?
Human microbiome research has progressed in leaps and bounds over the past decades, from pivotal studies begun in the 1970s to the launch of the Human Microbiome Project in 2007. Breakthroughs have laid the groundwork for more recent clinical applications, such as fecal microbiota transplantation (FMT), and advanced techniques to explore new therapeutic pathways. Yet the “microbiome revolution” is just getting started, according to professor Martin J. Blaser, MD, one of the field’s pioneers.
Dr. Blaser holds the Henry Rutgers Chair of the Human Microbiome and is director of the Center for Advanced Biotechnology and Medicine at Rutgers University in New Brunswick, New Jersey.
Dr. Blaser is the author of Missing Microbes: How the Overuse of Antibiotics Is Fueling Our Modern Plagues, chair of the Presidential Advisory Council on Combating Antibiotic-Resistant Bacteria, and a member of the scientific advisory board of the biotech startup Micronoma.
In this interview, which has been condensed and edited for clarity, Dr. Blaser discusses where we are now and where he sees the microbiome field evolving in the coming years.
Highlighting the Most Promising Applications
Which recent studies on the link between the human microbiome and disease have you found particularly promising?
There have been a number of studies, including our own, focusing on the gut-kidney axis. The gut microbiome produces, or detoxifies, metabolites that are toxic to the kidney: for example, those involved in the formation of kidney stones and in the worsening of uremia.
Altering the microbiome to reduce the uremic toxins and the nidus for stone formation is a very promising field of research.
What other disease states may be amenable to microbiome-based interventions?
There are diseases that are caused by known genetic mutations. Yet, for nearly all of them, there is great variation in clinical outcomes, which might be classed as gene-by-environment interactions.
It seems likely to me that microbiome variation could account for some proportion of those differences for some genetic diseases.
It’s now well established that altering the microbiome with FMT is a successful intervention for recurrent Clostridioides difficile infections. What do you see as the next disease states where FMT could prove successful?
If you go to ClinicalTrials.gov, you will find that there are 471 trials registered using FMT. This is across a broad range of illnesses, including metabolic, immunological, autoimmune, inflammatory, degenerative, and neoplastic diseases.
Which will be the next condition showing marked efficacy is anyone’s guess. That is why we must do clinical trials to assess what works and what does not, regardless of specific illness.
The donor’s microbiome appears to be vital to engraftment success, with “superdonors” even being identified. What factors do you think primarily influence microbiome engraftment?
There is an emerging science about this question, driven in part by classical ecological theory.
Right now, we are using FMT as if one size fits all. But this probably would not provide optimal treatment for all. Just as we type blood donors and recipients before a blood transfusion, one could easily imagine a parallel kind of procedure.
Are there any diseases where it’s just too far-fetched to think altering the microbiome could make a difference?
The link between the microbiome and human health is so pervasive that there are few conditions that are out of the realm of possibility. It really is a frontier.
Not that the microbiome causes everything, but by understanding and manipulating the microbiome, we could at least palliate, or slow down, particular pathologic processes.
For all the major causes of death in the United States — cardiovascular disease, cancer, dementia and neurodegenerative diseases, diabetes, and lung, liver, and kidney diseases — there is ongoing investigation of the microbiome. A greater promise would be to prevent or cure these illnesses.
Predicting the Next Stages of the ‘Microbiome Revolution’
Do you believe we are at a turning point with the microbiome in terms of being able to manipulate or engineer it?
The microbiome is a scientific frontier that has an impact across the biosphere. It is a broad frontier involving human and veterinary medicine, agriculture, and the environment. Knowledge is increasing incrementally, as expected.
Are we at the point yet where doctors should be incorporating microbiome-related lifestyle changes for people with or at risk for cancer, heart disease, Alzheimer’s disease, or other chronic conditions?
Although we are still in the early stages of the “microbiome revolution,” which I first wrote about in EMBO Reports in 2006 and then again in the Journal of Clinical Investigation in 2014, I think important advances for all of these conditions are coming our way in the next 5-10 years.
How are prebiotics, probiotics, and postbiotics being used to shape the microbiome?
This is a very important and active area in clinical investigation, which needs to be ramped up.
Tens of millions of people are using probiotics and prebiotics every day for vague indications, yet these products have only infrequently been tested in robust clinical trials. So there is a disconnect between what is claimed for the bulk of probiotics at present and what we will actually know in the future.
How do you think the microbiome will stack up to other factors influencing health, such as genetics, exercise, and nutrition?
All are important, but unlike genetics, the microbiome is tractable, like diet and exercise.
It is essentially impossible to change one's genome, although that might become more feasible before too long. However, we can easily change someone's microbiome, through dietary means, for example. Once we know the ground rules, there will be many options. Right now, it is mostly one-offs, but as the scientific basis broadens, much more will be possible.
In the future, do you think we’ll be able to look at a person’s microbiome and tell what his or her risk of developing disease is, similar to the way we use gene panels now?
Yes, but we will need scientific advances to teach us which biomarkers are important, both in general and in particular people. This will be one area of precision medicine.
Lessons From Decades at the Forefront
You’ve been involved in this research for over 30 years, and the majority has focused on the human microbiome and its role in disease. When did it become apparent to you that this research had unique therapeutic promise?
From the very start, there was always the potential to harness the microbiome to improve human health. In fact, I wrote a perspective in PNAS on that theme in 2010.
The key is to understand the biology of the microbiome, and from the scientific study comes new preventives and new treatments. Right now, there are many “probiotic” products on the market. Probiotics have a great future, but most of what is out there has not been rigorously tested for effectiveness.
Was there a particular series of studies that occurred before the launch of the Human Microbiome Project and brought us to the current era?
The studies in the 1970s-1980s by Carl Woese using 16S rRNA genes to understand phylogeny and evolution opened up the field of DNA sequencing to consider bacterial evolution and issues of ancestry.
A key subject of your research and the focus of your book is antibiotic-resistant bacteria. What did this work teach you about describing the science of antibiotic resistance to the general public?
People don’t care very much about antibiotic resistance. They think that affects other people, mostly. In contrast, they care about their own health and their children’s health.
The more that the data show that using antibiotics can be harmful to health in some circumstances, the more that use will diminish. We need more transparency about benefits and costs.
Are there any common misconceptions about the microbiome that you hear from the general public, or even clinicians, that you would like to see greater efforts to dispel?
The public and the medical profession are in love with probiotics, buying them by the tens of millions. But as stated before, they are very diverse and mostly untested for efficacy.
The next step is to test specific formulations to see which ones work, and for whom, and which ones don’t. That would be a big advance.
A version of this article appeared on Medscape.com.
Roflumilast foam gets nod as new option for seborrheic dermatitis
The Food and Drug Administration has approved roflumilast topical foam 0.3% for the treatment of seborrheic dermatitis, the manufacturer announced in a press release.
The 0.3% foam, marketed as Zoryve and applied once daily, is indicated for patients aged 9 years and older with seborrheic dermatitis and can be used anywhere on the body, including hair-bearing areas, with no limit on duration of use, according to the company, Arcutis. A 0.3% cream formulation of roflumilast was previously approved by the FDA for the topical treatment of plaque psoriasis in patients aged 6 years and older.
Approval was based on data from the phase 3 STRATUM trial and an accompanying phase 2 study known as Trial 203. These studies included a total of 683 adults and youth aged 9 years and older with seborrheic dermatitis. Participants were randomized to roflumilast or a placebo.
At 8 weeks, 79.5% of patients on roflumilast met the primary efficacy endpoint of Investigator Global Assessment (IGA) scores of 0 or 1 (clear or almost clear) compared with 58.0% of patients on placebo (P < .001); the results were similar in the phase 2 Trial 203 (73.1% vs. 40.8%, respectively; P < .001). Overall, more than 50% of the patients on roflumilast achieved a clear score.
Patients in the roflumilast group also showed significant improvement in all secondary endpoints, including itching, scaling, and erythema, according to the company.
In the STRATUM study, 62.8% of roflumilast-treated patients and 40.6% of placebo patients achieved a 4-point or greater reduction in itch based on the Worst Itch Numerical Rating Score (P = .0001), and 28% of roflumilast-treated patients reported significant itch improvement within the first 48 hours of use, compared with 13% of placebo patients (P = .0024).
Over a treatment period of up to 1 year, no treatment-related severe adverse events were reported in the phase 2 and 3 studies. The incidence of treatment-emergent adverse events was similar between the treatment and placebo groups, and the most common adverse events (occurring in 1% or more of patients) across both studies were nasopharyngitis (1.5%), nausea (1.3%), and headache (1.1%).
Roflumilast foam is scheduled to be available by the end of January 2024, according to the company. The product is for topical use only and is contraindicated in individuals with severe liver impairment.
What causes obesity? More science points to the brain
For much of his life, 32-year-old Michael Smith had a war going on in his head.
After a big meal, he knew he should be full. But an inexplicable hunger would drive him to pick up the fork again.
Cravings for fried chicken or gummy bears overwhelmed him, fueling late-night DoorDash orders that — despite their bounty of fat and sugar — never satisfied him.
He recalls waking up on the couch, half-eaten takeout in his lap, feeling sluggish and out of control.
“It was like I was food drunk,” recalls Smith, who lives in Boston. “I had a moment when I looked at myself in the mirror. I was around 380 pounds, and I said, ‘OK, something has got to give.’”
Smith is among the 42% of U.S. adults living with obesity, a misunderstood and stubbornly hard-to-manage condition that doctors have only recently begun to call a disease. Its root causes have been debated for decades, with studies suggesting everything from genes to lifestyle to a shifting food supply loaded with carbohydrates and ultra-processed foods. Solutions have long targeted self-discipline and a simple “eat less, move more” strategy with remarkably grim results.
Those who successfully slim down tend to gain back 50% of that weight within 2 years, and 80% within 5 years. Meanwhile, the obesity epidemic marches on.
But a new frontier of brain-based therapies — from GLP-1 agonist drugs thought to act on reward and appetite centers to deep brain stimulation aimed at resetting neural circuits — has kindled hope among patients like Smith and the doctors who treat them. The treatments, and theories behind them, are not without controversy. They’re expensive, have side effects, and, critics contend, pull focus from diet and exercise.
But most agree that in the battle against obesity, one crucial organ has been overlooked.
“Obesity, in almost all circumstances, is most likely a disorder of the brain,” said Casey Halpern, MD, associate professor of neurosurgery at the University of Pennsylvania. “What these individuals need is not simply more willpower, but the therapeutic equivalent of an electrician that can make right these connections inside their brain.”
A Break in the Machine
Throughout the day, the machine that is our brain is constantly humming in the background, taking in subtle signals from our gut, hormones, and environment to determine when we’re hungry, how food makes us feel, and whether we are taking in enough energy, or expending too much, to survive.
“I liken it to holding your breath. I can do that for a period of time, and I have some conscious control. But eventually, physiology wins out,” said Kevin Hall, PhD, an obesity researcher with the National Institute of Diabetes and Digestive and Kidney Diseases.
Mounting evidence suggests that in people with obesity, something in the machine is broken.
One seminal 2001 study in The Lancet suggested that, like people addicted to cocaine or alcohol, people with obesity lack receptors for the feel-good brain chemical dopamine and overeat in pursuit of the pleasure they lack.
A recent study, not yet published, from Dr. Hall’s lab drew a slightly different conclusion, suggesting that people with obesity actually have too much dopamine, filling up those receptors so the pleasure spike from eating doesn’t feel like much.
“It’s kind of like trying to shout in a noisy room. You’re going to have to shout louder to have the same effect,” said Dr. Hall.
Gut-brain pathways that tell us we’re full may also be impaired.
In another study, Yale researchers tube-fed 500 calories of sugar or fat directly into the stomachs of 28 lean people and 30 people with obesity. Then they observed brain activity using functional magnetic resonance imaging (fMRI).
In lean people, about 30 regions of the brain quieted after the meal, including parts of the striatum (associated with cravings).
In those with obesity, the brain barely responded at all.
“In my clinic, patients will often say ‘I just finished my dinner, but it doesn’t feel like it,’” said senior author Mireille Serlie, MD, PhD, an obesity researcher at the Yale School of Medicine. “It may be that this nutrient-sensing interaction between the gut and the brain is less pronounced or comes too late for them after the meal.”
Dr. Halpern recently identified a brain circuit linking a memory center (hippocampus) to an appetite control region (hypothalamus). In people with obesity and binge eating disorder, the circuit appears jammed. This may cause them to, in a sense, forget they just ate.
“Some of their eating episodes are almost dissociative — they’re not realizing how much they are eating and can’t keep track of it,” he said.
Another brain system works to maintain longer-term homeostasis — or weight stability. Like a set thermostat, it kicks on to trigger hunger and fatigue when it senses we’re low on fat.
The hormone leptin, found in fat cells, sends signals to the hypothalamus to let it know how much energy we have on board.
“If leptin levels go up, it signals the brain that you have too much fat and you should eat less to return to the starting point,” said Rockefeller University geneticist Jeffrey Friedman, MD, PhD, who discovered the hormone in 1994. “If you have too little fat and leptin is low, that will stimulate appetite to return you to the starting point.”
In people with obesity, he said, the thermostat — or set point the body seeks to maintain — is too high.
All this raises a crucial question: How do these circuits and pathways malfunction in the first place?
What Breaks the Brain?
Genes, scientists agree, play a role.
Studies show that genetics underlie as much as 75% of people’s differences in body mass index (BMI), with certain gene combinations raising obesity risk in particular environments.
While hundreds of genes are believed to have a small effect, about a dozen single genes are thought to have a large effect. (Notably, most influence brain function.) For instance, about 6% of people with severe obesity since childhood have mutations in a gene called MC4R (melanocortin 4 receptor), which influences leptin signaling.
Still, genetics alone cannot account for the explosion in obesity in the U.S. over the last 50 years, says epidemiologist Deirdre Tobias, ScD, assistant professor of medicine at Harvard Medical School.
At the population level, “our genes don’t change that much in less than a generation,” she said.
But our food supply has.
Ultra-processed foods — those containing hydrogenated oils, high-fructose corn syrup, flavoring agents, emulsifiers, and other manufactured ingredients — now make up about 60% of the food supply.
“The evidence is fairly consistent indicating that there’s something about these foods that is possibly causing obesity,” said Tobias.
In one telling 2019 study, Dr. Hall and his colleagues brought 20 men and women into a study center to live for a month and tightly controlled their food intake and activity. One group was provided with meals with 80% of calories from ultra-processed food. The other was given meals with no processed food.
The three daily meals provided had the same calories, sugars, fats, fiber, and carbohydrates, and people were told to eat as much as they wanted.
Those on the ultra-processed diet ate about 500 calories more per day, ate faster, and gained weight. Those on the unprocessed diet lost weight.
“This is a stark example of how, when you can change the food environment, you cause really remarkable changes in food intake without people even being aware that they are overeating,” said Dr. Hall.
Just what it is about these relatively novel foods that may trigger overeating is unclear. It could be the crunch, the lack of water content, the engineered balance of sugar/salt/fat, their easy-to-devour texture, or something else.
Some research suggests that the foods may interfere with gut-brain signaling that tells the brain you’re full.
“Evidence is amassing that the nutritional content of processed foods is not accurately conveyed to the brain,” Dana M. Small, PhD, a neuroscientist at Yale, wrote in a recent perspective paper in Science.
Even more concerning: Some animal studies suggest processed foods reprogram the brain to dislike healthy foods.
And once these brain changes are made, they are hard to reverse.
“The problem is, our brain is not wired for this,” said Dr. Halpern. “We are not evolved to eat the food we are eating, so our brain adapts, but it adapts in a negative way that puts us at risk.”
That’s why changing the food environment via public policy must be part of the solution in combating obesity, Dr. Tobias said.
A New Era of Brain-Based Solutions
In the spring of 2021, after years of trying and failing to lose weight via the “move more, eat less” model, Michael Smith began to take a medication called Vyvanse. The drug was approved in 2008 for attention deficit hyperactivity disorder, but since it also influences levels of the hormones dopamine and norepinephrine to reduce cravings, it is now frequently prescribed for binge eating disorder.
“That was pretty much how I got rid of my first 60 to 70 pounds,” Smith said.
A few months later, after he hit a plateau, he had surgery to shrink the size of his stomach — a decision he now second-guesses.
While it kept him from overeating for a time, the fried chicken and gummy bear cravings returned a few months later.
His doctor, Fatima Cody Stanford, MD, put him on a second medication: semaglutide, or Wegovy, the weekly shot approved for weight loss in 2021. It works, in part, by mimicking glucagon-like peptide-1 (GLP-1), a key gut hormone that lets your brain know you are full.
The weight began to fall off again.
Smith’s success story is just one of many that Dr. Stanford, an obesity medicine doctor-scientist at Harvard, has heard in her office in recent years.
“I do not believe these drugs are a panacea,” she said. “There are nonresponders, and those are the patients I take off the medication. But for the high-responders, and there are many of them, they are telling me, ‘Oh my gosh. For the first time in my life, I am not constantly thinking about eating. My life has changed.’”
A Multi-Pronged Approach
Dr. Halpern, at Penn, has also been hearing success stories.
In recent years, he has placed permanent electrodes in the brains of three people with grade III, or severe, obesity and binge eating disorder.
All had tried exercise, dieting, support groups, medication, and weight loss surgery to no avail.
The electrodes modulate an area in the center of the brain called the nucleus accumbens; in mouse studies, stimulating this region has been shown to reduce cravings.
Thus far, all three are seeing promising results.
“It’s not like I don’t think about food at all,” one of them, Robyn Baldwin, told The New York Times. “But I’m no longer a craving person.”
Dr. Halpern is now extending the trial to more patients and hopes to ultimately include other areas of the brain, including those that involve memory.
He imagines a day when people with severe obesity, who have failed conventional treatments, can walk into a clinic and have their brain circuits assessed to see which ones may be misfiring.
Many might find relief with noninvasive brain stimulation, like transcranial magnetic stimulation (already in use for depression). Others might need a more extreme approach, like the deep brain stimulation, or DBS, therapy Dr. Halpern used.
“Obviously, DBS is hard to scale, so it would have to be reserved for the most severe patients,” he said.
Still, not everyone believes brain-based drugs and surgeries are the answer.
David Ludwig, MD, PhD, a professor of nutrition at the Harvard School of Public Health, played a key role in the discovery of GLP-1 and acknowledges that “of course” the brain influences body composition. But to him, explaining obesity as a disease of the brain oversimplifies it, discounting metabolic factors such as a tendency to store too much fat.
He noted that it’s hard to get drug companies, or any agencies, to fund large clinical trials on simple things like low-carbohydrate diets or exercise programs.
“We need all the tools we can get in the battle against the obesity epidemic, and new technologies are worth exploring,” he said. “However, the success of these drugs should not lead us to deprioritize diet and lifestyle interventions.”
Dr. Stanford, who has received consulting fees from the maker of Wegovy, believes the future of treatment lies in a multi-pronged approach, with surgery, medication, and lifestyle changes coalescing in a lasting, but fragile, remission.
“Unfortunately, there is no cure for obesity,” said Dr. Stanford, whose patients often have setbacks and must try new strategies. “There are treatments that work for a while, but they are constantly pushing up against this origin in the brain.”
Smith says understanding this has been a big part of his success.
He is now a leaner and healthier 5-foot-6 and 204 pounds. In addition to taking his medication, he walks to work, goes to the gym twice a week, limits his portions, and tries to reframe the way he thinks about food, viewing it as fuel rather than an indulgence.
Sometimes, when he looks in the mirror, he is reminded of his 380-pound self, and it scares him. He doesn’t want to go back there. He’s confident now that he won’t have to.
“There is this misconception out there that you just need to put the fork down, but I’m learning it’s more complicated than that,” he said. “I intend to treat this as the illness that it is and do what I need to combat it so I’m able to keep this new reality I have built for myself.”
A version of this article appeared on WebMD.com.
For much of his life, 32-year-old Michael Smith had a war going on in his head.
After a big meal, he knew he should be full. But an inexplicable hunger would drive him to pick up the fork again.
Cravings for fried chicken or gummy bears overwhelmed him, fueling late-night DoorDash orders that — despite their bounty of fat and sugar — never satisfied him.
He recalls waking up on the couch, half-eaten takeout in his lap, feeling sluggish and out of control.
“It was like I was food drunk,” recalls Smith, who lives in Boston. “I had a moment I looked at myself in the mirror. I was around 380 pounds, and I said, ‘OK, something has got to give.’ “
Smith is among the 42% of U.S. adults living with obesity, a misunderstood and stubbornly hard-to-manage condition that doctors have only recently begun to call a disease. Its root causes have been debated for decades, with studies suggesting everything from genes to lifestyle to a shifting food supply loaded with carbohydrates and ultra-processed foods. Solutions have long targeted self-discipline and a simple “eat less, move more” strategy with remarkably grim results.
Those who successfully slim down tend to gain back 50% of that weight within 2 years, and 80% within 5 years. Meanwhile, the obesity epidemic marches on.
But a new frontier of brain-based therapies — from GLP-1 agonist drugs thought to act on reward and appetite centers to deep brain stimulation aimed at resetting neural circuits — has kindled hope among patients like Smith and the doctors who treat them. The treatments, and theories behind them, are not without controversy. They’re expensive, have side effects, and, critics contend, pull focus from diet and exercise.
But most agree that in the battle against obesity, one crucial organ has been overlooked.
“Obesity, in almost all circumstances, is most likely a disorder of the brain,” said Casey Halpern, MD, associate professor of neurosurgery at the University of Pennsylvania. “What these individuals need is not simply more willpower, but the therapeutic equivalent of an electrician that can make right these connections inside their brain.”
A Break in the Machine
Throughout the day, the machine that is our brain is constantly humming in the background, taking in subtle signals from our gut, hormones, and environment to determine when we’re hungry, how food makes us feel, and whether we are taking in enough energy, or expending too much, to survive.
said Kevin Hall, PhD, an obesity researcher with the National Institute of Diabetes and Digestive and Kidney Diseases. “I liken it to holding your breath. I can do that for a period of time, and I have some conscious control. But eventually, physiology wins out.”
Mounting evidence suggests that in people with obesity, something in the machine is broken.
One seminal 2001 study in The Lancet suggested that, like people addicted to cocaine or alcohol, they lack receptors to the feel-good brain chemical dopamine and overeat in pursuit of the pleasure they lack.
A recent study, not yet published, from Dr. Hall’s lab drew a slightly different conclusion, suggesting that people with obesity actually have too much dopamine, filling up those receptors so the pleasure spike from eating doesn’t feel like much.
“It’s kind of like trying to shout in a noisy room. You’re going to have to shout louder to have the same effect,” said Dr. Hall.
Gut-brain pathways that tell us we’re full may also be impaired.
In another study, Yale researchers tube-fed 500 calories of sugar or fat directly into the stomachs of 28 lean people and 30 people with obesity. Then they observed brain activity using functional magnetic resonance imaging (fMRI).
In lean people, about 30 regions of the brain quieted after the meal, including parts of the striatum (associated with cravings).
In those with obesity, the brain barely responded at all.
“In my clinic, patients will often say ‘I just finished my dinner, but it doesn’t feel like it,’” said senior author Mireille Serlie, MD, PhD, an obesity researcher at the Yale School of Medicine. “It may be that this nutrient-sensing interaction between the gut and the brain is less pronounced or comes too late for them after the meal.”
Dr. Halpern recently identified a brain circuit linking a memory center (hippocampus) to an appetite control region (hypothalamus). In people with obesity and binge eating disorder, the circuit appears jammed. This may cause them to, in a sense, forget they just ate.
“Some of their eating episodes are almost dissociative — they’re not realizing how much they are eating and can’t keep track of it,” he said.
Another brain system works to maintain longer-term homeostasis — or weight stability. Like a set thermostat, it kicks on to trigger hunger and fatigue when it senses we’re low on fat.
The hormone leptin, found in fat cells, sends signals to the hypothalamus to let it know how much energy we have on board.
“If leptin levels go up, it signals the brain that you have too much fat and you should eat less to return to the starting point,” said Rockefeller University geneticist Jeffrey Friedman, MD, PhD, who discovered the hormone in 1994. “If you have too little fat and leptin is low, that will stimulate appetite to return you to the starting point.”
In people with obesity, he said, the thermostat — or set point the body seeks to maintain — is too high.
All this raises a crucial question: How do these circuits and pathways malfunction in the first place?
What Breaks the Brain?
Genes, scientists agree, play a role.
Studies show that genetics underlie as much as 75% of people’s differences in body mass index (BMI), with certain gene combinations raising obesity risk in particular environments.
While hundreds of genes are believed to have a small effect, about a dozen single genes are thought to have a large effect. (Notably, most influence brain function.) For instance, about 6% of people with severe obesity since childhood have mutations in a gene called MC4R (melanocortin 4 receptor), which influences leptin signaling.
Still, genetics alone cannot account for the explosion in obesity in the U.S. over the last 50 years, says epidemiologist Deirdre Tobias, ScD, assistant professor of medicine at Harvard Medical School.
At the population level, “our genes don’t change that much in less than a generation,” she said.
But our food supply has.
Ultra-processed foods — those containing hydrogenated oils, high-fructose corn syrup, flavoring agents, emulsifiers, and other manufactured ingredients — now make up about 60% of the food supply.
“The evidence is fairly consistent indicating that there’s something about these foods that is possibly causing obesity,” said Tobias.
In one telling 2019 study, Dr. Hall and his colleagues brought 20 men and women into a study center to live for a month and tightly controlled their food intake and activity. One group was provided with meals with 80% of calories from ultra-processed food. The other was given meals with no processed food.
The three daily meals provided had the same calories, sugars, fats, fiber, and carbohydrates, and people were told to eat as much as they wanted.
Those on the ultra-processed diet ate about 500 calories more per day, ate faster, and gained weight. Those on the unprocessed diet lost weight.
“This is a stark example of how, when you can change the food environment, you cause really remarkable changes in food intake without people even being aware that they are overeating,” said Dr. Hall.
Just what it is about these relatively novel foods that may trigger overeating is unclear. It could be the crunch, the lack of water content, the engineered balance of sugar/salt/fat, their easy-to-devour texture, or something else.
Some research suggests that the foods may interfere with gut-brain signaling that tells the brain you’re full.
“Evidence is amassing that the nutritional content of processed foods is not accurately conveyed to the brain,” Dana M. Small, PhD, a neuroscientist at Yale, wrote in a recent perspective paper in Science.
Even more concerning: Some animal studies suggest processed foods reprogram the brain to dislike healthy foods.
And once these brain changes are made, they are hard to reverse.
“The problem is, our brain is not wired for this,” said Dr. Halpern. “We are not evolved to eat the food we are eating, so our brain adapts, but it adapts in a negative way that puts us at risk.”
That’s why changing the food environment via public policy must be part of the solution in combating obesity, Dr. Tobias said.
A New Era of Brain-Based Solutions
In the spring of 2021, after years of trying and failing to lose weight via the “move more, eat less” model, Michael Smith began to take a medication called Vyvanse. The drug was approved in 2008 for attention deficit hyperactivity disorder, but since it also influences levels of the hormones dopamine and norepinephrine to reduce cravings, it is now frequently prescribed for binge eating disorder.
“That was pretty much how I got rid of my first 60 to 70 pounds,” Smith said.
A few months later, after he hit a plateau, he had surgery to shrink the size of his stomach — a decision he now second-guesses.
While it kept him from overeating for a time, the fried chicken and gummy bear cravings returned a few months later.
His doctor, Fatima Cody Stanford, MD, put him on a second medication: semaglutide, or Wegovy, the weekly shot approved for weight loss in 2021. It works, in part, by mimicking glucagon-like peptide-1 (GLP-1), a key gut hormone that lets your brain know you are full.
The weight began to fall off again.
Smith’s success story is just one of many that Dr. Stanford, an obesity medicine doctor-scientist at Harvard, has heard in her office in recent years.
“I do not believe these drugs are a panacea,” she said. “There are nonresponders, and those are the patients I take off the medication. But for the high-responders, and there are many of them, they are telling me, ‘Oh my gosh. For the first time in my life, I am not constantly thinking about eating. My life has changed.’”
A Multi-Pronged Approach
Dr. Halpern, at Penn, has also been hearing success stories.
In recent years, he has placed permanent electrodes in the brains of three people with grade III, or severe, obesity and binge eating disorder.
All had tried exercise, dieting, support groups, medication, and weight loss surgery to no avail.
The electrodes modulate an area in the center of the brain called the nucleus accumbens, which in mice studies has been shown to reduce cravings when stimulated.
Thus far, all three are seeing promising results.
“It’s not like I don’t think about food at all,” one of them, Robyn Baldwin, told The New York Times. “But I’m no longer a craving person.”
Dr. Halpern is now extending the trial to more patients and hopes to ultimately include other areas of the brain, including those that involve memory.
He imagines a day when people with severe obesity, who have failed conventional treatments, can walk into a clinic and have their brain circuits assessed to see which ones may be misfiring.
Many might find relief with noninvasive brain stimulation, like transcranial magnetic stimulation (already in use for depression). Others might need a more extreme approach, like the deep brain stimulation, or DBS, therapy Dr. Halpern used.
“Obviously, DBS is hard to scale, so it would have to be reserved for the most severe patients,” he said.
Still, not everyone believes brain-based drugs and surgeries are the answer.
David Ludwig, MD, PhD, a professor of nutrition at the Harvard School of Public Health, played a key role in the discovery of GLP-1 and acknowledges that “of course” the brain influences body composition. But to him, explaining obesity as a disease of the brain oversimplifies it, discounting metabolic factors such as a tendency to store too much fat.
He noted that it’s hard to get drug companies, or any agencies, to fund large clinical trials on simple things like low-carbohydrate diets or exercise programs.
“We need all the tools we can get in the battle against the obesity epidemic, and new technologies are worth exploring,” he said. “However, the success of these drugs should not lead us to deprioritize diet and lifestyle interventions.”
Dr. Stanford, who has received consulting fees from Wegovy, believes the future of treatment lies in a multi-pronged approach, with surgery, medication, and lifestyle changes coalescing in a lasting, but fragile, remission.
“Unfortunately, there is no cure for obesity,” said Dr. Stanford, whose patients often have setbacks and must try new strategies. “There are treatments that work for a while, but they are constantly pushing up against this origin in the brain.”
Smith says understanding this has been a big part of his success.
He is now a leaner and healthier 5-foot-6 and 204 pounds. In addition to taking his medication, he walks to work, goes to the gym twice a week, limits his portions, and tries to reframe the way he thinks about food, viewing it as fuel rather than an indulgence.
Sometimes, when he looks in the mirror, he is reminded of his 380-pound self, and it scares him. He doesn’t want to go back there. He’s confident now that he won’t have to.
“There is this misconception out there that you just need to put the fork down, but I’m learning it’s more complicated than that,” he said. “I intend to treat this as the illness that it is and do what I need to combat it so I’m able to keep this new reality I have built for myself.”
A version of this article appeared on WebMD.com .
For much of his life, 32-year-old Michael Smith had a war going on in his head.
After a big meal, he knew he should be full. But an inexplicable hunger would drive him to pick up the fork again.
Cravings for fried chicken or gummy bears overwhelmed him, fueling late-night DoorDash orders that — despite their bounty of fat and sugar — never satisfied him.
He recalls waking up on the couch, half-eaten takeout in his lap, feeling sluggish and out of control.
“It was like I was food drunk,” recalls Smith, who lives in Boston. “I had a moment I looked at myself in the mirror. I was around 380 pounds, and I said, ‘OK, something has got to give.’ “
Smith is among the 42% of U.S. adults living with obesity, a misunderstood and stubbornly hard-to-manage condition that doctors have only recently begun to call a disease. Its root causes have been debated for decades, with studies suggesting everything from genes to lifestyle to a shifting food supply loaded with carbohydrates and ultra-processed foods. Solutions have long targeted self-discipline and a simple “eat less, move more” strategy with remarkably grim results.
Those who successfully slim down tend to gain back 50% of that weight within 2 years, and 80% within 5 years. Meanwhile, the obesity epidemic marches on.
But a new frontier of brain-based therapies — from GLP-1 agonist drugs thought to act on reward and appetite centers to deep brain stimulation aimed at resetting neural circuits — has kindled hope among patients like Smith and the doctors who treat them. The treatments, and theories behind them, are not without controversy. They’re expensive, have side effects, and, critics contend, pull focus from diet and exercise.
But most agree that in the battle against obesity, one crucial organ has been overlooked.
“Obesity, in almost all circumstances, is most likely a disorder of the brain,” said Casey Halpern, MD, associate professor of neurosurgery at the University of Pennsylvania. “What these individuals need is not simply more willpower, but the therapeutic equivalent of an electrician that can make right these connections inside their brain.”
A Break in the Machine
Throughout the day, the machine that is our brain is constantly humming in the background, taking in subtle signals from our gut, hormones, and environment to determine when we’re hungry, how food makes us feel, and whether we are taking in enough energy, or expending too much, to survive.
said Kevin Hall, PhD, an obesity researcher with the National Institute of Diabetes and Digestive and Kidney Diseases. “I liken it to holding your breath. I can do that for a period of time, and I have some conscious control. But eventually, physiology wins out.”
Mounting evidence suggests that in people with obesity, something in the machine is broken.
One seminal 2001 study in The Lancet suggested that, like people addicted to cocaine or alcohol, they lack receptors to the feel-good brain chemical dopamine and overeat in pursuit of the pleasure they lack.
A recent study, not yet published, from Dr. Hall’s lab drew a slightly different conclusion, suggesting that people with obesity actually have too much dopamine, filling up those receptors so the pleasure spike from eating doesn’t feel like much.
“It’s kind of like trying to shout in a noisy room. You’re going to have to shout louder to have the same effect,” said Dr. Hall.
Gut-brain pathways that tell us we’re full may also be impaired.
In another study, Yale researchers tube-fed 500 calories of sugar or fat directly into the stomachs of 28 lean people and 30 people with obesity. Then they observed brain activity using functional magnetic resonance imaging (fMRI).
In lean people, about 30 regions of the brain quieted after the meal, including parts of the striatum (associated with cravings).
In those with obesity, the brain barely responded at all.
“In my clinic, patients will often say ‘I just finished my dinner, but it doesn’t feel like it,’” said senior author Mireille Serlie, MD, PhD, an obesity researcher at the Yale School of Medicine. “It may be that this nutrient-sensing interaction between the gut and the brain is less pronounced or comes too late for them after the meal.”
Dr. Halpern recently identified a brain circuit linking a memory center (hippocampus) to an appetite control region (hypothalamus). In people with obesity and binge eating disorder, the circuit appears jammed. This may cause them to, in a sense, forget they just ate.
“Some of their eating episodes are almost dissociative — they’re not realizing how much they are eating and can’t keep track of it,” he said.
Another brain system works to maintain longer-term homeostasis — or weight stability. Like a set thermostat, it kicks on to trigger hunger and fatigue when it senses we’re low on fat.
The hormone leptin, produced by fat cells, sends signals to the hypothalamus to let it know how much energy we have on board.
“If leptin levels go up, it signals the brain that you have too much fat and you should eat less to return to the starting point,” said Rockefeller University geneticist Jeffrey Friedman, MD, PhD, who discovered the hormone in 1994. “If you have too little fat and leptin is low, that will stimulate appetite to return you to the starting point.”
In people with obesity, he said, the thermostat — or set point the body seeks to maintain — is too high.
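The set-point "thermostat" Dr. Friedman describes is a negative feedback loop: leptin tracks fat mass, and appetite adjusts intake to pull fat stores back toward a defended target. A purely illustrative sketch, with invented numbers rather than real physiology:

```python
# Purely illustrative sketch of the leptin "set point" analogy: a
# negative feedback loop in which a signal proportional to fat mass
# nudges intake back toward a defended target. Units and gain are
# invented for illustration, not physiological values.

def appetite_signal(fat_mass, set_point, gain=0.5):
    """Positive -> eat more; negative -> eat less."""
    return gain * (set_point - fat_mass)

fat = 20.0          # arbitrary units of stored fat
set_point = 25.0    # the target the loop defends
for _ in range(20):
    fat += appetite_signal(fat, set_point)  # intake shifts fat stores

print(round(fat, 2))  # converges toward the set point: 25.0
```

In this toy loop, a set point that is "too high," as Dr. Friedman suggests occurs in obesity, simply means the same feedback machinery defends a larger fat mass.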
All this raises a crucial question: How do these circuits and pathways malfunction in the first place?
What Breaks the Brain?
Genes, scientists agree, play a role.
Studies show that genetics underlie as much as 75% of people’s differences in body mass index (BMI), with certain gene combinations raising obesity risk in particular environments.
While hundreds of genes are believed to have a small effect, about a dozen single genes are thought to have a large effect. (Notably, most influence brain function.) For instance, about 6% of people with severe obesity since childhood have mutations in a gene called MC4R (melanocortin 4 receptor), which influences leptin signaling.
Still, genetics alone cannot account for the explosion in obesity in the U.S. over the last 50 years, says epidemiologist Deirdre Tobias, ScD, assistant professor of medicine at Harvard Medical School.
At the population level, “our genes don’t change that much in less than a generation,” she said.
But our food supply has.
Ultra-processed foods — those containing hydrogenated oils, high-fructose corn syrup, flavoring agents, emulsifiers, and other manufactured ingredients — now make up about 60% of the food supply.
“The evidence is fairly consistent indicating that there’s something about these foods that is possibly causing obesity,” said Tobias.
In one telling 2019 study, Dr. Hall and his colleagues brought 20 men and women into a study center to live for a month and tightly controlled their food intake and activity. One group was provided with meals with 80% of calories from ultra-processed food. The other was given meals with no processed food.
The three daily meals provided had the same calories, sugars, fats, fiber, and carbohydrates, and people were told to eat as much as they wanted.
Those on the ultra-processed diet ate about 500 calories more per day, ate faster, and gained weight. Those on the unprocessed diet lost weight.
“This is a stark example of how, when you can change the food environment, you cause really remarkable changes in food intake without people even being aware that they are overeating,” said Dr. Hall.
Just what it is about these relatively novel foods that may trigger overeating is unclear. It could be the crunch, the lack of water content, the engineered balance of sugar/salt/fat, their easy-to-devour texture, or something else.
Some research suggests that the foods may interfere with gut-brain signaling that tells the brain you’re full.
“Evidence is amassing that the nutritional content of processed foods is not accurately conveyed to the brain,” Dana M. Small, PhD, a neuroscientist at Yale, wrote in a recent perspective paper in Science.
Even more concerning: Some animal studies suggest processed foods reprogram the brain to dislike healthy foods.
And once these brain changes are made, they are hard to reverse.
“The problem is, our brain is not wired for this,” said Dr. Halpern. “We are not evolved to eat the food we are eating, so our brain adapts, but it adapts in a negative way that puts us at risk.”
That’s why changing the food environment via public policy must be part of the solution in combating obesity, Dr. Tobias said.
A New Era of Brain-Based Solutions
In the spring of 2021, after years of trying and failing to lose weight via the “move more, eat less” model, Michael Smith began to take a medication called Vyvanse. The drug was approved in 2008 for attention deficit hyperactivity disorder, but because it also influences levels of the neurotransmitters dopamine and norepinephrine to reduce cravings, it is now frequently prescribed for binge eating disorder.
“That was pretty much how I got rid of my first 60 to 70 pounds,” Smith said.
A few months later, after he hit a plateau, he had surgery to shrink the size of his stomach — a decision he now second-guesses.
While it kept him from overeating for a time, the fried chicken and gummy bear cravings returned a few months later.
His doctor, Fatima Cody Stanford, MD, put him on a second medication: semaglutide, or Wegovy, the weekly shot approved for weight loss in 2021. It works, in part, by mimicking glucagon-like peptide-1 (GLP-1), a key gut hormone that lets your brain know you are full.
The weight began to fall off again.
Smith’s success story is just one of many that Dr. Stanford, an obesity medicine doctor-scientist at Harvard, has heard in her office in recent years.
“I do not believe these drugs are a panacea,” she said. “There are nonresponders, and those are the patients I take off the medication. But for the high-responders, and there are many of them, they are telling me, ‘Oh my gosh. For the first time in my life, I am not constantly thinking about eating. My life has changed.’”
A Multi-Pronged Approach
Dr. Halpern, at Penn, has also been hearing success stories.
In recent years, he has placed permanent electrodes in the brains of three people with grade III, or severe, obesity and binge eating disorder.
All had tried exercise, dieting, support groups, medication, and weight loss surgery to no avail.
The electrodes modulate an area in the center of the brain called the nucleus accumbens; in mouse studies, stimulating this region has been shown to reduce cravings.
Thus far, all three are seeing promising results.
“It’s not like I don’t think about food at all,” one of them, Robyn Baldwin, told The New York Times. “But I’m no longer a craving person.”
Dr. Halpern is now extending the trial to more patients and hopes to ultimately include other areas of the brain, including those that involve memory.
He imagines a day when people with severe obesity, who have failed conventional treatments, can walk into a clinic and have their brain circuits assessed to see which ones may be misfiring.
Many might find relief with noninvasive brain stimulation, like transcranial magnetic stimulation (already in use for depression). Others might need a more extreme approach, like the deep brain stimulation, or DBS, therapy Dr. Halpern used.
“Obviously, DBS is hard to scale, so it would have to be reserved for the most severe patients,” he said.
Still, not everyone believes brain-based drugs and surgeries are the answer.
David Ludwig, MD, PhD, a professor of nutrition at the Harvard School of Public Health, played a key role in the discovery of GLP-1 and acknowledges that “of course” the brain influences body composition. But to him, explaining obesity as a disease of the brain oversimplifies it, discounting metabolic factors such as a tendency to store too much fat.
He noted that it’s hard to get drug companies, or any agencies, to fund large clinical trials on simple things like low-carbohydrate diets or exercise programs.
“We need all the tools we can get in the battle against the obesity epidemic, and new technologies are worth exploring,” he said. “However, the success of these drugs should not lead us to deprioritize diet and lifestyle interventions.”
Dr. Stanford, who has received consulting fees from the maker of Wegovy, believes the future of treatment lies in a multi-pronged approach, with surgery, medication, and lifestyle changes coalescing in a lasting, but fragile, remission.
“Unfortunately, there is no cure for obesity,” said Dr. Stanford, whose patients often have setbacks and must try new strategies. “There are treatments that work for a while, but they are constantly pushing up against this origin in the brain.”
Smith says understanding this has been a big part of his success.
He is now a leaner and healthier 5-foot-6 and 204 pounds. In addition to taking his medication, he walks to work, goes to the gym twice a week, limits his portions, and tries to reframe the way he thinks about food, viewing it as fuel rather than an indulgence.
Sometimes, when he looks in the mirror, he is reminded of his 380-pound self, and it scares him. He doesn’t want to go back there. He’s confident now that he won’t have to.
“There is this misconception out there that you just need to put the fork down, but I’m learning it’s more complicated than that,” he said. “I intend to treat this as the illness that it is and do what I need to combat it so I’m able to keep this new reality I have built for myself.”
A version of this article appeared on WebMD.com .
FDA approves implant for glaucoma
The iDose TR (Glaukos Corp) is inserted into a corneal incision on the temple side of the eye. Pivotal phase 3 clinical trials showed the treatment resulted in sustained reductions in IOP for 3 months ranging from 6.6 to 8.4 mm Hg, comparable to reductions with topical timolol 0.5% drops used twice daily. Normal IOP is 10-21 mm Hg, and glaucoma treatments are designed to reduce high IOP into the normal range.
Glaukos Corp said that it intends a commercial launch of the implant early in 2024, with a wholesale cost of $13,950 per implant.
Travoprost is a prostaglandin analog that has been long used as a topical formulation for lowering IOP in OAG and OHT. Timolol is a topical beta-blocker widely used for the same indications.
iDose TR comes in a preloaded handheld injector designed to deliver the implant into the sclera of the eye. The implant seats in the junction of the iris, sclera, and cornea.
In two phase 3 clinical trials, 81% of patients who received the iDose TR did not require supplemental drops to reduce IOP after 12 months compared with 95% of those who received timolol alone.
The phase 3 trials included 1150 participants across 89 clinical sites. Both trials, GC-010 and GC-012, met the primary endpoints through 3 months and demonstrated a favorable tolerability and safety profile through 12 months, according to results that John Berdahl, MD, a researcher with Vance Thompson Vision in Sioux Falls, South Dakota, and an investigator for Glaukos, presented in May at the annual meeting of the American Society of Cataract and Refractive Surgery.
Based on these outcomes, the FDA concluded in the prescribing information that iDose TR demonstrated noninferiority to topical timolol in reduction of IOP during the first 3 months of treatment. The agency also noted that use of iDose TR did not demonstrate noninferiority over the next 9 months.
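The noninferiority logic the FDA applied can be illustrated with a minimal sketch: a new treatment is declared noninferior if the lower confidence bound for its mean IOP reduction, minus the comparator's, stays above a prespecified margin. The margin and confidence bounds below are hypothetical, since the trials' statistical analysis plan is not described in this article.

```python
# Minimal sketch of a noninferiority check on mean IOP reduction.
# The margin and the confidence-interval bounds are hypothetical
# illustrations, not values from the GC-010/GC-012 trials.

def noninferior(diff_lower_ci, margin):
    """Noninferiority holds if the lower bound of the CI for
    (new treatment - comparator) mean IOP reduction exceeds -margin."""
    return diff_lower_ci > -margin

MARGIN = 1.0  # hypothetical noninferiority margin, mm Hg

print(noninferior(-0.4, MARGIN))  # True: difference stays within margin
print(noninferior(-1.6, MARGIN))  # False: lower bound breaches margin
```

This framing explains how the same treatment can be noninferior over one interval (months 0-3) yet fail to demonstrate noninferiority over another (months 4-12): the check is repeated per interval, and the confidence bound can cross the margin later.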
In the controlled studies, the most common ocular adverse reactions, reported in 2% to 6% of patients who received iDose TR, were increases in IOP, iritis, dry eye, and visual field defects, most of which were reported as mild and transient.
A version of this article appeared on Medscape.com.
GVHD raises vitiligo risk in transplant recipients
The study was published online in JAMA Dermatology on December 13.
In the cohort study, the greatest risk occurred with hematopoietic stem cell transplants (HSCTs) and in cases involving GVHD. Kidney and liver transplants carried slight increases in risk.
“The findings suggest that early detection and management of vitiligo lesions can be improved by estimating the likelihood of its development in transplant recipients and implementing a multidisciplinary approach for monitoring,” wrote the authors, from the departments of dermatology and biostatistics, at the Catholic University of Korea, Seoul.
Using claims data from South Korea’s National Health Insurance Service database, the investigators compared vitiligo incidence among 23,829 patients who had undergone solid organ transplantation (SOT) or HSCT between 2010 and 2017 versus that of 119,145 age- and sex-matched controls. At a mean observation time of 4.79 years in the transplant group (and 5.12 years for controls), the adjusted hazard ratio (AHR) for vitiligo among patients who had undergone any transplant was 1.73. AHRs for HSCT, liver transplants, and kidney transplants were 12.69, 1.63, and 1.50, respectively.
Patients who had undergone allogeneic HSCT (AHR, 14.43) or autologous transplants (AHR, 5.71), as well as those with and without GVHD (AHRs, 24.09 and 8.21, respectively), had significantly higher vitiligo risk than the control group.
Among those with GVHD, HSCT recipients (AHR, 16.42) and those with allogeneic grafts (AHR, 16.81) had a higher vitiligo risk than that of control patients.
In a subgroup that included 10,355 transplant recipients who underwent posttransplant health checkups, investigators found the highest vitiligo risk — AHR, 25.09 versus controls — among HSCT recipients with comorbid GVHD. However, patients who underwent SOT, autologous HSCT, or HSCT without GVHD showed no increased vitiligo risk in this analysis. “The results of health checkup data analysis may differ from the initial analysis due to additional adjustments for lifestyle factors and inclusion of only patients who underwent a health checkup,” the authors wrote.
Asked to comment on the results, George Han, MD, PhD, who was not involved with the study, told this news organization, “this is an interesting paper where the primary difference from previous studies is the new association between GVHD in hematopoietic stem cell transplant recipients and vitiligo.” Prior research had shown higher rates of vitiligo in HSCT recipients without making the GVHD distinction. Dr. Han is associate professor of dermatology in the Hofstra/Northwell Department of Dermatology, Hyde Park, New York.
Although GVHD may not be top-of-mind for dermatologists in daily practice, he said, the study enhances their understanding of vitiligo risk in HSCT recipients. “In some ways,” Dr. Han added, “the association makes sense, as the activated T cells from the graft attacking the skin in the HSCT recipient follow many of the mechanisms of vitiligo, including upregulating interferon gamma and the CXCR3/CXCL10 axis.”
Presently, he said, dermatologists worry more about solid organ recipients than about HSCT recipients because the long-term immunosuppression required by SOT increases the risk of squamous cell carcinoma (SCC). “However, the risk of skin cancers also seems to be elevated in HSCT recipients, and in this case the basal cell carcinoma (BCC):SCC ratio is not necessarily reversed as we see in solid organ transplant recipients. So the mechanisms are a bit less clear. Interestingly, acute and chronic GVHD have both been associated with increased risks of BCC and SCC/BCC, respectively.”
Overall, Dr. Han said, any transplant recipient should undergo yearly skin checks not only for skin cancers, but also for other skin conditions such as vitiligo. “It would be nice to see this codified into official guidelines, which can vary considerably but are overall more consistent in solid organ transplant recipients than in HSCT recipients. No such guidelines seem to be available for HSCTs.”
The study was funded by the Basic Research in Science & Engineering program through the National Research Foundation of Korea, which is funded by the country’s Ministry of Education. The study authors had no disclosures. Dr. Han reports no relevant financial interests.
FROM JAMA DERMATOLOGY
AI-Aided Stethoscope Beats PCP in Detecting Valvular HD
An artificial intelligence (AI)–assisted stethoscope detected valvular heart disease (VHD) with more than twice the sensitivity of primary care physician (PCP) auscultation, a new study shows.
The results suggest that auscultation with AI-powered analysis of heart sounds is an important primary care tool for detecting VHD, study author Moshe A. Rancier, MD, medical director, Massachusetts General Brigham Community Physicians, Lawrence, Massachusetts, said in an interview.
“Incorporating this AI-assisted device into the primary care exam will help identify patients at risk for VHD earlier and eventually decrease costs in our healthcare system,” he said, because timely detection could avoid emergency room visits and surgeries.
The findings were presented at the annual scientific sessions of the American Heart Association.
VHD Common
Clinically significant VHD, indicating structural damage to heart valves, affects 1 in 10 adults older than 65 years. Patients may be asymptomatic or present to their PCP with an unspecific symptom like fatigue or malaise.
If VHD goes undiagnosed and untreated, patients can develop more severe symptoms, face an increased risk for death, and experience a significantly reduced quality of life, said Dr. Rancier.
Cardiac auscultation, the current point-of-care clinical standard, has relatively low sensitivity for detecting VHD, leaving most patients undiagnosed.
The deep learning–based AI tool uses sound data to detect cardiac murmurs associated with clinically significant VHD. The device used in the study (Eko; Eko Health) is approved by the US Food and Drug Administration and is on the market.
The tool identifies background sounds that might affect the evaluation. “If there’s any noise or breath sounds, it tells me this is not a good heart sound, and asks me to record again,” said Dr. Rancier.
A doctor using the AI-assisted stethoscope carries out the auscultation exam with the sound data captured by a smartphone or tablet and sent to the AI server. “I get an answer in a second as to if there’s a murmur or not,” said Dr. Rancier.
Not only that, but the tool can determine if it’s a systolic or diastolic murmur, he added.
Real-World Population
The study enrolled a “real-world” population of 368 patients, median age 70 years, 61% female, 70% White, and 18% Hispanic without a prior VHD diagnosis or history of murmur, from three primary care clinics in Queens, New York, and Lawrence and Haverhill, Massachusetts.
About 79% of the cohort had hypertension, 68% had dyslipidemia, and 38% had diabetes, “which aligns with the population in the US,” said Dr. Rancier.
Each study participant had a regular exam carried out by Dr. Rancier using a traditional stethoscope to detect murmurs and an exam by a technician with a digital stethoscope that collected phonocardiogram (PCG) data for analysis by AI.
In addition, each patient received an echocardiogram 1-2 weeks later to confirm whether clinically significant VHD was present. An expert panel of cardiologists also reviewed the patient’s PCG recordings to confirm the presence of audible murmurs.
Dr. Rancier and the expert panel were blinded to AI and echocardiogram results.
Researchers calculated performance metrics for both PCP auscultation and the AI in detecting audible VHD.
The study showed that AI improved sensitivity to detect audible VHD by over twofold compared with PCP auscultation (94.1% vs 41.2%), with limited impact on specificity (84.5% vs 95.5%).
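Sensitivity and specificity reduce to simple ratios over a 2x2 confusion matrix. The counts below are hypothetical (the article reports only the final percentages), chosen so the arithmetic reproduces the AI tool's reported figures:

```python
# Illustrative sketch: computing sensitivity and specificity from a
# 2x2 confusion matrix. The counts are hypothetical, chosen to
# reproduce the AI tool's reported figures (94.1% / 84.5%); the
# study's raw counts are not given in this article.

def sensitivity(tp, fn):
    """True-positive rate: share of audible-VHD cases detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of murmur-free patients correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts for a screening cohort
tp, fn = 32, 2    # audible VHD: detected vs missed
tn, fp = 262, 48  # no audible VHD: cleared vs flagged

print(f"sensitivity: {sensitivity(tp, fn):.1%}")  # 94.1%
print(f"specificity: {specificity(tn, fp):.1%}")  # 84.5%
```

The trade-off in the study is visible in these formulas: flagging more murmurs raises sensitivity (fewer false negatives) at the cost of more false positives, which is why the AI's specificity dips relative to PCP auscultation.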
Dr. Rancier stressed the importance of sensitivity because clinicians tend to under-detect murmurs. “You don’t want to miss those patients because the consequences of undiagnosed VHD are dire.”
The AI tool identified 22 patients with moderate or greater VHD who were previously undiagnosed, whereas PCPs identified eight previously undiagnosed patients with VHD.
Dr. Rancier sees this tool being used beyond primary care, perhaps by emergency room personnel.
The authors plan to follow study participants and assess outcomes at for 6-12 months. They also aim to include more patients to increase the study’s power.
Expanding the Technology
They are also interested to see whether the technology can determine which valve is affected; for example, whether the issue is aortic stenosis or mitral regurgitation.
A limitation of the study was its small sample size.
Commenting on the findings, Dan Roden, MD, professor of medicine, pharmacology, and biomedical informatics, senior vice president for personalized medicine at Vanderbilt University Medical Center, Nashville, Tennessee, and chair of the American Heart Association Council on Genomic and Precision Medicine, noted that it demonstrated the AI-based stethoscope “did extraordinarily well” in predicting VHD.
“I see this as an emerging technology — using an AI-enabled stethoscope and perhaps combining it with other imaging modalities, like an AI-enabled echocardiogram built into your stethoscope,” said Dr. Roden.
“Use of these new tools to detect the presence of valvular disease, as well as the extent of valvular disease and the extent of other kinds of heart disease, will likely help to transform CVD care.”
The study was funded by Eko Health Inc. Dr. Rancier and Dr. Roden have no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
An artificial intelligence (AI)–assisted stethoscope detected audible valvular heart disease (VHD) with more than twice the sensitivity of traditional auscultation by primary care physicians, a new study shows.
The results suggest that AI-assisted auscultation — capturing heart sounds through a stethoscope and analyzing them with AI — is an important primary care tool for detecting VHD, study author Moshe A. Rancier, MD, medical director, Massachusetts General Brigham Community Physicians, Lawrence, Massachusetts, said in an interview.
“Incorporating this AI-assisted device into the primary care exam will help identify patients at risk for VHD earlier and eventually decrease costs in our healthcare system,” he said, because timely detection could avoid emergency room visits and surgeries.
The findings were presented at the annual scientific sessions of the American Heart Association.
VHD Common
Clinically significant VHD, indicating structural damage to heart valves, affects 1 in 10 adults older than 65 years. Patients may be asymptomatic or present to their PCP with an unspecific symptom like fatigue or malaise.
If VHD goes undiagnosed and untreated, patients can develop more severe symptoms, face a significantly reduced quality of life, and even be at risk for death, said Dr. Rancier.
Cardiac auscultation, the current point-of-care clinical standard, has relatively low sensitivity for detecting VHD, leaving most patients undiagnosed.
The deep learning–based AI tool uses sound data to detect cardiac murmurs associated with clinically significant VHD. The device used in the study (Eko; Eko Health) is approved by the US Food and Drug Administration and is on the market.
The tool identifies background sounds that might affect the evaluation. “If there’s any noise or breath sounds, it tells me this is not a good heart sound, and asks me to record again,” said Dr. Rancier.
A doctor using the AI-assisted stethoscope carries out the auscultation exam with the sound data captured by a smartphone or tablet and sent to the AI server. “I get an answer in a second as to if there’s a murmur or not,” said Dr. Rancier.
Not only that, but the tool can determine if it’s a systolic or diastolic murmur, he added.
Real-World Population
The study enrolled a “real-world” population of 368 patients, median age 70 years, 61% female, 70% White, and 18% Hispanic without a prior VHD diagnosis or history of murmur, from three primary care clinics in Queens, New York, and Lawrence and Haverhill, Massachusetts.
About 79% of the cohort had hypertension, 68% had dyslipidemia, and 38% had diabetes, “which aligns with the population in the US,” said Dr. Rancier.
Each study participant had a regular exam carried out by Dr. Rancier using a traditional stethoscope to detect murmurs and an exam by a technician with a digital stethoscope that collected phonocardiogram (PCG) data for analysis by AI.
In addition, each patient received an echocardiogram 1-2 weeks later to confirm whether clinically significant VHD was present. An expert panel of cardiologists also reviewed the patient’s PCG recordings to confirm the presence of audible murmurs.
Dr. Rancier and the expert panel were blinded to AI and echocardiogram results.
Researchers calculated performance metrics for both PCP auscultation and the AI in detecting audible VHD.
The study showed that AI improved sensitivity to detect audible VHD by over twofold compared with PCP auscultation (94.1% vs 41.2%), with limited impact on specificity (84.5% vs 95.5%).
Dr. Rancier stressed the importance of sensitivity because clinicians tend to under-detect murmurs. “You don’t want to miss those patients because the consequences of undiagnosed VHD are dire.”
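The sensitivity and specificity figures above come from comparing each method's calls against the echocardiogram reference standard. A minimal sketch of that arithmetic, using hypothetical counts chosen only to illustrate the formulas (they are not the study's raw data):

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# All counts below are hypothetical, picked only so the math
# is easy to follow; they are not the study's raw data.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of confirmed VHD cases that were flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of VHD-free patients correctly cleared."""
    return tn / (tn + fp)

# Hypothetical example: 34 patients with confirmed audible VHD,
# of whom 32 are detected; 200 without VHD, of whom 169 are cleared.
print(f"sensitivity: {sensitivity(32, 2):.1%}")    # prints "sensitivity: 94.1%"
print(f"specificity: {specificity(169, 31):.1%}")  # prints "specificity: 84.5%"
```

High sensitivity matters most in this screening context, as Dr. Rancier notes: a false alarm triggers an echocardiogram, but a missed case goes untreated.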
The AI tool identified 22 patients with moderate or greater VHD who were previously undiagnosed, whereas PCPs identified eight previously undiagnosed patients with VHD.
Dr. Rancier sees this tool being used beyond primary care, perhaps by emergency room personnel.
The authors plan to follow study participants and assess outcomes at 6-12 months. They also aim to include more patients to increase the study’s power.
Expanding the Technology
They are also interested in seeing whether the technology can determine which valve is affected; for example, whether the issue is aortic stenosis or mitral regurgitation.
A limitation of the study was its small sample size.
Commenting on the findings, Dan Roden, MD, professor of medicine, pharmacology, and biomedical informatics, senior vice president for personalized medicine at Vanderbilt University Medical Center, Nashville, Tennessee, and chair of the American Heart Association Council on Genomic and Precision Medicine, noted that the study demonstrated the AI-based stethoscope “did extraordinarily well” in predicting VHD.
“I see this as an emerging technology — using an AI-enabled stethoscope and perhaps combining it with other imaging modalities, like an AI-enabled echocardiogram built into your stethoscope,” said Dr. Roden.
“Use of these new tools to detect the presence of valvular disease, as well as the extent of valvular disease and the extent of other kinds of heart disease, will likely help to transform CVD care.”
The study was funded by Eko Health Inc. Dr. Rancier and Dr. Roden have no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM AHA 2023
Patients With HR-Positive Breast Cancer Can Safely Use ART
SAN ANTONIO — Patients with hormone receptor (HR)–positive breast cancer can safely use assisted reproductive technology (ART) when they pause endocrine therapy to conceive, according to new data from the POSITIVE trial.
“We believe these data are of vital importance for the oncofertility counseling of young breast cancer patients,” Hatem A. Azim Jr., MD, PhD, adjunct professor, School of Medicine and Breast Cancer Center, Monterrey Institute of Technology, Mexico, said in a presentation at the San Antonio Breast Cancer Symposium.
As reported previously by this news organization, the primary results of the POSITIVE trial showed that interrupting endocrine therapy to allow pregnancy does not increase the risk of recurrence at 41 months of follow-up.
Yet, there is concern that use of fertility preservation or assisted reproductive technology methods — especially those that entail the use of hormones — could have harmful effects on patients with HR-positive breast cancers, Dr. Azim explained.
To investigate, Dr. Azim and colleagues did a secondary analysis of outcomes from the POSITIVE trial, focusing on resumption of menstruation and use of fertility preservation and assisted reproductive technologies.
Among 516 women evaluated for the menstruation analysis, two thirds were aged 35 and older and a little more than half (53%) reported amenorrhea at enrollment, “which is not surprising,” Dr. Azim said.
“What is encouraging,” he said, is that 85% of women recovered menses within 6 months and 94% within 12 months of pausing endocrine therapy.
Among 497 evaluable participants who paused endocrine therapy to attempt pregnancy, 368 (74%) became pregnant.
Looking at time to pregnancy, there was a clear association between younger age at enrollment and shorter time to pregnancy. The cumulative incidence of pregnancy at 12 months was 64% in women younger than age 35 years, 54% in those aged 35-39, and 38% in those age 40-42. In a multivariable model, age < 35 was the only factor independently associated with a shorter time to pregnancy.
No Harmful Impact on Breast Cancer Outcomes
Turning to fertility preservation and use of assisted reproductive technologies, roughly half of the women (51%) underwent some form of fertility preservation at breast cancer diagnosis and before trial enrollment, most commonly ovarian stimulation for embryo or oocyte cryopreservation.
After enrollment, 43% of women underwent some form of assisted reproductive technology to attempt pregnancy, most commonly ovarian stimulation for in vitro fertilization (IVF) and cryopreserved embryo transfer.
In the multivariable model, cryopreserved embryo transfer was the only assisted reproductive technology significantly associated with a greater chance of becoming pregnant, more than doubling patients’ odds (odds ratio, 2.4).
“This means that at breast cancer diagnosis, we should consider cryopreservation of embryos for future use if desired,” Dr. Azim said.
Again, age mattered. Women younger than 35 undergoing assisted reproductive technologies had a 50% higher chance of becoming pregnant compared with peers aged 35-39, and an 84% higher chance than women aged 40-42.
Importantly, there was no apparent short-term detrimental impact of fertility preservation and/or assisted reproductive technologies on breast cancer outcomes, Dr. Azim reported. At 3 years, the breast cancer-free interval was almost identical between women who underwent ovarian stimulation for cryopreservation and those who did not (9.7% vs 8.7%).
“POSITIVE showed positive results that emphasize the importance of active oncofertility counseling with the patient starting at diagnosis,” said Hee Jeong Kim, MD, PhD, professor, Division of Breast Surgery, Asan Medical Center, Seoul, Republic of Korea, and discussant for the study.
“These data are reassuring for our young patients with a diagnosis of breast cancer and shows that assisted reproductive technology is an option and is probably safe to do with the caveat that it needs longer follow-up,” added SABCS codirector Carlos Arteaga, MD, director, Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas.
Dr. Azim has no relevant disclosures. Dr. Arteaga is a scientific adviser to Novartis, Lilly, Merck, AstraZeneca, Daiichi Sankyo, OrigiMed, Immunomedics, PUMA Biotechnology, TAIHO Oncology, Sanofi, and the Susan G. Komen Foundation. He has received grant support from Pfizer, Lilly, and Takeda. Dr. Kim reports no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM SABCS 2023
Living in a Food Swamp Tied to High Breast Cancer Mortality
SAN ANTONIO — Living in a food swamp is tied to higher postmenopausal breast cancer mortality, a novel ecological study has found.
“Food deserts and food swamps are both bad, but it’s worse in food swamps,” Malcolm Bevel, PhD, MSPH, with Augusta University in Georgia, said in an interview.
He presented his research at the San Antonio Breast Cancer Symposium.
Breast cancer is the fourth leading cause of cancer death in the United States and is one of 13 obesity-related cancers. Healthy food consumption is a protective factor shown to decrease obesity risk and postmenopausal breast cancer mortality.
However, residing in food deserts or food swamps reduces access to healthy foods and has been severely understudied regarding postmenopausal breast cancer mortality, Dr. Bevel explained.
To investigate, Dr. Bevel and colleagues conducted a cross-sectional ecological analysis in which they merged 2010 to 2020 postmenopausal breast cancer mortality data from the Centers for Disease Control and Prevention (CDC) with aggregated 2012 to 2020 data from the US Department of Agriculture Food Environment Atlas.
A food swamp score was calculated as the ratio of fast-food and convenience stores to grocery stores and farmer’s markets.
A food desert score was calculated as the proportion of residents living more than 1 mile (urban) or 10 miles (rural) from a grocery store and with household income ≤ 200% of the federal poverty threshold.
The researchers categorized food deserts and food swamps as low, moderate, or high, with higher scores denoting counties with fewer resources for healthy food.
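The scoring described above boils down to simple county-level arithmetic. A minimal sketch, using hypothetical outlet counts and illustrative tier cutoffs (the study's actual cutoffs and the Food Environment Atlas field names are not reproduced here):

```python
# County-level food-swamp scoring as described in the study.
# Outlet counts and the low/high cutoffs below are illustrative
# assumptions, not the study's data or thresholds.

def food_swamp_score(fast_food: int, convenience: int,
                     grocery: int, farmers_markets: int) -> float:
    """Ratio of unhealthy to healthy food outlets in a county."""
    return (fast_food + convenience) / (grocery + farmers_markets)

def categorize(score: float, low_cut: float, high_cut: float) -> str:
    """Bucket a score into low/moderate/high tiers; higher scores
    mean fewer resources for healthy food."""
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "moderate"
    return "high"

# Hypothetical county: 40 fast-food + 25 convenience stores
# versus 12 grocery stores + 3 farmers' markets.
score = food_swamp_score(40, 25, 12, 3)
print(round(score, 2), categorize(score, 2.0, 4.0))  # prints "4.33 high"
```

The food desert score follows the same pattern, but as a proportion of low-income residents far from a grocery store rather than an outlet ratio.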
Counties with high postmenopausal breast cancer mortality rates had a higher percentage of non-Hispanic Black residents (5.8% vs 2.1%) and higher rates of poverty (17.2% vs 14.2%), adult obesity (32.5% vs 32%), and diabetes (11.8% vs 10.5%), compared with counties with low postmenopausal breast cancer mortality rates, Dr. Bevel reported.
The age-adjusted odds of counties having high postmenopausal breast cancer mortality were 53% higher in counties with high food desert scores (adjusted odds ratio [aOR], 1.53; 95% CI, 1.26-1.88), and over twofold higher in those with high food swamp scores (aOR, 2.09; 95% CI, 1.69-2.58).
In fully adjusted models, the likelihood of counties having moderate postmenopausal breast cancer mortality rates was 32% higher in those with moderate food swamp scores (aOR, 1.32; 95% CI, 1.03 - 1.70).
Growing Epidemic Requires System Change
These findings are in line with another study by Dr. Bevel and his colleagues published earlier this year in JAMA Oncology.
In that study, communities with easy access to fast food were 77% more likely to have high levels of obesity-related cancer mortality, as reported by this news organization.
There is a “growing epidemic” of food deserts and food swamps in the US, which could be due to systemic issues such as gentrification/redlining and lack of investment with chain grocery stores that provide healthy food options, said Dr. Bevel.
Local policymakers and community stakeholders could implement culturally tailored, sustainable interventions for obesity and obesity-related cancer prevention, including postmenopausal breast cancer. These could include creating more walkable neighborhoods and community vegetable gardens, he suggested.
“This is an important study demonstrating how the environment impacts outcomes in postmenopausal women diagnosed with breast cancer,” said Lia Scott, PhD, MPH, discussant for the study.
“Most of the literature is primarily focused on food deserts to characterize the food environment. However, these authors looked at both food deserts and food swamps. And even after adjusting for various factors and age, counties with high food swamp scores were at greater odds of having higher postmenopausal breast cancer mortality rates,” said Dr. Scott, who is from Georgia State University School of Public Health in Atlanta.
“There is a clear need for systems change. With ecological studies like this one, we could potentially drive policy by providing actionable data,” she added.
The study had no specific funding. Dr. Bevel and Dr. Scott report no relevant financial relationships.
A version of this article appeared on Medscape.com.
SAN ANTONIO — a novel ecological study has found.
“Food deserts and food swamps are both bad, but it’s worse in food swamps,” Malcolm Bevel, PhD, MSPH, with Augusta University in Georgia, said in an interview.
He presented his research at the San Antonio Breast Cancer Symposium.
SAN ANTONIO — Counties with limited access to healthy food, particularly those classified as food swamps, have higher rates of postmenopausal breast cancer mortality, a novel ecological study has found.
“Food deserts and food swamps are both bad, but it’s worse in food swamps,” Malcolm Bevel, PhD, MSPH, with Augusta University in Georgia, said in an interview.
He presented his research at the San Antonio Breast Cancer Symposium.
Breast cancer is the fourth leading cause of cancer death in the United States and is one of 13 obesity-related cancers. Healthy food consumption is a protective factor shown to decrease obesity risk and postmenopausal breast cancer mortality.
However, residing in food deserts or food swamps reduces access to healthy foods and has been severely understudied regarding postmenopausal breast cancer mortality, Dr. Bevel explained.
To investigate, Dr. Bevel and colleagues did a cross-sectional, ecological analysis where they merged 2010 to 2020 postmenopausal breast cancer mortality data from the Centers for Disease Control and Prevention (CDC) with aggregated 2012 to 2020 data from the US Department of Agriculture Food Environment Atlas.
A food swamp score was calculated as the ratio of fast-food and convenience stores to grocery stores and farmers markets.
A food desert score was calculated as the proportion of residents living more than 1 mile (urban) or 10 miles (rural) from a grocery store and having a household income ≤ 200% of the federal poverty threshold.
The researchers categorized food deserts and food swamps as low, moderate, or high, with higher scores denoting counties with fewer resources for healthy food.
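As a rough sketch, the scoring and binning described above can be expressed in a few lines of code (the function names, counts, and tertile cutoffs below are hypothetical, not the authors' actual variables or thresholds):

```python
def food_swamp_score(fast_food, convenience, grocery, farmers_markets):
    """Ratio of unhealthy to healthy food outlets in a county.

    Illustrative only; the USDA Food Environment Atlas defines its own
    variables, and the study's exact formula may differ.
    """
    healthy = grocery + farmers_markets
    if healthy == 0:
        return float("inf")  # no healthy outlets at all
    return (fast_food + convenience) / healthy


def categorize(score, low_cut, high_cut):
    """Bin a county's score into low/moderate/high (cutoffs assumed)."""
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "moderate"
    return "high"


# Hypothetical county: 40 fast-food outlets, 25 convenience stores,
# 10 grocery stores, 2 farmers markets -> score = 65/12, about 5.4
score = food_swamp_score(40, 25, 10, 2)
print(categorize(score, low_cut=2.0, high_cut=4.0))  # prints "high"
```

A food desert score would be built analogously, from the share of residents who are both low income and far from a grocery store.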
Compared with counties with low postmenopausal breast cancer mortality rates, counties with high rates had a larger non-Hispanic Black population share (5.8% vs 2.1%) and higher rates of poverty (17.2% vs 14.2%), adult obesity (32.5% vs 32.0%), and diabetes (11.8% vs 10.5%), Dr. Bevel reported.
The age-adjusted odds of a county having high postmenopausal breast cancer mortality were 53% higher in counties with high food desert scores (adjusted odds ratio [aOR], 1.53; 95% CI, 1.26-1.88) and more than twofold higher in those with high food swamp scores (aOR, 2.09; 95% CI, 1.69-2.58).
In fully adjusted models, the likelihood of counties having moderate postmenopausal breast cancer mortality rates was 32% higher in those with moderate food swamp scores (aOR, 1.32; 95% CI, 1.03-1.70).
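For readers less familiar with odds ratios: an unadjusted OR comes straight from a 2×2 table of exposure by outcome, whereas the adjusted ORs above additionally control for covariates in a regression model. A minimal sketch, with invented counts:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted OR from a 2x2 table:

                  high mortality   low mortality
    food swamp          a                b
    no swamp            c                d
    """
    return (a * d) / (b * c)

# Invented counts for illustration only -- not the study's data.
print(odds_ratio(50, 50, 25, 52))  # 2600/1250 = 2.08
```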
Growing Epidemic Requires System Change
These findings are in line with another study by Dr. Bevel and his colleagues published earlier this year in JAMA Oncology.
In that study, communities with easy access to fast food were 77% more likely to have high levels of obesity-related cancer mortality, as reported by this news organization.
There is a “growing epidemic” of food deserts and food swamps in the US, which could be due to systemic issues such as gentrification/redlining and lack of investment with chain grocery stores that provide healthy food options, said Dr. Bevel.
Local policymakers and community stakeholders could implement culturally tailored, sustainable interventions for obesity and obesity-related cancer prevention, including postmenopausal breast cancer. These could include creating more walkable neighborhoods and community vegetable gardens, he suggested.
“This is an important study demonstrating how the environment impacts outcomes in postmenopausal women diagnosed with breast cancer,” said Lia Scott, PhD, MPH, discussant for the study.
“Most of the literature is primarily focused on food deserts to characterize the food environment. However, these authors looked at both food deserts and food swamps. And even after adjusting for various factors and age, counties with high food swamp scores were at greater odds of having higher postmenopausal breast cancer mortality rates,” said Dr. Scott, who is from Georgia State University School of Public Health in Atlanta.
“There is a clear need for systems change. With ecological studies like this one, we could potentially drive policy by providing actionable data,” she added.
The study had no specific funding. Dr. Bevel and Dr. Scott report no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM SABCS 2023
Why Are Prion Diseases on the Rise?
This transcript has been edited for clarity.
In 1986, in Britain, cattle started dying.
The condition, quickly nicknamed “mad cow disease,” was clearly infectious, but the particular pathogen was difficult to identify. By 1993, 120,000 cattle in Britain were identified as being infected. As yet, no human cases had occurred and the UK government insisted that cattle were a dead-end host for the pathogen. By the mid-1990s, however, multiple human cases, attributable to ingestion of meat and organs from infected cattle, were discovered. In humans, variant Creutzfeldt-Jakob disease (CJD) was a media sensation — a nearly uniformly fatal, untreatable condition with a rapid onset of dementia, mobility issues characterized by jerky movements, and autopsy reports finding that the brain itself had turned into a spongy mess.
The United States banned UK beef imports in 1996 and only lifted the ban in 2020.
The disease was made all the more mysterious because the pathogen involved was not a bacterium, parasite, or virus, but a protein — or a proteinaceous infectious particle, shortened to “prion.”
Prions are misfolded proteins that aggregate in cells — in this case, in nerve cells. But what makes prions different from other misfolded proteins is that the misfolded protein catalyzes the conversion of its non-misfolded counterpart into the misfolded configuration. It creates a chain reaction, leading to rapid accumulation of misfolded proteins and cell death.
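The runaway character of that chain reaction is easy to see in a toy simulation (this is a cartoon of autocatalysis with arbitrary parameters, not a quantitative model of prion kinetics):

```python
def simulate(normal=1000.0, misfolded=1.0, k=0.001, steps=20):
    """Toy autocatalytic model: at each step, misfolded protein converts
    normal protein at a rate proportional to misfolded * normal."""
    history = []
    for _ in range(steps):
        converted = min(k * misfolded * normal, normal)
        normal -= converted
        misfolded += converted
        history.append(misfolded)
    return history

h = simulate()
# Growth is slow at first, then explosive: every newly misfolded copy
# becomes another catalyst, until the normal pool is exhausted.
```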
And, like a time bomb, we all have prion protein inside us. In its normally folded state, the function of prion protein remains unclear — knockout mice do okay without it — but it is also highly conserved across mammalian species, so it probably does something worthwhile, perhaps protecting nerve fibers.
Far more common than humans contracting mad cow disease is the condition known as sporadic CJD, responsible for 85% of all cases of prion-induced brain disease. The cause of sporadic CJD is unknown.
But one thing is known: Cases are increasing.
I don’t want you to freak out; we are not in the midst of a CJD epidemic. But it’s been a while since I’ve seen people discussing the condition — which remains as horrible as it was in the 1990s — and a new research letter appearing in JAMA Neurology brought it back to the top of my mind.
Researchers, led by Matthew Crane at Hopkins, used the CDC’s WONDER cause-of-death database, which pulls diagnoses from death certificates. Normally, I’m not a fan of using death certificates for cause-of-death analyses, but in this case I’ll give it a pass. Assuming that the diagnosis of CJD is made, it would be really unlikely for it not to appear on a death certificate.
The main findings are seen here.
Note that we can’t tell whether these are sporadic CJD cases or variant CJD cases or even familial CJD cases; however, unless there has been a dramatic change in epidemiology, the vast majority of these will be sporadic.
The question is, why are there more cases?
Whenever this type of question comes up with any disease, there are basically three possibilities:
First, there may be an increase in the susceptible, or at-risk, population. In this case, we know that older people are at higher risk of developing sporadic CJD, and over time, the population has aged. To be fair, the authors adjusted for this and still saw an increase, though it was attenuated.
Second, we might be better at diagnosing the condition. A lot has happened since the mid-1990s, when the diagnosis was based more or less on symptoms. The advent of more sophisticated MRI protocols as well as a new diagnostic test called “real-time quaking-induced conversion testing” may mean we are just better at detecting people with this disease.
Third (and most concerning), a new exposure has occurred. What that exposure might be, where it might come from, is anyone’s guess. It’s hard to do broad-scale epidemiology on very rare diseases.
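On the first point, age adjustment is usually done by direct standardization: weight each age band's death rate by a fixed standard population's share of that band. A sketch with made-up rates and weights:

```python
def age_adjusted_rate(stratum_rates, standard_weights):
    """Directly standardized rate: age-specific rates weighted by a
    standard population's age distribution (weights sum to 1)."""
    assert abs(sum(standard_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(stratum_rates, standard_weights))

# Hypothetical CJD death rates per million by age band, and a
# standard population's shares of those bands (both invented).
rates = [0.1, 0.5, 3.0, 8.0]        # <45, 45-64, 65-74, 75+
weights = [0.55, 0.25, 0.12, 0.08]
print(age_adjusted_rate(rates, weights))  # about 1.18 per million
```

Comparing such standardized rates across years removes the part of the increase that is explained purely by the population getting older.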
But given these findings, it seems that a bit more surveillance for this rare but devastating condition is well merited.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his new book, How Medicine Works and When It Doesn’t, is available now.
F. Perry Wilson, MD, MSCE, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Is migraine really a female disorder?
BARCELONA, SPAIN — Migraine is widely considered a predominantly female disorder. Its frequency, duration, and severity tend to be higher in women, and women are also more likely than men to receive a migraine diagnosis. However, gender expectations, differences in the likelihood of self-reporting, and problems with how migraine is classified make it difficult to estimate its true prevalence in men and women.
Different Symptoms
Headache disorders are estimated to affect 50% of the general population; tension-type headache and migraine are the two most common. According to epidemiologic studies, migraine is more prevalent in women, with a female-to-male ratio of 3:1. There are numerous studies of why this might be, most of which focus largely on female-related factors, such as hormones and the menstrual cycle.
“Despite many years of research, there isn’t one clear factor explaining this substantial difference between women and men,” said Tobias Kurth of Charité – Universitätsmedizin Berlin, Germany. “So the question is: Are we missing something else?”
One factor in these perceived sex differences in migraine is that women seem to report their migraines differently from men, and they also have different symptoms. For example, women are more likely than men to report severe pain, and their migraine attacks are more often accompanied by photophobia, phonophobia, and nausea, whereas men’s migraines are more often accompanied by aura.
“By favoring female symptoms, the classification system may not be picking up male symptoms because they’re not being classified in the right way,” Dr. Kurth said, with one consequence being that migraine is underdiagnosed in men. “Before trying to understand the biological and behavioral reasons for these sex differences, we first need to consider these methodological challenges that we all apply knowingly or unknowingly.”
Christian Lampl, professor of neurology at Konventhospital der Barmherzigen Brüder Linz, Austria, and president of the European Headache Federation, said in an interview, “I’m convinced that this 3:1 ratio which has been stated for decades is wrong, but we still don’t have the data. The criteria we have [for classifying migraine] are useful for clinical trials, but they are useless for determining the male-to-female ratio.
“We need a new definition of migraine,” he added. “Migraine is an episode, not an attack. Attacks have a sudden onset, and migraine onset is not sudden — it is an episode with a headache attack.”
Inadequate Menopause Services
Professor Anne MacGregor of St. Bartholomew’s Hospital in London, United Kingdom, specializes in migraine and women’s health. She presented data showing that migraine is underdiagnosed in women; one reason being that the disorder receives inadequate attention from healthcare professionals at specialist menopause services.
Menopause is associated with an increased prevalence of migraine, but women do not discuss headache symptoms at specialist menopause services, Dr. MacGregor said.
She then described unpublished results from a survey of 117 women attending the specialist menopause service at St. Bartholomew’s Hospital. Among the respondents, 34% reported experiencing episodic migraine and an additional 8% reported having chronic migraine.
“Within this population of women who were not reporting headache as a symptom [to the menopause service until asked in the survey], 42% of them were positive for a diagnosis of migraine,” said Dr. MacGregor. “They were mostly relying on prescribed paracetamol and codeine, or buying it over the counter, and only 22% of them were receiving triptans.
“They are clearly being undertreated,” she added. “Part of this issue is that they didn’t spontaneously report headache as a menopause symptom, so they weren’t consulting for headache to their primary care physicians.”
Correct diagnosis by a consultant is a prerequisite for receiving appropriate migraine treatment. Yet, according to a US study published in 2012, only 45.5% of women with episodic migraine consulted a prescribing healthcare professional. Of those who consulted, 89% were diagnosed correctly, and only 68% of those received the appropriate treatment.
A larger, more recent study confirmed that there is a massive unmet need for improving care in this patient population. The Chronic Migraine Epidemiology and Outcomes (CaMEO) Study, which analyzed data from nearly 90,000 participants, showed that just 4.8% of people with chronic migraine received consultation, correct diagnosis, and treatment, with 89% of women with chronic migraine left undiagnosed.
The OVERCOME Study further revealed that although many people with migraine were repeat consulters, they were consulting their physicians for other health problems.
“This makes it very clear that people in other specialties need to be more aware about picking up and diagnosing headache,” said Dr. MacGregor. “That’s where the real need is in managing headache. We have the treatments, but if the patients can’t access them, they’re not much good to them.”
A version of this article appeared on Medscape.com.
BARCELONA, SPAIN — Migraine is widely considered a predominantly female disorder. Its frequency, duration, and severity tend to be higher in women, and women are also more likely than men to receive a migraine diagnosis. However, gender expectations, differences in the likelihood of self-reporting, and problems with how migraine is classified make it difficult to estimate its true prevalence in men and women.
Different Symptoms
Headache disorders are estimated to affect 50% of the general population ; tension-type headache and migraine are the two most common. According to epidemiologic studies, migraine is more prevalent in women, with a female-to-male ratio of 3:1. There are numerous studies of why this might be, most of which focus largely on female-related factors, such as hormones and the menstrual cycle.
“Despite many years of research, there isn’t one clear factor explaining this substantial difference between women and men,” said Tobias Kurth of Charité – Universitätsmedizin Berlin, Germany. “So the question is: Are we missing something else?”
One factor in these perceived sex differences in migraine is that women seem to report their migraines differently from men, and they also have different symptoms. For example, women are more likely than men to report severe pain, and their migraine attacks are more often accompanied by photophobia, phonophobia, and nausea, whereas men’s migraines are more often accompanied by aura.
“By favoring female symptoms, the classification system may not be picking up male symptoms because they’re not being classified in the right way,” Dr. Kurth said, with one consequence being that migraine is underdiagnosed in men. “Before trying to understand the biological and behavioral reasons for these sex differences, we first need to consider these methodological challenges that we all apply knowingly or unknowingly.”
Christian Lampl, professor of neurology at Konventhospital der Barmherzigen Brüder Linz, Austria, and president of the European Headache Federation, said in an interview, “I’m convinced that this 3:1 ratio which has been stated for decades is wrong, but we still don’t have the data. The criteria we have [for classifying migraine] are useful for clinical trials, but they are useless for determining the male-to-female ratio.
“We need a new definition of migraine,” he added. “Migraine is an episode, not an attack. Attacks have a sudden onset, and migraine onset is not sudden — it is an episode with a headache attack.”
Inadequate Menopause Services
Professor Anne MacGregor of St. Bartholomew’s Hospital in London, United Kingdom, specializes in migraine and women’s health. She presented data showing that migraine is underdiagnosed in women, one reason being that the disorder receives inadequate attention from healthcare professionals at specialist menopause services.
Menopause is associated with an increased prevalence of migraine, but women do not discuss headache symptoms at specialist menopause services, Dr. MacGregor said.
She then described unpublished results from a survey of 117 women attending the specialist menopause service at St. Bartholomew’s Hospital. Among the respondents, 34% reported experiencing episodic migraine and an additional 8% reported having chronic migraine.
“Within this population of women who were not reporting headache as a symptom [to the menopause service until asked in the survey], 42% of them were positive for a diagnosis of migraine,” said Dr. MacGregor. “They were mostly relying on prescribed paracetamol and codeine, or buying these over the counter, and only 22% of them were receiving triptans.
“They are clearly being undertreated,” she added. “Part of this issue is that they didn’t spontaneously report headache as a menopause symptom, so they weren’t consulting for headache to their primary care physicians.”
Correct diagnosis by a consultant is a prerequisite for receiving appropriate migraine treatment. Yet, according to a US study published in 2012, only 45.5% of women with episodic migraine consulted a prescribing healthcare professional. Of those who consulted, 89% were diagnosed correctly, and only 68% of those received the appropriate treatment.
A larger, more recent study confirmed that there is a massive unmet need for improving care in this patient population. The Chronic Migraine Epidemiology and Outcomes (CaMEO) Study, which analyzed data from nearly 90,000 participants, showed that just 4.8% of people with chronic migraine received consultation, correct diagnosis, and treatment, with 89% of women with chronic migraine left undiagnosed.
The OVERCOME Study further revealed that although many people with migraine were repeat consulters, they were consulting their physicians for other health problems.
“This makes it very clear that people in other specialties need to be more aware about picking up and diagnosing headache,” said Dr. MacGregor. “That’s where the real need is in managing headache. We have the treatments, but if the patients can’t access them, they’re not much good to them.”
A version of this article appeared on Medscape.com.
FROM EHC 2023