Obesity and Cancer: Untangling a Complex Web
According to the Centers for Disease Control and Prevention (CDC), over 684,000 Americans are diagnosed with an “obesity-associated” cancer each year.
The incidence of many of these cancers has been rising in recent years, particularly among younger people — a trend that sits in contrast with the overall decline in cancers with no established relationship to excess weight, such as lung and skin cancers.
Is obesity the new smoking? Not exactly.
While about 42% of cancers — including common ones such as colorectal and postmenopausal breast cancers — are considered obesity-related, only about 8% of incident cancers are attributed to excess body weight. People often develop those diseases regardless of weight.
Although plenty of evidence points to excess body fat as a cancer risk factor, it’s unclear at what point excess weight has an effect. Is gaining weight later in life, for instance, better or worse for cancer risk than being overweight or obese from a young age?
There’s another glaring knowledge gap: Does losing weight at some point in adulthood change the picture? In other words, how many of those 684,000 diagnoses might have been prevented if people shed excess pounds?
When it comes to weight and cancer risk, “there’s a lot we don’t know,” said Jennifer W. Bea, PhD, associate professor, health promotion sciences, University of Arizona, Tucson.
A Consistent but Complicated Relationship
Given the growing incidence of obesity — which currently affects about 42% of US adults and 20% of children and teenagers — it’s no surprise that many studies have delved into the potential effects of excess weight on cancer rates.
Although virtually all the evidence comes from large cohort studies, leaving the cause-effect question open, certain associations keep showing up.
“What we know is that, consistently, a higher body mass index [BMI] — particularly in the obese category — leads to a higher risk of multiple cancers,” said Jeffrey A. Meyerhardt, MD, MPH, codirector, Colon and Rectal Cancer Center, Dana-Farber Cancer Institute, Boston.
In a widely cited report published in The New England Journal of Medicine in 2016, the International Agency for Research on Cancer (IARC) analyzed over 1000 epidemiologic studies on body fat and cancer. The agency pointed to over a dozen cancers, including some of the most common and deadly, linked to excess body weight.
That list includes esophageal adenocarcinoma and endometrial cancer — associated with the highest risk — along with kidney, liver, stomach (gastric cardia), pancreatic, colorectal, postmenopausal breast, gallbladder, ovarian, and thyroid cancers, plus multiple myeloma and meningioma. There’s also “limited” evidence linking excess weight to additional cancer types, including aggressive prostate cancer and certain head and neck cancers.
At the same time, Dr. Meyerhardt said, many of those same cancers are also associated with issues that lead to, or coexist with, overweight and obesity, including poor diet, lack of exercise, and metabolic conditions such as diabetes.
It’s a complicated web, and it’s likely, Dr. Meyerhardt said, that high BMI both directly affects cancer risk and is part of a “causal pathway” of other factors that do.
Regarding direct effects, preclinical research has pointed to multiple ways in which excess body fat could contribute to cancer, said Karen M. Basen-Engquist, PhD, MPH, professor, Division of Cancer Prevention and Population Services, The University of Texas MD Anderson Cancer Center, Houston.
One broad mechanism that helps explain the obesity-cancer link is chronic systemic inflammation: Excess fat tissue can raise levels of substances in the body, such as tumor necrosis factor alpha and interleukin 6, that fuel inflammation. Excess fat also contributes to hyperinsulinemia — too much insulin in the blood — which can help promote the growth and spread of tumor cells.
But the underlying reasons also appear to vary by cancer type, Dr. Basen-Engquist said. With hormonally driven cancer types, such as breast and endometrial, excess body fat may alter hormone levels in ways that spur tumor growth. Extra fat tissue may, for example, convert androgens into estrogens, which could help feed estrogen-dependent tumors.
That, Dr. Basen-Engquist noted, could be why excess weight is associated with postmenopausal, not premenopausal, breast cancer: Before menopause, body fat is a relatively minor contributor to estrogen levels but becomes more important after menopause.
How Big Is the Effect?
While more than a dozen cancers have been consistently linked to excess weight, the strength of those associations varies considerably.
Endometrial and esophageal cancers are two that stand out. In the 2016 IARC analysis, people with severe obesity had a seven-times greater risk for endometrial cancer and a 4.8-times greater risk for esophageal adenocarcinoma vs people with a normal BMI.
With other cancers, the risk increases for those with severe obesity compared with a normal BMI were far more modest: 10% for ovarian cancer, 30% for colorectal cancer, and 80% for kidney and stomach cancers, for example. For postmenopausal breast cancer, every five-unit increase in BMI was associated with a 10% relative risk increase.
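Because those figures are relative risks, they compound across larger BMI differences rather than simply adding. A minimal sketch of the arithmetic, assuming a log-linear dose-response and using the 10%-per-5-units figure reported above (Python; the function name is ours):

```python
def compounded_rr(bmi_gain_units: float, rr_per_5_units: float = 1.10) -> float:
    """Compound a per-5-BMI-unit relative risk across a larger BMI gain.

    Assumes a log-linear dose-response (our assumption, not the article's):
    each 5-unit step multiplies the relative risk by the same factor.
    """
    return rr_per_5_units ** (bmi_gain_units / 5.0)

# A 10-unit gain (e.g., BMI 25 -> 35) implies ~21% higher relative risk, not 20%:
print(f"{compounded_rr(10):.2f}")  # 1.21
```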
A 2018 study from the American Cancer Society, which attempted to estimate the proportion of cancers in the United States attributable to modifiable risk factors — including alcohol consumption, ultraviolet radiation exposure, and physical inactivity — found that smoking accounted for the highest proportion of cancer cases by a wide margin (19%), but excess weight came in second (7.8%).
Again, weight appeared to play a bigger role in certain cancers than others: An estimated 60% of endometrial cancers were linked to excess weight, as were roughly one third of esophageal, kidney, and liver cancers. At the other end of the spectrum, just over 11% of breast, 5% of colorectal, and 4% of ovarian cancers were attributable to excess weight.
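Attributable proportions like these are typically derived from the population attributable fraction, which combines an exposure's prevalence with its relative risk. A minimal sketch using Levin's standard formula (Python; the ACS paper's exact methodology may differ, and the example inputs are purely illustrative):

```python
def population_attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: the fraction of cases attributable to an exposure.

    prevalence: proportion of the population exposed (0-1)
    relative_risk: risk in the exposed relative to the unexposed
    """
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

# Purely illustrative inputs: a 40%-prevalent exposure with RR = 1.3
print(f"{population_attributable_fraction(0.40, 1.3):.1%}")  # ~10.7%
```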
Even at the lower end, those rates could make a big difference on the population level, especially for groups with higher rates of obesity.
CDC data show that obesity-related cancers are rising among women younger than 50 years, most rapidly among Hispanic women, and that some less common obesity-related cancers, such as stomach, thyroid, and pancreatic cancers, are also rising among Black individuals and Hispanic Americans.
Obesity may be one reason for growing cancer disparities, said Leah Ferrucci, PhD, MPH, assistant professor, epidemiology, Yale School of Public Health, New Haven, Connecticut. But, she added, the evidence is limited because Black individuals and Hispanic Americans are understudied.
When Do Extra Pounds Matter?
When it comes to cancer risk, at what point in life does excess weight, or weight gain, matter? Is the standard weight gain in middle age, for instance, as hazardous as being overweight or obese from a young age?
Some evidence suggests there’s no “safe” time for putting on excess pounds.
A recent meta-analysis concluded that weight gain at any point after age 18 years is associated with incremental increases in the risk for postmenopausal breast cancer. A 2023 study in JAMA Network Open found a similar pattern with colorectal and other gastrointestinal cancers: People who had sustained overweight or obesity from age 20 years through middle age faced an increased risk of developing those cancers after age 55 years.
The timing of weight gain didn’t seem to matter either. The same elevated risk held among people who were normal weight in their younger years but became overweight after age 55 years.
Those studies focused on later-onset disease. But, in recent years, experts have tracked a troubling rise in early-onset cancers — those diagnosed before age 50 years — particularly gastrointestinal cancers.
An obvious question, Dr. Meyerhardt said, is whether the growing prevalence of obesity among young people is partly to blame.
There’s some data to support that, he said. An analysis from the Nurses’ Health Study II found that women with obesity had double the risk for early-onset colorectal cancer compared with those with a normal BMI. And every 5-kg increase in weight after age 18 years was associated with a 9% increase in colorectal cancer risk.
But while obesity trends probably partly explain the rise in early-onset cancers, there is likely more to the story, Dr. Meyerhardt said.
“I think all of us who see an increasing number of patients under 50 with colorectal cancer know there’s a fair number who do not fit that [high BMI] profile,” he said. “There’s a fair number over 50 who don’t either.”
Does Weight Loss Help?
With all the evidence pointing to high BMI as a cancer risk factor, a logical conclusion is that weight loss should reduce that excess risk. However, Dr. Bea said, there’s actually little data to support that, and what exists comes from observational studies.
Some research has focused on people who had substantial weight loss after bariatric surgery, with encouraging results. A study published in JAMA found that among 5053 people who underwent bariatric surgery, 2.9% developed an obesity-related cancer over 10 years compared with 4.9% in the nonsurgery group.
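In absolute terms, that difference can be restated as an absolute risk reduction, a relative risk, and a number needed to treat. A back-of-the-envelope reading of the two incidences above (Python; this ignores the study's adjustments and censoring):

```python
# 10-year cumulative incidence of obesity-related cancer, as reported in the JAMA study above
risk_surgery = 0.029
risk_no_surgery = 0.049

arr = risk_no_surgery - risk_surgery  # absolute risk reduction
rr = risk_surgery / risk_no_surgery   # relative risk
nnt = round(1 / arr)                  # number needed to treat

print(f"ARR = {arr:.1%}, RR = {rr:.2f}, NNT = {nnt}")
# ARR = 2.0%, RR = 0.59, NNT = 50
```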
Most people, however, aim for less dramatic weight loss, with the help of diet and exercise or sometimes medication. Some evidence shows that a modest degree of weight loss may lower the risks for postmenopausal breast and endometrial cancers.
A 2020 pooled analysis found, for instance, that among women aged ≥ 50 years, those who lost as little as 2.0-4.5 kg (4.4-10.0 pounds) and kept it off for 10 years had a lower risk for breast cancer than women whose weight remained stable. Losing more weight, 9 kg (about 20 pounds) or more, lowered risk even further.
But other research suggests the opposite. A recent analysis found that people who lost weight within the past 2 years through diet and exercise had a higher risk for a range of cancers compared with those who did not lose weight. Overall, though, the increased risk was quite low.
Whatever the research does, or doesn’t, show about weight and cancer risk, Dr. Basen-Engquist said, it’s important that risk factors, obesity and otherwise, aren’t “used as blame tools.”
“With obesity, behavior certainly plays into it,” she said. “But there are so many influences on our behavior that are socially determined.”
Both Dr. Basen-Engquist and Dr. Meyerhardt said it’s important for clinicians to consider the individual in front of them and for everyone to set realistic expectations.
People with obesity should not feel they have to become thin to be healthier, and no one has to leap from being sedentary to exercising several hours a week.
“We don’t want patients to feel that if they don’t get to a stated goal in a guideline, it’s all for naught,” Dr. Meyerhardt said.
A version of this article appeared on Medscape.com.
Failed IOL Promotes Poor Maternal and Fetal Outcomes for Mothers With Diabetes
Approximately one-quarter of mothers with diabetes failed induction of labor, and this failure was associated with a range of adverse outcomes for mothers and infants, based on data from more than 2,000 individuals.
Uncontrolled diabetes remains a risk factor for cesarean delivery, Ali Alhousseini, MD, of Corewell Health East, Dearborn, Michigan, and colleagues wrote in a study presented at the annual clinical and scientific meeting of the American College of Obstetricians and Gynecologists.
“Identifying and stratifying associated risk factors for failed induction of labor [IOL] may improve counseling and intrapartum care,” the researchers wrote in their abstract.
The researchers reviewed data from 2,172 mothers with diabetes who underwent IOL at a single university medical center between January 2013 and December 2021. They examined a range of maternal characteristics including age, ethnicity, gestational age, medical comorbidities, insulin administration, parity, and health insurance.
A total of 567 mothers with diabetes (26.1%) failed IOL and underwent cesarean delivery.
Overall, failed IOL was significantly associated with nulliparity (P = .0001), as well as with preexisting diabetes (vs gestational diabetes), diabetes controlled with insulin, maternal essential hypertension, preeclampsia, and polyhydramnios (P = .001 for all). Other factors significantly associated with failed IOL included a prenatal diagnosis of fetal growth restriction (P = .008) and placental abnormalities (P = .027).
The neonatal factors of weight, large-for-gestational-age status, head circumference, and height were not significantly associated with failed IOL (P > .05 for all).
As for neonatal outcomes, failed IOL was significantly associated with admission to the neonatal intensive care unit, hyperbilirubinemia, and longer hospital stay (P = .001 for all). Failed IOL was also significantly associated with lower 1-minute Apgar scores (P = .033) but not with lower 5-minute Apgar scores, the researchers noted. No association was noted between failed IOL and neonatal readmission, lower umbilical cord pH value, or maternal ethnicity.
The findings were limited by the retrospective design, but data analysis is ongoing, Dr. Alhousseini said. The researchers are continuing to assess the roles not only of optimal glucose control, but other maternal factors in improving maternal and neonatal outcomes, he said.
Data Add to Awareness of Risk Factors
The current study is important because of the increasing incidence of diabetes and the need to examine associated risk factors in pregnancy, Michael Richley, MD, a maternal fetal medicine physician at the University of Washington, Seattle, said in an interview. “The average age of onset of diabetes is becoming younger and type 2 diabetes in pregnancy is an increasingly common diagnosis,” said Dr. Richley, who was not involved in the study.
The increase in both maternal and neonatal adverse outcomes is expected given the risk factors identified in the study, said Dr. Richley. “The patients with diabetes also were sicker at baseline, with hypertensive disorders, growth restriction, and pregestational diabetes,” he noted.
The study findings support data from previous research, Dr. Richley said. The message to clinicians is that patients with diabetes not only have an increased risk of needing a cesarean delivery but also have an increased risk of poor outcomes if a cesarean delivery is needed, he said.
Although a prospective study would be useful to show causality as opposed to just an association, such a study is challenging in this patient population given the limitations of conducting research on labor and delivery, he said.
The study received no outside funding. The researchers and Dr. Richley had no financial conflicts to disclose.
FROM ACOG 2024
Parental e-Cigarette Use Linked to Atopic Dermatitis Risk in Children
TOPLINE:
Parental e-cigarette use was associated with a significantly higher risk for atopic dermatitis (AD) in children, according to a cross-sectional analysis of national survey data.
METHODOLOGY:
- AD is one of the most common inflammatory conditions in children and is linked to environmental risk factors, such as exposure to secondhand smoke and prenatal exposure to tobacco.
- To address the effect of e-cigarette use on children, researchers conducted a cross-sectional analysis of data from the 2014-2018 National Health Interview Survey, a nationally representative sample of the US population.
- The analysis included 48,637,111 individuals (mean age, 8.4 years), with 6,354,515 (13%) indicating a history of AD (mean age, 8 years).
TAKEAWAY:
- The prevalence of parental e-cigarette use was 18.0% among individuals with AD, compared with 14.4% among those without AD.
- This corresponded to a 24% higher risk for AD associated with parental e-cigarette use (adjusted odds ratio, 1.24; 95% CI, 1.08-1.42); a crude version of this calculation is sketched after this list.
- The association between e-cigarette use and AD in children held regardless of parent’s sex.
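For readers who want to see where such a figure comes from, the unadjusted odds ratio can be recovered from the two prevalences above; the published 1.24 is lower because it adjusts for covariates the crude calculation ignores. A minimal sketch (Python):

```python
def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1 - p)

# Prevalence of parental e-cigarette use, from the survey figures above
p_ad = 0.180     # among children with AD
p_no_ad = 0.144  # among children without AD

crude_or = odds(p_ad) / odds(p_no_ad)
print(f"{crude_or:.2f}")  # ~1.30 unadjusted, vs 1.24 after covariate adjustment
```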
IN PRACTICE:
“Our results suggest that parental e-cigarette use was associated with pediatric AD,” the authors concluded. They noted that the authors of a previous study that associated e-cigarette use with AD in adults postulated that the cause was “the inflammatory state created by” e-cigarettes.
SOURCE:
This study, led by Gun Min Youn, Department of Dermatology, Stanford University School of Medicine, Stanford, California, was published online in JAMA Dermatology.
LIMITATIONS:
The cross-sectional survey design limited the ability to draw causal inferences. Defining e-cigarette use as a single past instance could affect the strength of the findings. Only past-year e-cigarette use was considered. Furthermore, data on pediatric cigarette or e-cigarette use, a potential confounder, were unavailable.
DISCLOSURES:
The study did not disclose funding information. One author reported receiving consultation fees outside the submitted work. No other disclosures were reported.
A version of this article appeared on Medscape.com.
Cortisol Test Confirms HPA Axis Recovery from Steroid Use
TOPLINE:
An early serum cortisol concentration of > 237 nmol/L (> 8.6 μg/dL) has been validated as a safe and useful screening test with 100% specificity for predicting recovery of the hypothalamic-pituitary-adrenal (HPA) axis in patients on tapering regimens of long‐term chronic glucocorticoid therapy (CGT).
METHODOLOGY:
- A retrospective review of 250-µg Synacthen test (SST) results performed in patients on tapering CGT doses from a single-center rheumatology department over 12 months.
- A total of 60 SSTs were performed in 58 patients, all in the morning (between 7 AM and noon) after withholding CGT for 48 hours.
- Peripheral blood was sampled for cortisol at baseline, 30 minutes, and 60 minutes.
- Adrenal insufficiency (AI) was defined as a peak serum cortisol concentration below the diagnostic threshold on the SST.
TAKEAWAY:
- The mean duration of CGT (all prednisolone) was 63 months, prescribed primarily for giant cell arteritis/polymyalgia rheumatica (48%) and inflammatory arthritis (18%), with a mean daily dose of 3.4 mg at the time of SST.
- With the investigators’ previously reported basal serum cortisol concentration of > 237 nmol/L (> 8.6 μg/dL) used to confirm an intact HPA axis, no patient with AI would have been missed, and 37 of the 51 (73%) unnecessary SSTs in euadrenal patients would have been avoided.
- A basal serum cortisol concentration of > 227 nmol/L had a specificity of 100% for predicting passing the SST, while a basal serum cortisol concentration of ≤ 55 nmol/L had a 100% sensitivity for predicting failure (a triage sketch based on these cutoffs follows this list).
- The mean daily prednisolone dose at the time of SST was significantly higher in patients with AI than in those with normal SST results (5.7 vs 2.9 mg; P = .01).
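Taken together, those cutoffs imply a simple triage rule: a sufficiently high basal cortisol confirms recovery, a very low value predicts failure, and anything in between still needs the dynamic test. A minimal sketch of that logic, assuming the study's thresholds (Python; the function name is ours, and cutoffs are assay-dependent):

```python
NMOL_PER_UG_DL = 27.59  # cortisol unit conversion: 237 nmol/L / 27.59 ≈ 8.6 µg/dL

def triage_basal_cortisol(basal_nmol_l: float) -> str:
    """Triage a morning basal cortisol drawn after 48 h off glucocorticoids.

    Thresholds from the study above: > 237 nmol/L predicted an intact HPA
    axis with 100% specificity; <= 55 nmol/L predicted SST failure with
    100% sensitivity. Everything in between needs dynamic testing.
    """
    if basal_nmol_l > 237:
        return "intact HPA axis; SST likely unnecessary"
    if basal_nmol_l <= 55:
        return "SST failure predicted; continue glucocorticoid cover"
    return "indeterminate; proceed to 250-ug Synacthen test"

for value in (300, 150, 40):
    print(value, "->", triage_basal_cortisol(value))
```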
IN PRACTICE:
“This offers a more rapid, convenient, and cost‐effective screening method for patients requiring biochemical assessment of the HPA axis with the potential for significant resource savings without any adverse impact on patient safety,” the authors wrote.
SOURCE:
The study was conducted by Ella Sharma, of the Department of Endocrinology, Royal Victoria Infirmary, Newcastle upon Tyne, UK, and colleagues and published online on May 19, 2024, as a letter in Clinical Endocrinology.
LIMITATIONS:
Not provided.
DISCLOSURES:
Not provided.
A version of this article appeared on Medscape.com.
Maternal Buprenorphine Affects Fetal Breathing
Measures of fetal breathing movement were lower in fetuses of pregnant patients who received buprenorphine, compared with controls, based on data from 177 individuals.
The findings were presented at the annual clinical and scientific meeting of the American College of Obstetricians and Gynecologists by Caroline Bulger, MD, of East Tennessee State University, Johnson City.
Pregnant patients with opioid-use disorder in the community surrounding Johnson City receive medication-assisted therapy with buprenorphine during the prenatal period, Dr. Bulger and colleagues wrote in their abstract. The current prenatal program for substance use disorder was established in 2016 based on patient requests for assistance in lowering their buprenorphine dosages during pregnancy, said senior author Martin E. Olsen, MD, also of East Tennessee State University, in an interview.
“Buprenorphine medication–assisted treatment in pregnancy is associated with long-term effects on childhood development such as smaller neonatal brains, decreased school performance, and low birth weight,” said Dr. Olsen; however, data on the fetal effects of buprenorphine are limited.
The current study was conducted to evaluate a short-term finding of the fetal effects of buprenorphine, Dr. Olsen said.
“This study was performed after obstetric sonographers at our institution noted that biophysical profile [BPP] ultrasound assessments of the fetuses of mothers on buprenorphine took longer than for other patients,” said Dr. Olsen.
The researchers conducted a retrospective chart review of 131 patients who received buprenorphine and 46 who were followed for chronic hypertension and served as high-risk controls. Patients were seen at a single institution between July 1, 2016, and June 30, 2020.
The researchers hypothesized that BPP of fetuses in patients receiving buprenorphine might be different from controls because of the effects of buprenorphine.
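For context on the scoring at issue: the BPP assigns 0 or 2 points to each of five components, with fetal breathing one of them. The sketch below summarizes that standard obstetric scheme from general practice, not from the study abstract, and the function name is ours.

```python
# Minimal sketch of standard biophysical profile (BPP) scoring:
# five components, each scored 0 (absent/abnormal) or 2 (present/normal).
# Criteria summarized from general obstetric practice, not from this study.

def bpp_score(nst_reactive: bool, breathing: bool, movement: bool,
              tone: bool, fluid_adequate: bool) -> int:
    """Return the 10-point BPP score; 'breathing' is the component
    that scored zero more often in buprenorphine-exposed fetuses."""
    components = [nst_reactive, breathing, movement, tone, fluid_adequate]
    return sum(2 for present in components if present)

# A fetus normal on everything except breathing movements scores 8/10.
print(bpp_score(True, False, True, True, True))  # -> 8
```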
Overall, patients who received buprenorphine were more likely to have a fetal breathing score of zero than those who underwent a BPP for hypertension. A significant relationship also emerged between buprenorphine dosage and fetal breathing motion: patients on high-dose buprenorphine were more likely than those on low doses to score zero on the fetal breathing motion assessment (chi-squared P = .04269).
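For readers unfamiliar with the test statistic, the sketch below runs a chi-squared test of independence on a 2 × 2 dose-by-outcome table with SciPy. The counts are invented purely for illustration and are not the study’s data.

```python
# Illustrative chi-squared test of independence on a 2x2 table:
# rows = buprenorphine dose group, columns = fetal breathing score.
# Counts are hypothetical, NOT taken from the study.
from scipy.stats import chi2_contingency

table = [
    # score zero, score nonzero
    [12, 28],   # high-dose group (hypothetical)
    [10, 81],   # low-dose group (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, P = {p:.4f}")
# A P value below .05, as in the study (P = .04269), would indicate that
# the score distribution differs by dose group more than chance predicts.
```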
The takeaway for clinical practice is that clinicians performing BPP ultrasounds on buprenorphine-exposed fetuses can expect that these assessments may take longer on average than assessments of other high-risk patients, said Dr. Olsen. “Additional assessment after a low BPP score is still indicated for these fetuses just as in other high-risk pregnancies,” he said.
The study was limited primarily by the retrospective design, Dr. Olsen said.
Although current treatment guidelines do not emphasize the effects of maternal buprenorphine use on fetal development, these findings support previous research showing effects of buprenorphine on fetal brain structure, the researchers wrote in their abstract. Looking ahead, “We recommend additional study on the maternal buprenorphine medication–assisted treatment dose effects for fetal and neonatal development with attention to such factors as head circumference, birth weight, achievement of developmental milestones, and school performance,” Dr. Olsen said.
“We and others have shown that the lowest effective dose of buprenorphine can lower neonatal abstinence syndrome/neonatal opioid withdrawal syndrome rates,” but data showing an impact of lowest effective dose management on long-term complications of fetal buprenorphine exposure are lacking, he noted.
The study received no outside funding. The researchers had no financial conflicts to disclose.
FROM ACOG 2024
Decision-Making Help for Kids With Disabilities Entering Adulthood
About one in six children (17%) between 3 and 17 years have a disability, which may affect their ability to make decisions when they transition to adulthood.
Typically, at age 18, a young adult assumes legal rights such as the right to make medical decisions (including reproductive decisions), as well as mental health, financial, and education decisions.
Several Options in the Continuum
The American Academy of Pediatrics (AAP) describes a continuum of decision-making for youth with intellectual and developmental disabilities (IDD), ranging from fully autonomous decisions to decisions made by an appointed guardian.
Highlighting an array of options is one way this paper is helpful, said Matthew Siegel, MD, chief of clinical enterprise with the Department of Psychiatry & Behavioral Sciences at Boston Children’s Hospital in Massachusetts. “I suspect that for a lot of practitioners what they’re aware of is guardianship or no guardianship.” These authors highlight that the options are more nuanced, he said.
Pediatricians have widely different ideas about what their role should be in facilitating decision-making in the transition period, he said, so this paper helps clarify what advocacy and discussion are needed.
The paper, written by first author Renee M. Turchi, MD, MPH, and colleagues on behalf of the Council on Children with Disabilities’ Committee on Medical Liability and Risk Management, states that, “The goal should always be the least restrictive decision-making that balances autonomy with safety and supports.”
One Alternative Is Supported Decision-Making
Supported decision-making is one alternative to guardianship. Under that framework, the authors explain, a patient can choose a trusted support person and create an agreement with that person on what kinds of decisions the person needs help with and how much assistance is needed. The individual makes the final decision, not the support person.
The paper also describes the benefits of that approach: “Individuals with IDD who use supported decision-making report increased confidence in themselves and their decision-making, improved decision-making skills, increased engagement with their community, and perceived more control of their lives,” the authors wrote.
Another option: Rather than formally naming a substitute decision-maker, people with IDD might allow a parent or caregiver access to their electronic health record, or allow that person to have independent discussions with their physician.
With guardianship, also called conservatorship in some states, a court requires clear and convincing evidence that the youth is not competent to make his or her own decisions. The court may order evaluations by many professionals, including pediatricians.
State-Specific Legal Information Is Available
Many states have recently enacted laws surrounding supported decision-making and guardianship. The authors reference a national resource center website that details the legislation for each state and points to resources and tools for pediatricians, families, and patients.
“Historically, pediatricians have rarely discussed the legal aspects of transition to adult-oriented services with the youth with IDD and subsequently, their families,” the authors wrote.
Discussions Should Start Early
Ideally, the authors wrote, the discussions about what level of supports might be necessary in the transition to adulthood should start at age 12-14 and include the youth, teachers, parents, and the medical team.
That’s earlier than some of the previous guidance, Dr. Siegel said, and it will be important to evaluate future evidence on the best age to start planning “both from a cognitive development standpoint and from a practicality standpoint.”
The authors point out that the needs for level of support may change and “pediatricians can reevaluate the decision-making arrangement as part of the annual physical/mental examinations to align with the youth’s desires, needs, and decision-making abilities over time.”
The authors and Dr. Siegel report no relevant financial relationships.
FROM PEDIATRICS
Most women can conceive after breast cancer treatment
The findings, presented May 23 in advance of the annual meeting of the American Society of Clinical Oncology (ASCO), represent the most comprehensive look to date at fertility outcomes following treatment for women diagnosed with breast cancer before age 40 (Abstract 1518).
Kimia Sorouri, MD, a research fellow at the Dana-Farber Cancer Institute in Boston, Massachusetts, and her colleagues looked at data from the Young Women’s Breast Cancer Study, a multicenter longitudinal cohort study, for 1213 US and Canadian women (74% non-Hispanic white) who were diagnosed with stages 0-III breast cancer between 2006 and 2016. None of the included patients had metastatic disease, prior hysterectomy, or prior oophorectomy at diagnosis.
During a median 11 years of follow up, 197 of the women reported attempting pregnancy. Of these, 73% reported becoming pregnant, and 65% delivered a live infant a median 4 years after cancer diagnosis. The median age at diagnosis was 32 years, and 28% opted for egg or embryo freezing to preserve fertility. Importantly, 68% received chemotherapy, which can impair fertility, with only a small percentage undergoing ovarian suppression during chemotherapy treatment.
Key predictors of pregnancy or live birth in this study were “financial comfort,” a self-reported measure defined as having money left over to spend after bills are paid (odds ratio [OR], 2.04; 95% CI 1.01-4.12; P = .047); younger age at the time of diagnosis; and undergoing fertility preservation interventions at diagnosis (OR, 2.78; 95% CI 1.29-6.00; P = .009). Chemotherapy and other treatment factors were not seen to be associated with pregnancy or birth outcomes.
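As a quick refresher on how such estimates are read, an odds ratio and its 95% CI can be derived from a 2 × 2 table with the standard log-odds formula. The sketch below uses invented counts, not the study’s data.

```python
# Odds ratio and 95% CI from a 2x2 table (standard log-OR method).
# Counts are hypothetical, NOT the study's data.
import math

a, b = 80, 40   # exposed group (e.g., fertility preservation): events, non-events
c, d = 60, 80   # unexposed group: events, non-events

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)

print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# If the CI excludes 1.0, the association is significant at P < .05,
# the pattern behind reported pairs such as OR 2.78 (95% CI, 1.29-6.00).
```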
“Current research that informs our understanding of the impact of breast cancer treatment on pregnancy and live birth rates is fairly limited,” Dr. Sorouri said during an online press conference announcing the findings. Quality data on fertility outcomes has been limited to studies in certain subgroups, such as women with estrogen receptor–positive breast cancers, she noted, while other studies “have short-term follow-up and critically lack prospective assessment of attempt at conception.”
The new findings show, Dr. Sorouri said, “that in this modern cohort with a heightened awareness of fertility, access to fertility preservation can help to mitigate a portion of the damage from chemotherapy and other agents. Importantly, this highlights the need for increased accessibility of fertility preservation services for women newly diagnosed with breast cancer who are interested in a future pregnancy.”
Commenting on Dr. Sorouri and colleagues’ findings, Julie Gralow, MD, a breast cancer researcher and ASCO’s chief medical officer, stressed that, while younger age at diagnosis and financial comfort were two factors outside the scope of clinical oncology practice, “we can impact fertility preservation prior to treatment.”
She called it “critical” that every patient be informed of the impact of a breast cancer diagnosis and treatment on future fertility, and that all young patients interested in future fertility be offered fertility preservation prior to beginning treatment.
Ann Partridge, MD, of Dana-Farber, said in an interview that the findings reflected a decades-long change in approach. “Twenty years ago when we first started this cohort, people would tell women ‘you can’t get pregnant. It’s too dangerous. You won’t be able to.’ And some indeed aren’t able to, but the majority who are attempting are succeeding, especially if they preserve their eggs or embryos. So even if chemo puts you into menopause or made you subfertile, if you’ve preserved eggs or embryos, we now can mitigate that distressing effect that many cancer patients have suffered from historically. That’s the good news here.”
Nonetheless, Dr. Partridge, an oncologist and the last author of the study, noted, the results reflected success only for women actively attempting pregnancy. “Remember, we’re not including the people who didn’t attempt. There may be some who went into menopause who never banked eggs or embryos, and may never have tried because they went to a doctor who told them they’re not fertile.” Further, she said, not all insurances cover in vitro fertilization for women who have had breast cancer.
The fact that financial comfort was correlated with reproductive success, Dr. Partridge said, speaks to broader issues about access. “It may not be all about insurers. It may be to have the ability, to have the time, the education and the wherewithal to do this right — and about being with doctors who talk about it.”
Dr. Sorouri and colleagues’ study was sponsored by the Breast Cancer Research Foundation and Susan G. Komen. Several co-authors disclosed receiving speaking and/or consulting fees from pharmaceutical companies, and one reported being an employee of GlaxoSmithKline. Dr. Sorouri reported no industry funding, while Dr. Partridge reported research funding from Novartis.
FROM ASCO 2024
New Administration Routes for Adrenaline in Anaphylaxis
PARIS — While anaphylaxis requires immediate adrenaline administration through autoinjection, the use of this treatment is not optimal. Therefore, the development of new adrenaline formulations (such as for intranasal, sublingual, and transcutaneous routes) aims to facilitate the drug’s use and reduce persistent delays in administration by patients and caregivers. An overview of the research was presented at the 19th French-speaking Congress of Allergology.
Anaphylaxis is a severe and potentially fatal immediate hypersensitivity reaction with highly variable and dynamic clinical presentations. It requires prompt recognition for immediate treatment with intramuscular (IM) adrenaline (at the anterolateral aspect of the mid-thigh).
One might think that this reflex is acquired, but in France, while the number of prescribed adrenaline autoinjection (AAI) devices has been increasing for a decade, reaching 965,944 units in 2022, this first-line treatment is underused. Anapen (150, 300, and 500 µg), EpiPen (150 and 300 µg), Jext (150 and 300 µg), and Emerade (150, 300, and 500 µg) are the four products marketed in France in 2024.
“Only 17.3% of individuals presenting to the emergency department in the Lorraine region used it in 2015,” said Catherine Neukirch, MD, a pneumologist at Hôpital Bichat–Claude Bernard in Paris, France, with rates of 11.3% for children and 20.3% for adults.
Anaphylaxis Incidence Increasing
Approximately 0.3% (95% CI, 0.1-0.5) of the population will experience an anaphylaxis episode in their lifetime. Incidence in Europe, across all causes, is estimated between 1.5 and 7.9 cases per 100,000 inhabitants per year. Although anaphylaxis is on the rise, its associated mortality remains low, ranging between 0.05 and 0.51 per million per year for drugs, between 0.03 and 0.32 per million per year for foods, and between 0.09 and 0.13 per million per year for hymenopteran venoms.
Data from the European Anaphylaxis Registry indicate that anaphylaxis manifests rapidly after allergen exposure: 55% of cases occur within 10 minutes and 80% within 30 minutes. In addition, a biphasic reaction, which can occur up to 72 hours after exposure, is observed in < 5% of cases.
While delayed adrenaline use is associated with increased morbidity and mortality, AAI significantly reduces error rates compared with manual treatments involving ampoules, needles, and syringes. It also reduces the associated panic risks. However, there are multiple barriers to adrenaline use. The clinical symptoms of anaphylaxis may be misleading, especially if it occurs without cutaneous and urticarial manifestations but with only acute bronchospasm. It may present as isolated laryngeal edema without digestive involvement, hypotension, or other respiratory problems.
Other limitations to adrenaline use include technical difficulties and the possibility of incorrect administration, the need for appropriate needle sizes for patients with obesity, needle phobia, potential adverse effects of adrenaline injections, failure to carry two autoinjectors, constraints related to storage and bulky transport, as well as the need for training and practice.
“These factors contribute to underuse of adrenaline by patients and caregivers,” said Dr. Neukirch, which results in delays in necessary administration.
Adrenaline Treatment Criteria?
An analysis published in 2023 based on pharmacovigilance data from 30 regional French centers from 1984 to 2022 included 42 reported cases (average age, 33 years; 26% children) of reactions to AAI, which probably is an underestimate. About 40% of AAI uses occurred during anaphylaxis. The remaining 60% were triggered outside of reactions. The main reasons were accidental injections, mainly in the fingers, and cases of not triggering the autoinjector, underlining the importance of patient education.
In 2015, the European Medicines Agency required pharmacological studies for injectable adrenaline on healthy volunteers. These studies include ultrasound measurements of bolus injection, pharmacokinetics (ie, absorption, distribution, metabolism, and excretion), and pharmacodynamics (ie, the effect of the drug and the mechanism of action in the body), with precise evaluation of cardiovascular effects (eg, systolic and diastolic blood pressures and heart rate).
Among the information collected with the different products, ultrasound studies have shown a different localization of the adrenaline bolus (ie, in muscle in patients with normal BMI and mostly in adipose tissue in patients with BMI indicating overweight and obesity). The consequences of this finding are still unknown.
In a study with 500 µg Anapen, women with overweight or obesity showed different pharmacokinetic or pharmacodynamic profiles from those in men with normal weight, with an increase in the area under the curve (0-240 min) and marked changes in the heart rate time curve.
IM administration of 0.5 mg produces rapid pharmacokinetic effects in patients with normal weight, overweight, or obesity, with a delay for the second peak in the latter case. This delay perhaps results from initial local vasoconstriction due to adrenaline.
The early peak plasma concentration occurs at 5-10 minutes for AAI, with a faster speed for Anapen and EpiPen.
Moreover, needle size is not the most important factor. Rather, it is the strength and speed of injection, which can vary depending on the AAI.
Also, the optimal plasma concentration of adrenaline to treat anaphylaxis is not known; studies cannot be conducted during anaphylaxis. In terms of pharmacokinetics, a small series discovered that increased skin or muscle thickness delays the absorption of EpiPen AAI.
Intranasal Adrenaline
To facilitate rapid adrenaline use and convince reluctant patients to carry and use adrenaline, intranasal, sublingual, or transcutaneous forms are under development.
Three intranasal forms of adrenaline are already well advanced, including Neffy from ARS Pharma, epinephrine sprays from Bryn Pharma and Hikma, and Oxero from Oragoo, which contains dry powder.
A comparison of intranasal adrenaline Neffy and AAI shows that the former has satisfactory pharmacokinetic and pharmacodynamic effects.
In a phase 1 randomized crossover study of 42 healthy adults comparing the pharmacokinetic effects of Neffy adrenaline (2 mg) and EpiPen (0.3 mg), as well as IM epinephrine 0.3 mg, several observations were made. For a single dose, the maximum concentration (Cmax) of Neffy was lower than that of EpiPen.
However, with repeated doses administered 10 minutes apart, the Cmax of Neffy was higher than that of EpiPen. At this stage, pharmacodynamic responses to intranasal products are at least comparable with those of approved injectable products.
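To make the pharmacokinetic vocabulary in these comparisons concrete, Cmax, Tmax, and AUC can be read off a sampled concentration-time curve as in the sketch below; the time points and concentrations are invented for illustration and do not come from any of the trials discussed.

```python
# Toy calculation of Cmax, Tmax, and AUC from a concentration-time profile.
# The sampled values are invented for illustration, not trial data.

times = [0, 5, 10, 20, 30, 60, 120]                 # minutes after dosing
conc = [0.0, 0.35, 0.60, 0.48, 0.40, 0.22, 0.08]    # ng/mL (hypothetical)

cmax = max(conc)                    # peak plasma concentration
tmax = times[conc.index(cmax)]      # time at which the peak occurs

# AUC by the trapezoidal rule: sum of trapezoid areas between samples.
auc = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
          for i in range(len(times) - 1))

print(f"Cmax = {cmax} ng/mL at Tmax = {tmax} min; "
      f"AUC(0-120) = {auc:.1f} ng·min/mL")
```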
A comparison of the pharmacodynamic effects, such as systolic and diastolic blood pressures and heart rate, of Neffy adrenaline and AAI concluded that the profile of Neffy is comparable with that of EpiPen and superior to that of IM epinephrine.
In patients with a history of allergic rhinitis, adrenaline Cmax appears to be increased, while time to peak plasma concentration (Tmax) is reduced. Low blood pressure does not prevent Neffy absorption. Neffy is currently under review by the American and European health authorities.
Intranasal absorption of dry powder adrenaline appears to be faster than that of EpiPen, thus offering a clinical advantage in the short therapeutic window for anaphylaxis treatment.
In an open-label trial conducted on 12 adults with seasonal allergic rhinitis without asthma, the pharmacokinetics, pharmacodynamics, and safety of adrenaline were compared between FMXIN002 (1.6 and 3.2 mg), which was administered intranasally with or without nasal allergen challenge, and IM EpiPen 0.3 mg. Pharmacokinetics varied by patient. Nevertheless, nasal FMXIN002 had a shorter Tmax, a doubled Cmax after the allergen challenge peak, and a higher area under the curve in the 8 hours following administration compared with EpiPen. Pharmacodynamic effects comparable with those of EpiPen were noted at 15 minutes to 4 hours after administration. The tolerance was good, with mild and local side effects. The powder seems to deposit slightly better in the nasal cavity. It remains stable for 6 months at a temperature of 40 °C and relative humidity of 75% and for 2 years at a temperature of 25 °C and relative humidity of 60%.
Sublingual Adrenaline Film
AQST-109 is a sublingual film intended to allow rapid administration of a prodrug of adrenaline (epinephrine). The product is the size of a postage stamp, weighs < 30 g, and dissolves on contact with the tongue.
The EPIPHAST II study was a phase 1, multiperiod, crossover study conducted on 24 healthy adults (age, 24-49 years) who were randomly assigned to receive either 12 mg of AQST-109 or 0.3 mg of manual IM adrenaline in the first two periods. All participants received 0.3 mg of EpiPen in the last period.
EpiPen 0.3 mg resulted in a higher Cmax than AQST-109 12 mg. AQST-109 12 mg had the fastest median Tmax of 12 minutes. The areas under the curve of AQST-109 12 mg fell between those of EpiPen 0.3 mg and manual IM adrenaline 0.3 mg.
Early increases in systolic blood pressure, diastolic blood pressure, and heart rate were observed with AQST-109 12 mg. Changes were more pronounced with AQST-109 12 mg despite a higher Cmax with EpiPen 0.3 mg.
Part 3 of the EPIPHAST study evaluated the impact of food exposure (ie, a peanut butter sandwich) on the pharmacokinetics of AQST-109 12 mg in 24 healthy adults. Oral food residues did not significantly affect pharmacodynamic parameters, and no treatment-related adverse events were reported.
Researchers concluded that AQST-109 12 mg absorption would not be altered by “real” situations if used during meals. “These results suggest that the sublingual adrenaline film could be promising in real situations,” said Dr. Neukirch, especially in cases of food allergy with recent ingestion of the allergenic food.
Transcutaneous Adrenaline
A transcutaneous form of adrenaline that uses the Zeneo device developed by Crossject, a company based in Dijon, France, comes in the form of an AAI that requires no needle. This project, funded by the European Union, uses a gas generator to propel the drug at very high speed through the skin in 50 milliseconds. This method allows for extended drug storage.
Dr. Neukirch reported financial relationships with Viatris, Stallergènes, ALK, AstraZeneca, Sanofi, GSK, and Novartis.
This story was translated from the Medscape French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
PARIS — While anaphylaxis requires immediate adrenaline administration through autoinjection, the use of this treatment is not optimal. Therefore, the development of new adrenaline formulations (such as for intranasal, sublingual, and transcutaneous routes) aims to facilitate the drug’s use and reduce persistent delays in administration by patients and caregivers. An overview of the research was presented at the 19th French-speaking Congress of Allergology.
Anaphylaxis is a severe and potentially fatal immediate hypersensitivity reaction with highly variable and dynamic clinical presentations. It requires prompt recognition for immediate treatment with intramuscular (IM) adrenaline (at the anterolateral aspect of the mid-thigh).
One might think that this reflex is acquired, but in France, while the number of prescribed adrenaline autoinjection (AAI) devices has been increasing for a decade, reaching 965,944 units in 2022, this first-line treatment is underused. Anapen (150, 300, and 500 µg), EpiPen (150 and 300 µg), Jext (150 µg and 300 µg), and Emerade (150, 300, and 500 µg) are the four products marketed in France in 2024.
“Only 17.3% of individuals presenting to the emergency department in the Lorraine region used it in 2015,” said Catherine Neukirch, MD, a pneumologist at Hôpital Bichat–Claude Bernard in Paris, France, with rates of 11.3% for children and 20.3% for adults.
Anaphylaxis Incidence Increasing
Approximately 0.3% (95% CI, 0.1-0.5) of the population will experience an anaphylaxis episode in their lifetime. Incidence in Europe, across all causes, is estimated between 1.5 and 7.9 cases per 100,000 inhabitants per year. Although anaphylaxis is on the rise, its associated mortality remains low, ranging between 0.05 and 0.51 per million per year for drugs, between 0.03 and 0.32 per million per year for foods, and between 0.09 and 0.13 per million per year for hymenopteran venoms.
Data from the European Anaphylaxis Registry indicate that anaphylaxis manifests rapidly after allergen exposure: 55% of cases occur within 10 minutes and 80% within 30 minutes. In addition, a biphasic reaction, which can occur up to 72 hours after exposure, is observed in < 5% of cases.
While a delay in adrenaline use is associated with risk for increased morbidity and mortality, AAI significantly reduces error rates compared with manual treatments involving ampoules, needles, and syringes. It also reduces the associated panic risks. However, there are multiple barriers to adrenaline use. The clinical symptoms of anaphylaxis may be misleading, especially if it occurs without cutaneous and urticarial manifestations but with only acute bronchospasm. It may present as isolated laryngeal edema without digestive involvement, hypotension, or other respiratory problems.
Other limitations to adrenaline use include technical difficulties and the possibility of incorrect administration, the need for appropriate needle sizes for patients with obesity, needle phobia, potential adverse effects of adrenaline injections, failure to carry two autoinjectors, constraints related to storage and bulky transport, as well as the need for training and practice.
“These factors contribute to underuse of adrenaline by patients and caregivers,” said Dr. Neukirch, which results in delays in necessary administration.
Adrenaline Treatment Criteria?
An analysis published in 2023 based on pharmacovigilance data from 30 regional French centers from 1984 to 2022 included 42 reported cases (average age, 33 years; 26% children) of reactions to AAI, which probably is an underestimate. About 40% of AAI uses occurred during anaphylaxis. The remaining 60% were triggered outside of reactions. The main reasons were accidental injections, mainly in the fingers, and cases of not triggering the autoinjector, underlining the importance of patient education.
In 2015, the European Medicines Agency required pharmacological studies for injectable adrenaline on healthy volunteers. These studies include ultrasound measurements of bolus injection, pharmacokinetics (ie, absorption, distribution, metabolism, and excretion), and pharmacodynamics (ie, the effect of the drug and the mechanism of action in the body), with precise evaluation of cardiovascular effects (eg, systolic and diastolic blood pressures and heart rate).
Among the information collected with the different products, ultrasound studies have shown a different localization of the adrenaline bolus (ie, in muscle in patients with normal BMI and mostly in adipose tissue in patients with BMI indicating overweight and obesity). The consequences of this finding are still unknown.
In a study with 500 µg Anapen, women with overweight or obesity showed different pharmacokinetic or pharmacodynamic profiles from those in men with normal weight, with an increase in the area under the curve (0-240 min) and marked changes in the heart rate time curve.
IM administration of 0.5 mg produces rapid pharmacokinetic effects in patients with normal weight, overweight, or obesity, with a delay for the second peak in the latter case. This delay perhaps results from initial local vasoconstriction due to adrenaline.
The early peak plasma concentration occurs at 5-10 minutes for AAI, with a faster speed for Anapen and EpiPen.
Moreover, needle size is not the most important factor. Rather, it is the strength and speed of injection, which can vary depending on the AAI.
Also, the optimal plasma concentration of adrenaline to treat anaphylaxis is not known; studies cannot be conducted during anaphylaxis. In terms of pharmacokinetics, a small series discovered that increased skin or muscle thickness delays the absorption of EpiPen AAI.
Intranasal Adrenaline
To facilitate rapid adrenaline use and convince reluctant patients to carry and use adrenaline, intranasal, sublingual, or transcutaneous forms are under development.
Three intranasal forms of adrenaline are already well advanced, including Neffy from ARS Pharma, epinephrine sprays from Bryn Pharma and Hikma, and Oxero from Oragoo, which contains dry powder.
A comparison of intranasal adrenaline Neffy and AAI shows that the former has satisfactory pharmacokinetic and pharmacodynamic effects.
In a phase 1 randomized crossover study of 42 healthy adults comparing the pharmacokinetic effects of Neffy adrenaline (2 mg) and EpiPen (0.3 mg), as well as IM epinephrine 0.3 mg, several observations were made. For a single dose, the maximum concentration (Cmax) of Neffy was lower than that of EpiPen.
However, with repeated doses administered 10 minutes apart, the Cmax of Neffy was higher than that of EpiPen. At this stage, pharmacodynamic responses to intranasal products are at least comparable with those of approved injectable products.
A comparison of the pharmacodynamic effects, such as systolic and diastolic blood pressures and heart rate, of Neffy adrenaline and AAI concluded that the profile of Neffy is comparable with that of EpiPen and superior to that of IM epinephrine.
In patients with a history of allergic rhinitis, adrenaline Cmax appears to be increased, while time to peak plasma concentration (Tmax) is reduced. Low blood pressure does not prevent Neffy absorption. Neffy is currently under review by the American and European health authorities.
Intranasal absorption of dry powder adrenaline appears to be faster than that of EpiPen, thus offering a clinical advantage in the short therapeutic window for anaphylaxis treatment.
In an open-label trial conducted in 12 adults with seasonal allergic rhinitis without asthma, the pharmacokinetics, pharmacodynamics, and safety of adrenaline were compared between intranasal FMXIN002 (1.6 and 3.2 mg), administered with or without a nasal allergen challenge, and IM EpiPen 0.3 mg. Pharmacokinetics varied by patient. Nevertheless, nasal FMXIN002 had a shorter Tmax, a roughly doubled Cmax after allergen challenge, and a higher area under the curve over the 8 hours after administration compared with EpiPen. Pharmacodynamic effects comparable with those of EpiPen were noted at 15 minutes to 4 hours after administration. Tolerability was good, with mild, local side effects. The powder appears to deposit slightly better in the nasal cavity. It remains stable for 6 months at 40 °C and 75% relative humidity and for 2 years at 25 °C and 60% relative humidity.
Sublingual Adrenaline Film
AQST-109 is a sublingual film intended to allow rapid administration of adrenaline (epinephrine) via a prodrug. The product is the size of a postage stamp, weighs < 30 mg, and dissolves on contact with the tongue.
The EPIPHAST II study was a phase 1, multiperiod, crossover study in 24 healthy adults (age, 24-49 years) who were randomly assigned to receive either AQST-109 12 mg or manual IM adrenaline 0.3 mg in the first two periods. All participants received EpiPen 0.3 mg in the last period.
EpiPen 0.3 mg resulted in a higher Cmax than AQST-109 12 mg. AQST-109 12 mg had the fastest median Tmax of 12 minutes. The areas under the curve of AQST-109 12 mg fell between those of EpiPen 0.3 mg and manual IM adrenaline 0.3 mg.
Early increases in systolic blood pressure, diastolic blood pressure, and heart rate were observed with AQST-109 12 mg; these changes were more pronounced with AQST-109 12 mg even though EpiPen 0.3 mg produced the higher Cmax.
Part 3 of the EPIPHAST study evaluated the impact of food exposure (ie, a peanut butter sandwich) on the pharmacokinetics of AQST-109 12 mg in 24 healthy adults. Oral food residues did not significantly affect pharmacodynamic parameters, and no treatment-related adverse events were reported.
Researchers concluded that AQST-109 12 mg absorption would not be altered by “real” situations if used during meals. “These results suggest that the sublingual adrenaline film could be promising in real situations,” said Dr. Neukirch, especially in cases of food allergy with recent ingestion of the allergenic food.
Transcutaneous Adrenaline
A transcutaneous form of adrenaline uses the Zeneo device, developed by Crossject, a company based in Dijon, France; it comes in the form of an AAI that requires no needle. This project, funded by the European Union, uses a gas generator to propel the drug through the skin at very high speed in 50 milliseconds. The method also allows for extended drug storage.
Dr. Neukirch reported financial relationships with Viatris, Stallergènes, ALK, AstraZeneca, Sanofi, GSK, and Novartis.
This story was translated from the Medscape French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
PCPs May Have a New Tool to Help Identify Autism in Young Children
Incorporating eye-tracking biomarkers into pediatric autism assessments may make identifying the condition easier, according to new findings published in JAMA Network Open.
Researchers created an artificial intelligence–based tool to help primary care clinicians and pediatricians spot potential cases of the neurological condition, according to Brandon Keehn, PhD, associate professor in the Department of Speech, Language, and Hearing Sciences at Purdue University in West Lafayette, Indiana, and an author of the study.
Most primary care clinicians do not receive specialized training in identifying autism, and around a third diagnose the condition with uncertainty, according to Dr. Keehn. The tool helps clinicians by combining their diagnosis and self-reported level of certainty with eye-tracking biomarkers. A clinical psychologist also assessed the children, either confirming or refuting the earlier results.
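The publication does not spell out the tool's internal model, so the following is only a sketch of the general design described here: keep a confident clinician's call and defer to the pooled biomarkers otherwise. All names, thresholds, and weights are hypothetical.

```python
def combined_autism_call(clinician_dx, certainty, biomarker_scores,
                         certainty_cutoff=0.7, biomarker_cutoff=0.5):
    """Hypothetical fusion rule: keep a confident clinician's call;
    otherwise defer to the pooled eye-tracking biomarkers."""
    if certainty >= certainty_cutoff:
        return clinician_dx
    # Average the six biomarker scores (each scaled to 0-1) and flag
    # autism when the pooled score crosses the cutoff.
    return sum(biomarker_scores) / len(biomarker_scores) >= biomarker_cutoff

# An uncertain clinician (certainty 0.4) is overridden by the biomarkers here.
print(combined_autism_call(False, 0.4, [0.8, 0.6, 0.7, 0.2, 0.9, 0.5]))
```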
The tool produced the same diagnosis as the psychologist in 90% of cases. When children were assessed using the eye-tracking biomarkers alone, the diagnosis aligned with the psychologist's 77% of the time.
“This is the first step in demonstrating both that eye-tracking biomarkers are sensitive to autism and whether or not these biomarkers provide extra clinical information for primary care physicians to more accurately diagnose autism,” Dr. Keehn told this news organization.
The study took place between 2019 and 2022 and included 146 children between 14 and 48 months old who were treated at seven primary care practices in Indiana. Dr. Keehn and colleagues asked primary care clinicians to rate their level of certainty in their diagnosis.
During the biomarker test, toddlers watched cartoons while researchers tracked their eye movements. Six biomarkers included in the test were based on previous research linking eye movements to autism, according to Dr. Keehn.
These included whether toddlers looked more at images of people or geometric patterns and the speed and size of pupil dilation when exposed to bright light.
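To make the pupil measure concrete, here is a minimal sketch of how the size, timing, and speed of the pupillary response to a light flash could be extracted from one recorded trial. The processing details are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

def pupil_light_reflex(time_s, diameter_mm, flash_at_s):
    """Simple pupillary light reflex metrics from one trial:
    constriction amplitude (mm), time to peak constriction (s),
    and mean constriction velocity (mm/s)."""
    baseline = diameter_mm[time_s < flash_at_s].mean()
    post = diameter_mm[time_s >= flash_at_s]
    t_post = time_s[time_s >= flash_at_s]
    amplitude = baseline - post.min()
    time_to_peak = t_post[post.argmin()] - flash_at_s
    velocity = amplitude / time_to_peak if time_to_peak > 0 else 0.0
    return amplitude, time_to_peak, velocity

# Toy trace: a 4 mm baseline pupil constricting to 3 mm after a flash at t = 1 s.
t = np.linspace(0, 3, 31)
d = np.where(t < 1, 4.0, 4.0 - np.clip(t - 1, 0, 0.5) * 2.0)
print(pupil_light_reflex(t, d, flash_at_s=1.0))
```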
Most toddlers produced a positive result for autism in only one biomarker test. Dr. Keehn said this confirms that children should be tested for a variety of biomarkers because each patient’s condition manifests differently.
Dr. Keehn said his team is still a few steps away from determining how the model would work in a real clinical setting and that they are planning more research with a larger study population.
Alice Kuo, MD, a pediatrician specializing in autism at the University of California, Los Angeles (UCLA), said primary care clinicians should feel comfortable making an autism diagnosis.
“Any tool that helps them to do that can be useful, since wait times for a specialist can take years,” Dr. Kuo, also the director of the Autism Intervention Research Network on Physical Health at UCLA, said.
However, Dr. Kuo said she is concerned about the cases that were falsely identified as positive or negative.
“To be told your kid is autistic when he’s not, or to be told your kid is not when he clinically is, has huge ramifications,” she said.
The study was funded by the National Institute of Mental Health, the Riley Children’s Foundation, and the Indiana Clinical and Translational Sciences Institute. Dr. Keehn reported payments for workshops on the use of the Autism Diagnostic Observation Schedule.
A version of this article appeared on Medscape.com.
Fluoride, Water, and Kids’ Brains: It’s Complicated
This transcript has been edited for clarity.
I recently looked back at my folder full of these medical study commentaries, this weekly video series we call Impact Factor, and realized that I’ve been doing this for a long time. More than 400 articles, believe it or not.
I’ve learned a lot in that time — about medicine, of course — but also about how people react to certain topics. If you’ve been with me this whole time, or even for just a chunk of it, you’ll know that I tend to take a measured approach to most topics. No one study is ever truly definitive, after all. But regardless of how even-keeled I may be, there are some topics that I just know in advance are going to be a bit divisive: studies about gun control; studies about vitamin D; and, of course, studies about fluoride.
Shall We Shake This Hornet’s Nest?
The fluoridation of the US water system began in 1945 with the goal of reducing cavities in the population. The CDC named water fluoridation one of the 10 great public health achievements of the 20th century, along with such inarguable achievements as the recognition of tobacco as a health hazard.
But fluoridation has never been without its detractors. One problem is that the spectrum of beliefs about the potential harm of fluoridation is huge. On one end, you have science-based concerns such as the recognition that excessive fluoride intake can cause fluorosis and stain tooth enamel. I’ll note that the EPA regulates fluoride levels — there is a fair amount of naturally occurring fluoride in water tables around the world — to prevent this. And, of course, on the other end of the spectrum, you have beliefs that are essentially conspiracy theories: “They” add fluoride to the water supply to control us.
The challenge for me is that when one “side” of a scientific debate includes the crazy theories, it can be hard to discuss that whole spectrum, since there are those who will see evidence of any adverse fluoride effect as confirmation that the conspiracy theory is true.
I can’t help this. So I’ll just say this up front: I am about to tell you about a study that shows some potential risk from fluoride exposure. I will tell you up front that there are some significant caveats to the study that call the results into question. And I will tell you up front that no one is controlling your mind, or my mind, with fluoride; they do it with social media.
Let’s Dive Into These Shark-Infested, Fluoridated Waters
We’re talking about the study, “Maternal Urinary Fluoride and Child Neurobehavior at Age 36 Months,” which appears in JAMA Network Open.
It’s a study of 229 mother-child pairs from the Los Angeles area. The moms had their urinary fluoride level measured once before 30 weeks of gestation. A neurobehavioral battery called the Preschool Child Behavior Checklist was administered to the children at age 36 months.
The main thing you’ll hear about this study — in headlines, Facebook posts, and manifestos locked in drawers somewhere — is the primary result: A 0.68-mg/L increase in urinary fluoride in the mothers, about 25 percentile points, was associated with a doubling of the risk for neurobehavioral problems in their kids when they were 3 years old.
Yikes.
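For scale, and assuming the usual log-linear dose-response specification (my assumption, not a detail given in the commentary), a doubling of risk per 0.68 mg/L implies

$$\mathrm{OR}(\Delta) = 2^{\Delta/0.68}, \qquad \mathrm{OR}(1\ \mathrm{mg/L}) = 2^{1/0.68} \approx 2.8.$$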
But this is not a randomized trial. Researchers didn’t randomly assign some women to have high fluoride intake and some women to have low fluoride intake. They knew that other factors that might lead to neurobehavioral problems could also lead to higher fluoride intake. They represent these factors in what’s known as a directed acyclic graph, as seen here, and account for them statistically using a regression equation.
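For readers who want to see what "account for them statistically using a regression equation" looks like in practice, here is a generic sketch on synthetic data. The covariates and coefficients are invented; the study's own model and variables are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a cohort; the real covariates and their
# effects are not reproduced here.
rng = np.random.default_rng(0)
n = 229
df = pd.DataFrame({
    "fluoride": rng.gamma(2.0, 0.4, n),        # maternal urinary fluoride, mg/L
    "mom_age": rng.normal(30.0, 5.0, n),       # example confounders only
    "edu_years": rng.integers(10, 21, n).astype(float),
})
logit_p = -1.5 + 1.0 * df["fluoride"] - 0.05 * df["edu_years"]
df["problems"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

# Adjusted logistic regression: the fluoride coefficient is the log-odds
# of neurobehavioral problems per 1 mg/L, net of the other covariates.
fit = smf.logit("problems ~ fluoride + mom_age + edu_years", data=df).fit(disp=0)
print(np.exp(fit.params["fluoride"]))          # adjusted odds ratio per 1 mg/L
```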
Not represented here are neighborhood characteristics. Los Angeles does not have uniformly fluoridated water, and neurobehavioral problems in kids are strongly linked to stressors in their environments. Fluoride level could be an innocent bystander.
I’m really just describing the classic issue of correlation versus causation here, the bane of all observational research and — let’s be honest — a bit of a crutch that allows us to disregard the results of studies we don’t like, provided the study wasn’t a randomized trial.
But I have a deeper issue with this study than the old “failure to adjust for relevant confounders” thing, as important as that is.
The exposure of interest in this study is maternal urinary fluoride, as measured in a spot sample. It’s not often that I get to go deep on nephrology in this space, but let’s think about that for a second. Let’s assume for a moment that fluoride is toxic to the developing fetal brain, the main concern raised by the results of the study. How would that work? Presumably, mom would be ingesting fluoride from various sources (like the water supply), and that fluoride would get into her blood, and from her blood across the placenta to the baby’s blood, and into the baby’s brain.
Is Urinary Fluoride a Good Measure of Blood Fluoride?
It’s not great. Empirically, we have data that tell us that levels of urine fluoride are not all that similar to levels of serum fluoride. In 2014, a study investigated the correlation between urine and serum fluoride in a cohort of 60 schoolchildren and found a correlation coefficient of around 0.5.
Why isn’t urine fluoride a great proxy for serum fluoride? The most obvious reason is the urine concentration. Human urine concentration can range from about 50 to 1200 mOsm/kg (a 24-fold difference) depending on hydration status. Over the course of 24 hours, for example, the amount of fluoride you put out in your urine may be fairly stable in relation to intake, but a spot urine sample is wildly variable. The authors know this, of course, and so they divide the measured urine fluoride by the specific gravity of the urine to give a sort of “dilution adjusted” value. That’s what is actually used in this study. But specific gravity is, itself, an imperfect measure of how dilute the urine is.
This is something that comes up a lot in urinary biomarker research and it’s not that hard to get around. The best thing would be to just measure blood levels of fluoride. The second best option is 24-hour fluoride excretion. After that, the next best thing would be to adjust the spot concentration by other markers of urinary dilution — creatinine or osmolality — as sensitivity analyses. Any of these approaches would lend credence to the results of the study.
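For the record, these dilution adjustments are simple ratios. The commentary describes a plain division by specific gravity; the more common convention scales by (SG_ref - 1)/(SG - 1), as in this sketch, where the reference values are general conventions rather than anything taken from the study.

```python
def sg_adjusted(conc, sg, sg_ref=1.020):
    """Scale a spot-urine concentration to a reference specific gravity."""
    return conc * (sg_ref - 1.0) / (sg - 1.0)

def creatinine_adjusted(conc_mg_per_l, creatinine_g_per_l):
    """Express the analyte per gram of urinary creatinine instead."""
    return conc_mg_per_l / creatinine_g_per_l

# Example: 0.9 mg/L fluoride in dilute urine (SG 1.008) corresponds to
# about 2.25 mg/L at the reference SG of 1.020.
print(sg_adjusted(0.9, 1.008))
```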
Urinary fluoride excretion is pH dependent. The more acidic the urine, the less fluoride is excreted. Many things — including, importantly, diet — affect urine pH. And it is not a stretch to think that diet may also affect the developing fetus. Neither urine pH nor dietary habits were accounted for in this study.
So, here we are. We have an observational study suggesting a harm that may be associated with fluoride. There may be a causal link here, in which case we need further studies to weigh the harm against the more well-established public health benefit. Or, this is all correlation — an illusion created by the limitations of observational data, and the unique challenges of estimating intake from a single urine sample. In other words, this study has something for everyone, fluoride boosters and skeptics alike. Let the arguments begin. But, if possible, leave me out of it.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.